opensource.com


3 surprising things Linux sysadmins can do with systemd

alansmithee | Thu, 03/23/2023

When it first started out, there was a lot of press about systemd and its ability to speed up boot time. That feature had a mostly universal appeal (it's less important to those who don't reboot), so in many ways, that's the reputation it still has today. And while it's true that systemd is the thing that launches services in parallel during startup, there's a lot more to it than that. Here are three things you may not have realized systemd could do but should be taking advantage of. Get more tips from our new downloadable eBook, A pragmatic guide to systemd.

1. Simplify Linux ps

If you've ever used the ps or even just the top command, then you know that your computer is running hundreds of processes at any given moment. Sometimes, that's exactly the kind of information you need in order to understand what your computer, or its users, are up to. Other times, all you really need is a general overview.

The systemd-cgtop command provides a simple view of your computer's load based on the cgroups (control groups) that tasks have been arranged into. Control groups are important to modern Linux, serving as the support structures underneath containers and Kubernetes (which in turn are why the cloud scales the way it does), but they're also useful constructs on your home PC. For instance, from the output of systemd-cgtop, you can see the load of your user processes as opposed to system processes:

Control Group                 Proc+  %CPU   Memory  Input/s  Output/s
/                               183   5.0     1.6G       0B      3.0M
user.slice                        4   2.8     1.1G       0B    174.7K
user.slice/user-1000.slice        4   2.8   968.2M       0B    174.7K
system.slice                     65   2.2     1.5G       0B      2.8M

You can also view just your userspace processes, or just your userspace processes and kernel threads.
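If you want to script or narrow that view, systemd-cgtop takes a few helpful options. The invocations below are a minimal sketch based on standard systemd-cgtop flags (check your man page, since available options vary slightly between systemd versions):

# One non-interactive snapshot, two levels deep, ordered by memory
$ systemd-cgtop -b -n 1 --depth=2 -m

# Watch only your own user slice
$ systemd-cgtop user.slice

# Count just userspace processes (-P), or userspace processes plus kernel threads (-k)
$ systemd-cgtop -P
$ systemd-cgtop -k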

This isn't a replacement for top or ps by any means, but it's an additional view into your system from a different and unique angle. And it can be vital when running containers, because containers use cgroups.

2. Linux cron

Cron is a classic component of Linux. When you want to schedule something to happen on a regular basis, you use cron. It's reliable and pretty well integrated into your system.

The problem is, cron doesn't understand that some computers get shut down. If you have a cronjob scheduled for midnight, but you turn your computer off at 23:59 every day, then your cronjob never runs. There's no facility for cron to detect that there was a missed job overnight.

As an answer to that problem, there's the excellent anacron, but that's not quite as integrated as cron. There's a lot of setup you have to do to get anacron running.

A second alternative is a systemd timer. Like cron, it's already built in and ready to go. You have to write a unit file, which is definitely more lines than a one-line crontab entry, but it's also pretty simple. For instance, here's a unit file to run an imaginary backup script 30 minutes after startup, but only once a day. This ensures that my computer gets backed up, and prevents it from trying to back up more than once daily.

[Unit]
Description=Backup
Requires=myBackup.service

[Timer]
OnBootSec=30min
OnUnitActiveSec=1d

[Install]
WantedBy=timers.target

You can, of course, intervene and prompt a job to run manually with systemctl start. Thanks to the OnUnitActiveSec directive, systemd doesn't attempt to re-run a job you've just activated by hand.
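To put the timer into service, save it as a .timer unit next to the service it triggers. Here's a minimal sketch, assuming the unit above is saved as myBackup.timer (a hypothetical name chosen to match the myBackup.service it requires) under your user configuration:

$ mkdir -p ~/.config/systemd/user
# myBackup.timer and myBackup.service both live in ~/.config/systemd/user
$ systemctl --user daemon-reload
$ systemctl --user enable --now myBackup.timer

# See when the timer last fired and when it fires next
$ systemctl --user list-timers myBackup.timer

# Run the backup right now, without waiting for the timer
$ systemctl --user start myBackup.service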

3. Run Linux containers

Containers make starting up a complex service really easy. You can run a Mattermost or Discourse server in mere minutes. The hard part, in some cases, is managing and monitoring the containers once you have them running. Podman makes it easy to manage them, but what do you use to manage Podman? Well, you can use systemd.

Podman has a built-in command to generate unit files so your containers can be managed and monitored by systemd:

$ podman generate systemd --new --files --name example_pod

All you have to do then is start the service:

$ systemctl --user start pod-example_pod.service
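If you're running rootless containers, the generated unit files need to live where your user instance of systemd can find them. This is a sketch of the usual workflow, assuming the files generated above are in your current directory (the container-*.service glob assumes the default names Podman gives each container's unit):

$ mkdir -p ~/.config/systemd/user
$ cp pod-example_pod.service container-*.service ~/.config/systemd/user/
$ systemctl --user daemon-reload
$ systemctl --user enable --now pod-example_pod.service

# Optional: keep user services running after you log out
$ loginctl enable-linger $USER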

As with any other service on your computer, systemd ensures that your pod runs no matter what. It logs problems, which you can view with journalctl along with your other essential logs, and you can monitor its activity within cgroups using systemd-cgtop.

It's no Kubernetes platform, but for one or two containers that you just want to have available on a reliable and predictable basis, Podman and systemd are an amazing pair.

Download the systemd eBook

There's a lot more to systemd, and you can learn the basics, along with lots of useful and pragmatic tips, from author David Both in his new complimentary pragmatic guide to systemd.

It's not just for making your computer boot faster. Download our new systemd eBook for Linux sysadmins for more tips.


How to encourage positive online communication in your open source community

ultimike | Thu, 03/23/2023

Threaded online conversations are a relatively new form of communication that can improve knowledge transfer and availability, but they can also stray from the original intent. Online technical conversations in open source communities, whether on Slack or one of the several open source alternatives, see these same benefits and drawbacks.

Say a community member posts a question or shares an idea to start a conversation. As in any conversation, sometimes things can get off track. While not all diversions from the prompt are unhelpful, there are times when a comment can be unproductive—and sometimes even hurtful.

The Drupal community is like most other open source communities, in that we have many online conversations happening at any given time, in a variety of places. Sometimes, when a community member flags an online comment as hurtful, the Drupal Community Working Group (CWG) is asked to step in and mediate the situation. The CWG is responsible for maintaining the health of the community. Often, the solution is as simple as reminding the author of the comment of the Code of Conduct.

In 2020, the CWG began looking into how they could crowdsource this activity in a way that would be predictable and non-confrontational. The group decided to author several nudges: prewritten, formatted responses that community members could copy and paste into an online conversation to get conversations back on track.

The Drupal community currently has five different nudges depending on the situation. It is up to community members to select one from this list:

  • Inclusive language, gendered terms
  • Inclusive language, ableist terms
  • Gatekeeping knowledge
  • Cultural differences
  • Escalating emotions

For example, the inclusive language, ableist terms nudge contains this message:

This discussion appears to include the use of ableist language in a comment. Ableist language can be harmful to our community because it can devalue challenges experienced by people with disabilities.

For more information, please refer to Drupal’s Values and Principles about treating each other with dignity and respect.

This comment is provided as a service (currently being tested) of the Drupal Community Health Team as part of a project to encourage all participants to engage in positive discourse. For more information, please visit https://www.drupal.org/project/drupal_cwg/issues/3129687


Currently, using one of the nudges is a manual copy-paste process, but the group is discussing the possibility of providing tools for easier use. We provide both formatted (for forum and issue queues) and unformatted (Slack) versions of each nudge. The CWG is also working on adding a sixth nudge for unhelpful or inauthentic comments. This nudge is aimed at discouraging users who add comments to a thread solely to gain a contribution credit on the issue.

Over the past two years that nudges have been available, the CWG has not fielded any complaints related to their use. While the number of conflicts between community members escalated to the CWG has declined during this period, it is difficult to attribute this solely to nudges. Other efforts have been made to improve community health (not to mention outside factors). Nevertheless, the CWG feels that nudges have been a net positive for the community and continues to assess, improve, and encourage their use. In a blog post to the community announcing their general availability, the CWG wrote:

To continue to grow a healthy community, we all must work under the assumption that no one intentionally uses language to hurt others. Even so, despite our best efforts we sometimes still use words or phrases that are discouraging, harmful, or offensive to others. We are all human beings who make mistakes, but as members of a shared community, it's our responsibility to lift each other up and encourage the best in each other.

Prewritten nudges for various situations are useful prompts for members of any community to keep conversations productive and encouraging—and do so in a friendly way!

The Drupal community uses nudges to keep conversations productive and inclusive.


8 steps to refurbish an old computer with Linux

howtech | Wed, 03/22/2023

We live in a remarkable era. It wasn't so long ago we were all chained to the "upgrade treadmill," forced to buy expensive new personal computers every few years.

Today, with the benefit of open source software, you can break out of that cycle. One way is to refurbish old computers and keep them in service. This article tells you how.

1. Grab an old PC

Maybe you have an old computer lying unused in the basement or garage. Why not put it to use?

Or you can get an old machine from a friend, family member, or Craigslist ad. Many electronics recycling centers will let you poke around and take a discarded machine if it strikes your fancy. Be sure to grab more than one if you can, as you may need parts from a couple of abandoned PCs to build one good one.

Look at the stickers on the front of the machines to make sure you're selecting good refurbishing candidates. Items with Windows 7 and 8 logos run Linux quite well. Extended support for Windows 8.1 ended this January, so I'm seeing a lot of those getting dumped.

Many of these Windows computers offer perfectly good hardware. They're only being trashed due to planned obsolescence because they can't run Windows 11. They run open source software just fine.

2. Identify and clean everything

Before you open up your "new" machine to see what you've got, be sure to ground yourself by touching something metal. Even a shock so slight you don't feel it can destroy delicate circuitry.

You'll instantly see if any parts are missing. Many people take out their disks or sometimes the memory before recycling a computer. You'll either have to acquire more than a single box to cover this, or you'll need to buy a part or two to make it whole.

Before proceeding further, it's important to give the machine a thorough cleaning. Pay special attention to the CPU complex, the fans, and all surfaces. Remember that you can't rub electronics without risking damage, so use compressed air for cleaning.

3. Ensure all hardware works

You'll want to verify that all hardware works prior to installing any software. Don't skimp on the testing! It's a huge waste of your time to discover later, for example, that your computer has a transient memory error because you ran only a short RAM test before moving on to the next steps. I find it convenient to run time-consuming tests overnight.

Most computers have hardware-specific diagnostics built in. You usually access these either through the boot-time UEFI/BIOS panels or by pressing a function key while booting. If your machine doesn't include testing tools, try Ultimate Boot CD, which provides tons of useful testing utilities.

Be sure you test all components thoroughly:

  1. Memory
  2. Disk
  3. CPU and Motherboard
  4. Peripherals (USB ports, sound, microphone, keyboard, display, fans, etc.)

If you find problems, download my free Quick Guide to Fixing Hardware. That plus some searching online enables you to fix just about anything.
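As a concrete starting point, here are a couple of tests you can run from any Linux live USB. This is only a sketch, assuming the smartmontools and memtester packages are installed and that /dev/sda is the disk you want to check (substitute your own device name):

# Start a long SMART self-test on the disk, then review the results once it finishes
$ sudo smartctl -t long /dev/sda
$ sudo smartctl -a /dev/sda

# Lock and exercise about 2 GB of RAM for three passes
$ sudo memtester 2048M 3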

4. Prepare the disk

You've assessed your hardware and have gotten it into good working order. If your computer came with a hard disk drive (HDD), the next step is to ready that for use.

You need to completely wipe the disk because it could contain illegally obtained movies, music, or software. To thoroughly wipe an HDD, run a tool like DBAN. After running that, you can rest assured the disk is completely clean.
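DBAN boots from its own media, but if you'd rather wipe the drive from a running Linux system, the coreutils shred command is a common alternative. The one-pass overwrite sketched below is destructive and irreversible, so triple-check the device name (/dev/sdX is a placeholder) before running it:

# Overwrite the entire drive once with random data (this destroys everything on it)
$ sudo shred -v -n 1 /dev/sdX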

If you have a solid state disk (SSD), the situation is a bit trickier. Disk-wipe programs designed to cleanse hard disks don't work with SSDs. You need a specialized secure erase program for an SSD.

Some computers come with a secure erase utility in their UEFI/BIOS. All you have to do is access the boot configuration panels to run it.

The other option is the website of the disk manufacturer. Many offer free downloads for secure erase utilities for their SSDs.

Unfortunately, some vendors don't provide a secure erase utility for some of their consumer drives, while others supply only a Windows executable. For an SSD, Parted Magic's secure erase function is the best option.
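If you'd rather stay on the command line, the same ATA secure erase that Parted Magic performs can usually be issued with hdparm. This is only a sketch, assuming a SATA SSD that supports the ATA security feature set and reports "not frozen" (some machines need a suspend/resume cycle first); it permanently erases the drive, and /dev/sdX is a placeholder for your device:

# Confirm the drive supports the security feature set and is "not frozen"
$ sudo hdparm -I /dev/sdX

# Set a temporary password, then issue the secure erase
$ sudo hdparm --user-master u --security-set-pass p /dev/sdX
$ sudo hdparm --user-master u --security-erase p /dev/sdX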

5. Booting, data storage, and backups

Your disk strategy for your refurbished computer must address three needs: booting, data storage, and backups.

A few years ago, if your refurbishing candidate contained a disk, it was always a hard drive. You'd wipe it with DBAN, then install your favorite Linux distribution, and use it as both your boot and storage device. Problem solved.

Today's technology offers better options. These eliminate the slow hard disk access that was previously one of the downsides of using older equipment.

One option is to buy one of the new low-end SSDs that have become available. These now offer the SATA and external USB interfaces that work with mature computers.

Prices have plummeted. I recently bought a 480GB SATA SSD for $25. That's so inexpensive that, even if your old computer came with a hard drive included, you might prefer to buy a new SSD anyway. It boots and accesses data so much faster.

The lightweight 2.5" SSDs also solve the mounting dilemmas one sometimes faced with old desktops. With a single screw you can attach them almost anywhere. No more messing with rails, cages, and all the other goofy proprietary parts companies used to mount their heavy 3.5" hard drives.

An alternative to an SSD is to boot off a USB memory stick. Thumb drives now offer enough space to host any operating system you prefer, while leaving some storage space for your data. Beyond speed, you gain flexibility by keeping your system on a portable device.

So consider installing your operating system to a fast SSD or USB and booting and running it from that.

What about other drives? I like to use any hard drive that came with the computer as a backup disk for my boot SSD. Or employ it as mass storage.

I usually remove the optical drives you find in old desktops. Since USB sticks are faster and hold more data, few people use optical discs anymore. Most now stream their films, music, and software instead of collecting them on optical media.

Removing the optical drive frees up an extra set of disk connectors. It also opens up lots of space in the cabinet and improves air flow. This can make a big difference if you're dealing with small footprint desktops with slimline or mini-tower cases.

Finally, take a few minutes to decide on your backup strategy. You'll need to back up two separate things: your data and the operating system.

Will you back up to a second drive inside the PC, a detachable storage device, or cloud services? Your decision helps determine whether you'll need a second disk in your refurbished computer.

6. Select and install software

Different people have different needs that drive their software selection. Here are some general guidelines.

If your computer has an Intel i-series processor and at least 4 GB of memory, it can comfortably run nearly any Linux distribution with any desktop environment (DE).

With between two and four gigabytes of memory, install a Linux distribution with a lightweight interface. This is because high-end display graphics are a big consumer of memory resources. I've found that Linux distros with a DE like XFCE, LXDE, or LXQt work well.

If you only have a gigabyte of memory, go for an "ultra-light" Linux distribution. This should probably also be your choice if you have an old dual-core CPU or equivalent.

I've used both Puppy Linux and antiX with great results on such minimal hardware. Both employ lightweight window managers for their user interface instead of full desktop environments, and both come bundled with apps selected specifically to minimize resource use.

7. Browse the web efficiently

Web pages have grown dramatically in the past five years. Over half the computing resources many popular websites require are now consumed by advertisements and trackers, so when web surfing, block all those ads and trackers. If you can off-load ad blocking from your browser to your VPN, that's ideal. And don't let auto-play videos run without your explicit permission.

Look around to see what browser works best for your equipment. Some are designed with a multi-threading philosophy, which is great if your PC can support it. Others try to minimize overall resource usage. Many people aren't aware that there are quite a few capable yet minimalist Linux browsers available. In the end, pick the browser that best matches both your equipment and your web surfing style.

8. Have fun

Whether you want to make use of an old computer sitting in your basement, help the environment by extending the computer life cycle, or just find a free computer, refurbishing is a worthy goal.

Anyone can succeed at this. Beyond investing your time, the cost is minimal. You're sure to learn a bit while having fun along the way. Please share your own refurbishing tips with everyone in the comments section.

A step-by-step guide to refurbishing an old computer to keep it in service.


Why your open source project needs a content strategy

emilyo | Wed, 03/22/2023

If you search for content strategy in your favorite search engine, I bet you'll find it's a term more strongly associated with marketing content than with technical content. However, a technical content strategy is a powerful way to align stakeholders around content goals for your open source project. In this article, I explore the benefits of technical content strategy and how having one can improve the user and contributor experience of your community projects.

When developing a content strategy, you should consider your goals. The goals differ depending on the user. For the marketing team, the goal of content strategy is to attract and connect with existing and potential customers by using content. Marketing content strategists aim to engage customers and develop relationships with the brand.

The goal of technical content strategists is to guide users with technical content that helps them achieve their goals. It should provide them with just enough information to successfully complete their task.

Creating a content strategy

So how do you create a content strategy that helps you achieve your goal? You can do this by having someone on your project take the role of content strategist. Their task is to document what user content is created, where it is published, how users can find it, and how it can be maintained, published, and retired. The content strategy should be available where contributors can find it easily.

Content types and publication locations

The first step to creating content is to get to know the project's audience. Identifying users is best done with all project stakeholders contributing, so there is a shared understanding of who the users are and what their goals are. A tip for open source content strategies is to consider your contributor personas as well as your end-user consumer personas.

A good content strategy is grounded in meeting the user's needs. The project's content should not tell users everything the content creator knows about something. The content should tell the user just enough to complete a task. When the personas are identified and documented, the strategist considers what types of content help these personas be successful. For example, can the user needs be met completely with microcopy in the user interface, or do they need more detailed documentation? Is the contributor onboarding workflow best demonstrated in a video or a blog with screenshots?

While considering what content types to create, the strategist also looks at where the content should be published so your personas can easily find it. The strategist needs to consider how content creators should progressively disclose information if it is not possible to keep the user in their context. For example, if the user is struggling to understand a log file, you can link them to more information on the project's documentation website.

The strategy should give guidance to help decisions about what types of content might best solve the user's problem. The content creator should be challenged to ask themselves what content type best meets the user's needs in the moment. Do they need a new documentation article on the website? Could the user friction point be avoided with a clear error or log message, a better UI label, or other content type? You should make clear that sometimes the answer to a problem isn't always to create more content.

Content reviews and retirement

Now that you have a strategy for what types of content you want and where to publish them, you need to consider governance. The first part of this process is to decide what types of reviews your content requires before publishing. For example, does it require a content plan review, subject matter expert review, editorial review, peer author review, or copy review? You should also decide how reviews and approvals are tracked.

The second aspect of governance is to decide on a schedule for retirement or archival of content. The strategist should document how content is reviewed for retirement in the future. You should decide if content needs to be retired annually or before every new version release. You should also consider if the content needs to be accessible in some format for users using older versions.

If you are creating a content strategy for an existing project, the chances are high that your project already has some content. As part of the creation process, the content strategist should audit this content, and consider if it is still current and useful. If it is out of date, it should be retired or archived.

A content strategy is beneficial for everyone

Now that you have a content strategy for your project, you should see how it benefits your users, contributors, and your project as a whole.

Project end users

At the heart of the content strategy is the audience. The strategy is centered on the personas interacting with the project. It considers how you can provide them with easily findable information in a consumable format that helps them complete their goals. End users benefit from a content experience that is built around their needs. It should also be self-service so they can solve problems independently.

Contributors

Content consumers, just like end users, benefit from self-service content. New contributors to the project benefit from content designed to onboard them to the project quickly and with ease. The experienced contributor persona gets content that helps them learn about new features of the project. They can also get help with more technically challenging areas. Contributor personas benefit from having accessible reference information. This information can describe the interfaces and features that are available to them to use, build on, and use to interact with the product or service.

The contributors to your project are also the people creating the content that your users consume. Content strategy can help them to understand and feel empathy for user personas, their goals, and use cases. Giving contributors a common understanding of the user's content needs and the types of content that satisfies them supports the creation of a consistent content experience.

Creating a strategy helps all content creators easily understand and align with the content vision. It keeps them focused on creating high-value content that reduces user friction.

Project

In an ideal world, your project would have all the resources needed to create the ideal content experience for your users as envisioned in your strategy. Unfortunately, we live in the real world with conflicting priorities and resource-constrained projects. The good news is that a user-centered content strategy gives the team a shared vision of the content experience. This strategy helps build a content foundation that the project can iterate with each release. It also helps the team make more informed decisions about content.

Your project also benefits from accessible documentation that better serves your users. Your content experience helps users recognize and realize the value of what you have created.

Implement a content strategy

Your content strategy should be a living artifact, guiding content decisions for the project. With this in mind, it should be revisited frequently and tweaked to reflect what is working or not working for your users. Keeping it current enhances your content experience and improves its effectiveness in guiding your users to success.

I believe that the practice of content strategy should be more widely adopted in the technical world as it is a powerful tool. It can help you create a better experience for all of your users. The experience should consider each user's needs, workflow, pain points, and emotions. This helps projects deliver the right content in the right place at the right time.

Explore the benefits of technical content strategy and how having one can improve the user and contributor experience of your open source community projects.


A 5-minute tour of the Fediverse

murph | Tue, 03/21/2023

People want to communicate over the internet as easily as they do in real life, with similar protections but, potentially, farther reach. In other words, people want to be able to chat with a group of other people who aren't physically in the same location, and still maintain some control over who claims ownership of the conversation. In today's world, of course, a lot of companies have a lot to say about who owns the data you send back and forth over the world wide web. Most companies seem to feel they have the right to govern the way you communicate, how many people your message reaches, and so on. Open source, luckily, doesn't need to own your social life, and so appropriately it's open source developers who are delivering a social network that belongs, first and foremost, to you.

The "Fediverse" (a portmanteau of "federated" and "universe") is a collection of protocols, servers, and users. Together, these form networks that can communicate with one another. Users can exchange short messages, blog-style posts, music, and videos over these networks. Content you post is federated, meaning that once one network is aware of your content, it can pass that content to another network, which passes it to another, and so on.

Most platforms are run by a single company or organization, a single silo where your data is trapped. The only way to share with others is to have them join that service.

Federation allows users of different services to inter-operate with one another without creating an account for each shared resource.

Admins for each service instance can block other instances in case of egregious issues. Users can likewise block users or entire instances to improve their own experience.

Examples of Fediverse platforms

Mastodon is a Fediverse platform that has gotten a lot of attention lately, and it's focused on microblogging (similar to Twitter). Mastodon is only one component of the Fediverse, though. There's much, much more.

  • Microblogging: Mastodon, Pleroma, Misskey
  • Blogging: Write.as, Read.as
  • Video hosting: Peertube
  • Audio hosting: Funkwhale
  • Image hosting: Pixelfed
  • Link aggregator: Lemmy
  • Event planning: mobilizon, gettogether.community
History of the Fediverse

In 2008, Evan Prodromou created a microblogging service called identi.ca using the OStatus protocol and the StatusNet server software. A few years later, he changed his service to use a new protocol, called pump.io. He released the OStatus protocol to the Free Software Foundation, where it was incorporated into GNU social. In this form, the Fediverse continued along for several years.

In March 2016, Eugen Rochko (Gargron) created Mastodon, which used GNU social with an interface similar to a popular Twitter client called TweetDeck. This gained some popularity.

(Image: Robert Martinez, CC BY-SA)

In 2018, a new protocol called ActivityPub was accepted as a standardized protocol by the W3C. Most Fediverse platforms have adopted it. It was authored by Evan Prodromou, Christine Lemmer-Weber, and others, and it expanded upon the previous services to provide a better and more flexible protocol.

What does the Fediverse look like?

The Fediverse, being made of any application using the ActivityPub protocol, is pretty diverse in appearance. As you might imagine, a microblogging platform has different requirements than a video sharing service.

It can be intimidating to wander into the great unknown, though. Here are some screenshots of my favorite federated services:

The Mastodon web client has both a simplified view and an advanced view. The simplified default view shows a single column of the Home feed, with options on the right to view more.

(Screenshot: Bob Murphy, CC BY-SA 4.0)

The Advanced Web Interface, shown below, has the home timeline, local timeline, federated timeline, as well as a user's profile. When users first start, the easier one-column view is the default.

(Screenshot: Bob Murphy, CC BY-SA 4.0)

Pixelfed has an interface focused around displaying images and videos:

(Screenshot: Bob Murphy, CC BY-SA 4.0)

Peertube is for sharing videos:

(Screenshot: Bob Murphy, CC BY-SA 4.0)

Mobilizon is an event planning site, with plans for Fediverse integration:

(Screenshot: Bob Murphy, CC BY-SA 4.0)

Switch to open source social

Ready to start? Check out fediverse.info for a nice video explanation and a subject-based way to find (self-selected) other users.

Go to fedi.tips for a comprehensive guide on how to get started, how to migrate your data, and more.

Mastodon has several great entry points:

For help deciding which instance to join (assuming you don't want to spin up your own just yet), visit fediverse.party/en/portal/servers.

Are you a data nerd? Visit the-federation.info for stats, monitoring service, and a data-driven look at the known Fediverse.

Get federated

The Fediverse is a way to use social media in an individualized way, either by choosing an instance with a community that suits your needs, or by running your own server and making it exactly the way you want. It avoids the advertising, algorithms, and other unpleasantries that plague many social networks.

If you are looking for a community that better suits your needs than the big silos, take a look: Mastodon and the Fediverse may be a good fit for you. Get federated today.

You can find me at @murph@hackers.town on the Fediverse.

A whirlwind tour of all the connected sites that form the world of open source social networks.


Assess security risks in your open source project with Scorecard

snaveen | Tue, 03/21/2023

Software supply chain attacks are becoming increasingly common, and attackers are targeting vulnerabilities in dependencies early in the supply chain to amplify the impact of their attacks. Dependency security is very much in the spotlight. It’s important to stay informed about the software projects you rely upon. But when you’re a software developer, you’re likely using a lot of code from lots of different sources. It’s an intimidating prospect to try to keep up with all the code you include in your own project. That’s where the OpenSSF Scorecard comes in.

The OpenSSF’s Scorecard project is an automated tool that assesses a software project’s security practices and risks. According to a recent report by Sonatype, a Scorecard score was one of the best indicators of whether a project had known vulnerabilities. Adopting Scorecard is a great first step to understanding the reliability of the software you use and improving your software supply chain security.

Scorecard is a set of benchmarks that allows you to quickly assess the risk associated with a code project based on best security practices. The aggregated project score, ranging from 0 to 10, provides an indication of how seriously a project appears to take security. This is critical for identifying vulnerable points in your supply chain. A dependency that doesn’t meet your own internal security standards may be the weakest link in your software.

Examining the individual scores for each of the 19 different Scorecard metrics tells you whether a project’s maintainers follow the practices that are most important to you. Does the project require code review when contributors make changes? Are branches protected against unauthorized deletion or changes? Are dependencies pinned, so that compromised version updates cannot be pushed without review? The Scorecard’s granularity in scoring individual best practices is similar to a good restaurant review that answers the question, “do I want to eat here?” Moreover, Scorecard provides project maintainers with a to-do list of actionable steps to improve security.

Open Source Insights

You can use Scorecard to evaluate someone else’s software, or you can use it to improve your own.

To see a project’s score quickly, you can visit Open Source Insights. This site uses Scorecard data to report on the health of dependencies. For anything not covered on Open Source Insights, you can use the Scorecard command-line utility to scan any project on GitHub, or you can run Scorecard locally:

$ scorecard --local . --show-details --format json | jq .

You can run Scorecard on your Git server or on local development machines and trigger it to run with a Git hook.
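To point Scorecard at a remote repository instead of a local checkout, pass --repo. Here's a rough sketch; the CLI reads a GitHub personal access token from an environment variable, and the project README documents the exact name (commonly GITHUB_AUTH_TOKEN):

$ export GITHUB_AUTH_TOKEN=<your GitHub token>
$ scorecard --repo=github.com/ossf/scorecard --show-details

# Limit the run to the checks you care about most
$ scorecard --repo=github.com/ossf/scorecard --checks=Branch-Protection,Pinned-Dependencies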

GitHub Action

If your code is on GitHub, you can add the GitHub Scorecard Action to your repository. The GitHub Action runs a Scorecard scan after any repository change, so you get immediate feedback if a PR causes a regression in your project’s security. The results provide remediation tips and an indication of severity, enabling you to raise your score and secure your project.
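The workflow below is a minimal sketch of what wiring up the Scorecard Action can look like. The input names and pinned versions are assumptions modeled on typical ossf/scorecard-action examples, so treat the action's README as the authoritative reference:

name: Scorecard analysis
on:
  push:
    branches: [ main ]

permissions: read-all

jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
      id-token: write
    steps:
      - uses: actions/checkout@v3
        with:
          persist-credentials: false
      # Pin to the release (or commit SHA) the scorecard-action README currently recommends
      - uses: ossf/scorecard-action@v2.1.2
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
      # Upload the results so they appear in the repository's code scanning alerts
      - uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: results.sarif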

(Screenshot: Naveen Srinivasan, CC BY-SA 4.0)

Scorecard API

The Scorecard API is a powerful tool that allows you to assess the rigor of a large number of open source projects quickly and easily. With this API, you can check the scores of over 1.25 million GitHub repositories that are scanned weekly. The API provides a wealth of information about the security practices of each project, allowing you to quickly identify vulnerabilities and take action to protect your software supply chain. This data can also be used to automate the process of judging software, making it easy to ensure that your software is always secure and up to date. Whether you’re a project owner or a consumer of open source software, the Scorecard API is an essential tool for ensuring the security and reliability of your code.
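Querying the API is a single HTTP GET. This sketch assumes the public endpoint at api.securityscorecards.dev and uses the Scorecard repository itself as the example project:

# Fetch the aggregate score from the latest weekly scan
$ curl -s https://api.securityscorecards.dev/projects/github.com/ossf/scorecard | jq '.score'

# Or inspect the individual check results
$ curl -s https://api.securityscorecards.dev/projects/github.com/ossf/scorecard | jq '.checks[] | {name, score}'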

When you’ve made progress in improving your score, don’t forget to add a badge to showcase your hard work.

Currently, the OpenSSF Scorecard is becoming widely adopted, and as one of its developers, I’m excited about the future. If you try it out, don’t hesitate to contact us through the contact section of the repository and share your feedback.

Join the Scorecard crowd

The Scorecard crowd is growing, and many users are already benefiting from the tool. According to Chris Aniszczyk, CTO of the Cloud Native Computing Foundation, “CNCF uses Scorecards in a variety of its projects to improve security practices across the cloud native ecosystem.”

OpenSSF Scorecard is an automated and practical tool that enables you to assess the security of open source software and take steps to improve your software supply chain security. It’s an essential tool for ensuring that the software you’re using is safe and reliable.

OpenSSF Scorecard helps to ensure your open source software is safe and reliable.


Create accessible websites with Drupal

neerajskydiver | Mon, 03/20/2023

As the world becomes increasingly digital, it’s more important than ever to ensure that websites are accessible to everyone. Accessibility is about designing websites that can be used by people with disabilities, such as visual or hearing impairments, as well as those who rely on assistive technology like screen readers. In this article, I’ll explore recommendations for creating accessible websites with Drupal, a popular open source content management system.

Why accessibility is important

First, consider why accessibility is important. According to the World Health Organization, over 1 billion people worldwide live with some form of disability. In the United States alone, 26% of adults have some form of disability. Ensuring that websites are accessible is not only a moral imperative, it’s also a legal requirement. In the US, websites must comply with the Americans with Disabilities Act (ADA) and Section 508 of the Rehabilitation Act, which sets standards for accessibility in federal agencies.

4 tips for creating accessible websites with Drupal

Here are some tips for creating accessible websites with Drupal:

  1. Choose accessible themes and modules: When selecting themes and modules for your Drupal website, it’s important to choose those designed with accessibility in mind. The Drupal community has created a number of themes and modules that are specifically designed for accessibility. You can also use tools like the Web Accessibility Evaluation Tool (WAVE) to test the accessibility of themes and modules before you install them.
  2. Design for keyboard navigation: Many people with disabilities rely on keyboard navigation to access websites. To ensure that your Drupal website can be navigated using a keyboard, you should make sure that all interactive elements are reachable with a keyboard and that the order in which elements are accessed with the keyboard makes sense. You can use the Drupal Accessibility module to test your website’s keyboard navigation.
  3. Use ARIA attributes: Accessible Rich Internet Applications (ARIA) is a set of attributes that can be added to HTML elements to make them more accessible. ARIA attributes can be used to provide additional information to assistive technology, such as screen readers. For example, you can use ARIA attributes to describe the purpose of a button or a link (see the short example after this list). Drupal has built-in support for ARIA attributes.
  4. Test for accessibility compliance: To ensure that your Drupal website is accessible, test it for compliance with accessibility standards like the Web Content Accessibility Guidelines (WCAG). There are a number of tools available for testing accessibility compliance, such as Accessibility Insights for Web.
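To illustrate the ARIA point above, here's a small hand-written HTML sketch (not Drupal-generated markup) showing aria-label giving assistive technology an accessible name for a button and a link whose visible text alone is ambiguous:

<!-- Icon-only button: aria-label provides the accessible name read by screen readers -->
<button type="button" aria-label="Close navigation menu">
  <span aria-hidden="true">&times;</span>
</button>

<!-- "Read more" alone is ambiguous, so aria-label adds the missing context -->
<a href="/accessibility-report" aria-label="Read more about our accessibility report">Read more</a>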

Examples of accessible websites using Drupal

Several organizations have successfully implemented accessible websites using Drupal. Here are two of my favorite.

  1. University of Colorado Boulder: The University of Colorado Boulder used Drupal to redesign its website with accessibility in mind. They used Drupal’s built-in accessibility features, as well as custom modules, to ensure that their website is compliant with accessibility standards. As a result, they saw a significant increase in traffic and engagement from users with disabilities.

  2. Connecticut Children’s Medical Center: Connecticut Children’s Medical Center used Drupal to create an accessible website for patients and their families. They used Drupal’s built-in accessibility features, as well as custom modules, to provide features like keyboard navigation and ARIA attributes. The website has been praised for its accessibility and has won several awards.

Access for all

Creating accessible websites is essential for ensuring that everyone can access digital content. Drupal has a number of features and modules that can help make websites more accessible, including built-in accessibility features, themes and modules designed for accessibility, and support for ARIA attributes. By implementing these recommendations, you can create an accessible website that provides a better user experience for all users.

Use the open source Drupal CMS to create accessible websites that provide open access to everyone.


Develop on Kubernetes with open source tools

rberrelleza | Mon, 03/20/2023

Over the last five years, a massive shift in how applications get deployed has occurred. It’s gone from self-hosted infrastructure to the world of the cloud and Kubernetes clusters. This change in deployment practices brought a lot of new things to the world of developers, including containers, cloud provider configuration, container orchestration, and more. There’s been a shift away from coding monoliths towards cloud-native applications consisting of multiple microservices.

While application deployment has advanced, the workflows and tooling for development have largely remained stagnant. They didn’t adapt completely or feel “native” to this brave new world of cloud-native applications. This can mean an unpleasant developer experience, involving a massive loss in developer productivity.

But there’s a better way. What if you could seamlessly integrate Kubernetes and unlimited cloud resources with your favorite local development tools?

The current state of cloud-native development

Imagine that you’re building a cloud-native application that includes a Postgres database in a managed application platform, a data set, and three different microservices.

Normally, this would involve the following steps:

  1. Open a ticket to get your IT team to provision a DB in your corporate AWS account.
  2. Go through documentation to find where to get a copy of last week’s DB dump from your staging environment (you are not using prod data in dev, right?)
  3. Figure out how to install and run service one on your local machine
  4. Figure out how to install and run service two on your local machine
  5. Figure out how to install and run service three on your local machine

And that’s just to get started. Once you’ve made your code changes, you then have to go through these steps to test them in a realistic environment:

  1. Create a Git branch
  2. Commit your changes
  3. Figure out a meaningful commit message
  4. Push your changes
  5. Wait your turn in the CI queue
  6. CI builds your artifacts
  7. CI deploys your application
  8. You finally validate your changes

I’ve worked with teams where this process takes anything from a few minutes to several hours. But as a developer, waiting even a few minutes to see whether my code works was a terrible experience. It was slow, frustrating, and made me dread making complex changes.

Simplify your cloud-native development workflow with Crossplane and Okteto

Crossplane is an open source project that connects your Kubernetes cluster to external, non-Kubernetes resources and allows platform teams to build a custom Kubernetes API to consume those resources. This lets you do something like kubectl apply -f db.yaml to create a database in any cloud provider, and it enables your DevOps or IT team to give you access to cloud infrastructure without having to create accounts, distribute passwords, or manually limit what you can or can't do. It's self-service heaven.
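To make that concrete, here's a hypothetical claim against a custom API a platform team might define with Crossplane. The group, kind, and fields below (database.example.org, PostgreSQLInstance, storageGB) are illustrative assumptions modeled on Crossplane's documentation examples, not a fixed schema:

# db.yaml -- a developer-facing claim; Crossplane turns it into real cloud resources
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: my-db
  namespace: default
spec:
  parameters:
    storageGB: 20
  writeConnectionSecretToRef:
    name: my-db-conn

Applying it is the same kubectl apply -f db.yaml mentioned above, and the connection details land in the my-db-conn secret for your services to consume.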

The Okteto CLI is an open source tool that enables you to build, develop, and debug cloud native applications directly in any Kubernetes cluster. Instead of writing code, building, and then deploying in Kubernetes to see your changes, you simply run okteto up, and your code changes are synchronized in real time. At the same time, your application is hot-reloaded in the container. It’s a fast inner loop for cloud-native applications.

On their own, each of these tools is very useful, and I recommend you try them both. The Crossplane and Okteto projects enable you to build a great developer experience for you and your team, making building cloud-native applications easier, faster, and joyful.

Here’s the example I mentioned in the previous section, but instead of a traditional setup, imagine you’re using Crossplane and Okteto:

  1. You type okteto up
  2. Okteto deploys your services in Kubernetes while Crossplane provisions your database (and data!)
  3. Okteto synchronizes your code changes and enables hot-reloading in all your services


At this point, you have a live environment in Kubernetes, just for you. You saved a ton of time by not having to go through IT, figuring out local dependencies, and remembering the commands needed to run each service. And because everything is defined as code, it means that everyone in your team can get their environment in exactly the same way. No degree in cloud infrastructure required.

But there’s one more thing. Every time you make a code change, Okteto automatically refreshes your services without requiring you to commit code. There’s no waiting for artifacts to build, no redeploying your application, or going through lengthy CI queues. You can write code, save the file, and see your changes running live in Kubernetes in less than a second.

How’s that for a fast cloud-native development experience?

Get into the cloud

If you’re building applications meant to run in Kubernetes, why are you not developing in Kubernetes?

Using Crossplane and Okteto together gives your team a fast cloud-native development workflow. By introducing Crossplane and Okteto into your team:

  • Everyone on your team can spin up a fully-configured environment by running a single command
  • Your cloud development environment spans Kubernetes-based workloads, as well as cloud services
  • Your team can share a single Kubernetes cluster instead of having to spin up one cluster on every developer machine, CI pipeline, and so on
  • Your development environment looks a lot like your production environment
  • You don’t have to train every developer on Kubernetes, containers, cloud providers, and so on.

Just type okteto up, and you’re developing within seconds!

Use Crossplane and Okteto for cloud-native development in a matter of seconds.


How I got my first job in tech and helped others do the same

discombobulateme | Sat, 03/18/2023

Two years ago, I got an interview with Sauce Labs when they opened an internship in the Open Source Program Office (OSPO). There was a lot of competition, and I didn’t have the usual technical background you might think a tech company would be looking for. I was stumbling out of a career in the arts, having taken a series of technical courses to learn as much Python and JavaScript as I could manage. And I was determined not to squander the chance I had at an interview working in open source, which had been the gateway for my newfound career path.

It was in the PyLadies of Berlin community that I met Eli Flores, a mentor and friend who ultimately referred me for the interview. I would probably not have had a chance for an interview in Sauce Labs if it hadn’t been for Eli.

My CV was bad.

I was trying to assert technical skills I didn’t have, and trying to emulate what I thought an interviewer for the position would want to read. Of course, the interview selection process is difficult, too. Somebody has to sift through stacks of paper to find someone with the right skills, somebody who fits into the required role, while simultaneously hoping for someone to bring a unique perspective to the organization. On the one hand, you offer a chance to interview, trusting the judgment of someone you trust. On the other hand, you may end up having clones of the people around you.

This is where referral programs shine the most. And this was the story of how I got my first job in tech.

Was a referral enough? Many would consider that they’d done their good deed for the year. But not Eli.

Eli was the first female software engineer to be hired by Sauce Labs in Germany. By the time I arrived, there were three of us: Eli, myself, and Elizabeth, a junior hired one year before. Based on her own struggles, Eli kept an eye on me, invited me for constant check-ins and provided me with practical information about creating my career path based on what the company would consider a check list. She didn’t just share a link and walk away. She explained to me what it meant, and some “traps” that were built in to the system. Leadership, at the time, hadn’t been trained to recognize their biases, and that had affected Eli’s career path.

Besides that, she was the one putting together a formal document explaining to the ones with the power to make decisions why they needed to give me a junior position at the end of my internship. She gathered information among my peers, found out who had hiring power, prepared them months before my contract ended, and gave me the insight I needed to defend my position.

I did my part.

When things looked uncertain about my contract renewal, I asked a friend and mentor what to do, and what was expected. I asked others who’d been in my place recently. I built a document measuring my progress along the months, ensuring that my achievements clearly intersected with the company’s interpretation of the engineering career path. With that, I could demonstrate that Eli was right: They had every reason to keep me, not according to subjective feelings, but with objective metrics.

Defining my role

There was still a big problem, though. Sauce wanted to keep me, but they didn’t know what to do with me. Junior roles require guidance, and the progressive collection of knowledge. I’d found a passion for the Open Source Program Office, where I could actively collaborate with the open source community. But an OSPO may be one of the most complex departments in a company. It gathers open source and business understanding, and it requires autonomy to make connections between business needs and the needs of open source. My peers were mostly staff engineers, contributing to open source projects critical to the business, and those are complex contributions.

One of my peers, Christian Bromann, was also seeking to grow his managerial skills, and so he took me under his wing. We started having regular 1-on-1 sessions, as we discussed what it meant to be doing open source in a business setting. He invited me to get closer to the foundations and projects he was part of, and we did several paired programming sessions to help me understand what mattered most to engineers tasked with meeting specific requirements. He unapologetically placed a chair at the company’s table for me, integrating me into the business, and so my role became clear and defined.

I had help from colleagues in various other departments to stay and grow as a professional. They showed me all the other things I didn’t know about the corporate world, including the single most important thing I didn’t know existed in business: the ways we were actually working to make lives better. We had diversity, equity, and inclusion (DEI) groups, environmental groups, employee resource groups, informal mentorship, and cross-department support. The best thing about Sauce Labs is our people. They are incredibly smart and passionate humans, from whom I learn lessons daily.

A short time later, I decided it was time for me to give back.

I looked back and saw all the people who came before me and helped me land a job I enjoy, one that critically improved my life. I urgently felt the need to bring another chair to this table for someone else. I started digging into how a for-profit organization could run a fellowship program.

A fellowship program in a for-profit organization

I was now formally occupying a role that bridged the OSPO and the Community departments. My main task was developer relations, focused on open source communities (I know, it’s a dream job!).

The imbalance between contribution to and consumption of open source, especially in infrastructure (which business depends upon), is always a risk for the ecosystem. So the question is: what do a company and an open source project have in common?

The answer is humans.

There are many legal issues that make it hard for a for-profit company to run a fellowship program, and they differ from country to country because labor laws do. Germany has a lot of protections in place for workers. As my human resources department told me: “If it smells like a job, it is a job.” That usually means taxes and expenses, and of course cost is always a major factor when launching a new program.

An internship implies you are training someone to be hired after the training period, and therefore requires a pre-approved budget with a year’s salary accounted for. A fellowship, by contrast, is a looser contract, closer to a scholarship, and spans only a specific amount of time. That makes it a great fit for an open source project, and for similar initiatives like Google Summer of Code and Outreachy.

The model I was proposing was focused on the humans. I wanted to facilitate entry into the field for aspiring local technologists. I’d gone through similar programs myself, and I knew how frustrating they could be. They’re competitive, and to have a hope of being selected you had to commit to months of unpaid work prior to the application process.

By creating several small local initiatives, I believed the whole open source ecosystem could benefit. I felt that lowering the barriers to entry, by being less competitive and making the application process easier, would surely bring more people in, especially those unable to commit to months of unpaid work.

Fellowship

The Open Source Community Fellowship is a six-month paid program that connects for-profit organizations with open source projects to foster diversity in contribution and governance in open source.

Having employees as mentors lowers the cost of the program and brings huge value to the company, because it trains employees to be better mentors to others. Several studies show the benefits of formal and informal mentorship within companies, including a sense of belonging, and it tends to help retain talent. Many companies say employees are expected to have mentorship skills in order to reach senior levels, but it’s a skill that needs to be put into practice. Giving employees two hours a week to build this skill costs very little work for a lot of long-term benefit.

The open source project a business connects with needs to be critical to the business. If you’re going to pay a certain number of people to work for six months exclusively on a project, there needs to be an obvious benefit from that expenditure. I encourage fellowships to be interdisciplinary programs, because most open source projects need help with documentation, translation, design, and community support.

And yes, a fellowship should be six months, no less. Programs that offer only three months, maybe with a stipend, aren’t enough for proper onboarding and commitment. The maintainers of tomorrow need to be integrated into the communities of today, and that takes time.

Lastly, yes, it has to be a paid program. We need sponsorship, not just mentorship. Mentorship helps you grow your network, but we all have bills to pay. Paying fellows a salary allows them to truly commit to the project.

Sauce Labs is sponsoring the program for the first time; it started in December 2022 with five fellows across the USA. We hope this becomes a program that exemplifies the soul of the free software movement, so you can fork it, modify it, and redistribute it.

You’ve got the power

We’re often faced with the question, “What can I do?” Instead of feeling overwhelmed by difficulties that will always exist, acknowledge all the power you have in your current situation. Here are some ideas based on my own story:

  • Become a community organizer. No groups nearby? Create your own, and others will follow. Support is needed.
  • Become a mentor. Join initiatives or create a formal or informal program at your company.
  • Pay attention to your colleagues, and proactively offer your help. Even with a steady job, you still need help to grow. Use your privilege to get all the voices heard in your meetings.
  • Adopt a fellowship program to call your own. It’s a replicable model, easy to implement, and it brings innumerable benefits to the open source ecosystem.

There’s always something we can do to make the world around us a little better, and you are an important piece of that.

I wouldn't be where I am today without my mentors. Now, I have my dream job in open source.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How being open at work results in happy customers

Fri, 03/17/2023 - 15:03
How being open at work results in happy customers katgaines Fri, 03/17/2023 - 03:03

Every interaction we have with another person is influenced by our emotions. Those emotions have ripple effects, especially when it comes to how you do business. A happy customer patronizes your business more, recommends it to family and friends, writes a positive review, and ultimately spends more money with you than if they'd been disappointed. The most basic rule of good customer service follows from this: If something isn't going as expected, work to make it right (within reason), and you'll save the relationship.

In tech, you can honor this in a few ways. If you listen to customer feedback, create products they'll find useful and intuitive, and nurture those positive associations with your project, then you'll do well. But there's an often overlooked component of your customer's emotional perception of your business, and that's the customer support team.

Customer support team

The interactions handled by a support team carry a high emotional charge for the customer. Software needs to work, and it needs to work now.

Software faces a unique challenge when it comes to how a customer-facing team builds a relationship: it's primarily a virtual interaction. For in-person customer care, an employee wields the superpower of eye contact, a strong emotional influence. Facial expressions push us to interact with more empathy than, say, a voice over the phone or an email response.

When that's not possible, though, shifting the emotional tone to a calm one can be challenging. It's easy for a customer to carry a natural bias against online support. Maybe they've had a bad experience with heavily automated support in the past. There are plenty of badly configured chatbots, unnavigable phone menus, and dispassionate robotic voices to add fuel to the fire when emotions are already high. A customer may have talked to a support agent who's miserable at work and therefore apathetic about the outcome. The customer carries these experiences into their emotional approach when asking for help. This can create stress for the agent who picks up their ticket, and the vicious cycle repeats.

Because of the high-stakes, emotional nature of customer support (CS), your business has an opportunity. Corral these big emotions through the people who have the most access to them. The key to doing this successfully is to remember the ripple effect: A customer service agent with the necessary tools and knowledge at their fingertips is a happy one, and a happy customer service agent has better conversations with customers. You can set yourself apart from competitors by creating an empowered, knowledgeable, and happy customer service team. How is this done?

Preparing for success

If you’re a leader in customer support, or a stakeholder elsewhere in the organization (engineering, product, and so on) who works with support a lot, you can work in key areas to make the lives of your support agents a little easier:

Create visibility

As a customer support agent, you need data about the customer you're helping. You need to know the systems your customer is using, and the products you're meant to support. It's crucial to have visibility into other teams in the organization, because they have that kind of data. You need to know who to ask for help when a problem arises, what the known issues are, what's being worked on already, and so on.

Siloed departments are a common major barrier to achieving visibility across teams. This can be made worse by tools and systems that don't connect departments, such as a spreadsheet directory or filing issues in an internal tracking tool. When this is the case, the customer service department can't get timely information on the status of an issue, and the engineering department can't get a good feel for how customers are experiencing the issue.

If your customer service team is given visibility into the complexity faced by your engineering teams, it's easier to clearly articulate the state of issues to customers and stakeholders. Customer service teams can create visibility for engineering, too. Crucial information about problems can come from your customers. When engineering has visibility into customer issues, they're better equipped to prioritize for customer needs.

Everyone works hard to prevent customers from being affected by issues, but that's not always realistic. Use the data your customers give your customer service team about what's wrong, and empower your customer service agents to become part of an incident response process rather than just reacting to it.

Make difficult moments easy

Customer support is a difficult job. If you have never worked in customer service, I'm giving you some homework: shadow your customer support team so you can understand where friction happens. It's a great way to get to know who your customers really are, by seeing them in their highest emotional moments, and seeing how your team navigates that. Customer service means all the questions coming your way, few of the answers at your fingertips, manual tasks to complete, and not enough people to share the load.

Make the job easy for customer service where you can. It will pay off. Maybe you can help the team automate mundane tasks to better focus on more interesting problems. Often this manifests in chatbots, but it's worth being creative here. For example, can automation be applied when escalating tickets to engineering? That could free an agent to work on their troubleshooting process, rather than the manual steps of making that escalation happen.
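As a sketch of what that kind of automation could look like, here's a short Python example. It's illustrative only: the webhook URL, field names, and CS-1042 ticket ID are hypothetical placeholders, not any particular ticketing product's API.

import requests

# Hypothetical internal escalation webhook; replace with your own endpoint.
ESCALATION_WEBHOOK = "https://internal.example.com/escalations"

def escalate_ticket(ticket_id, summary, severity):
    """Forward a support ticket to engineering without manual copy-and-paste."""
    response = requests.post(
        ESCALATION_WEBHOOK,
        json={
            "ticket_id": ticket_id,
            "summary": summary,
            "severity": severity,
            "source": "customer-support",
        },
        timeout=10,
    )
    response.raise_for_status()  # surface failures instead of dropping them silently

escalate_ticket("CS-1042", "Login errors reported by several customers", "high")

With the handoff scripted, the agent stays focused on troubleshooting while the escalation lands in engineering's queue automatically.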

You can use tooling your engineering team might already have in place to find these opportunities. Operations platforms can be shared to put both teams' metrics out in the open, helping everyone stay aligned on common goals.

The feedback loop required for a mature software development life cycle needs the customer service team in order to operate effectively. You can only achieve that with shared visibility across your organization.

Making it easy also means proactive design, especially when it comes to processes for critical moments. You probably have a process to manage major incidents. When you share these tools and processes with customer service, you enable greater visibility and gain valuable insight and teammates along the way. During an incident, customer service can play a few key roles:

Aggregating customer reported issues

When an incident is triggered, engineering needs to quickly find out how much of the service is impacted: how many features, how deeply they're affected, and whether they're slow or completely offline. Customer impact is part of that picture, and customer service can help uncover it by associating inbound customer complaints with technical incidents to help drive priorities. As customer service receives reports of issues during an incident, that data becomes part of the incident's measured impact and is incorporated into the resolution process.

Prioritization of SLA

Your customer service team is in a unique position to help confirm the impact of an incident on the end user. They have insight into when services are reaching their Service Level Agreement (SLA) thresholds for certain customers, and they can alert the responding team. This is an important piece of information to manage, and engineering teams might not have visibility into those contractual agreements. It aids in the prioritization of issues during incidents. CS can advise on whether an incident should be escalated or have its severity increased based on the customer intelligence they're receiving. More customer impact could mean a higher severity level for the incident, more responders included in the triage, and more stakeholders informed.

Liaisons and stakeholder communication

Speaking of stakeholders, customer service can take the lead when it comes to codifying communication practices for incidents. Customer service can take ownership of policies around messaging for customers, template responses, and communication processes. Templates with clear information and status pages to keep up to date are just some of the assets they can manage.

Post-incident follow-up

You'll always encounter customers who watch your status page like a hawk for updates. These customers, and others, ask customer service for updates if they don't see progress. You can ease the cognitive load of responding to them through customer service's newfound connection to the incident process. If you hold incident reviews, customer service must be part of that conversation. The tone of a conversation changes when a customer service agent has extra data to present to users about the impact of the incident, the resolution, and long-term plans for prevention. Your customer feels consistency, and your agent feels real ownership of the conversation.

At the end of the day, involving your customer service team through the entire process, from start to finish, allows them to gain control of their own destiny. It lets them provide valuable input back into the resolution process, and leverage their improved experience to improve the customer experience.

Invest in people

You can't create a happy employee out of thin air. Customer service leaders need help doing this. People need investment in career growth, the ability to collaborate with their peers, and a voice in the organization to know that their feedback is heard.

Your customer support team is not there just to report metrics to the business or to slog through the queue. Investing means giving them time and space to expand their skills and grow in their careers. For customer service leaders, this comes with accepting that you may not keep them in support forever. You can build a strong team that offers phenomenal support and also creates a hiring funnel into the rest of the business.

The first level of this is up-leveling agents within support. It's common to have a "premium" support team, or similar, for customers who need a high-touch level of support and the ability to get help at any hour. Hiring 24x7 staff won't help a customer service leader shed the team's status as a cost center, but developing a staffing model that uses the existing team's time efficiently can. Sharing tooling with engineering can be one way to get there. For example, if engineering is on call for responding to issues, customer service can use the same tooling for a creative solution: rotating a specialized team for those odd hours or high-priority issues.

This can open up a new career path for those who want to be on a team with specialized knowledge. Having a team that can be notified as-needed, rather than fully staffed at all times, staring at a queue, and waiting for incoming requests, allows leaders to scale their customer experience efficiently.

Empowering customer service teams to reach out to other teams and advocate for customers also creates new communication channels and opportunities. Your customer service team can serve as a gateway into your organization for technical people who are still building skills. A close relationship with engineering supports career growth. Shared processes promote this, as do a shadowing program, subject matter experts in support for different product areas, and intentionally built career paths that ease transitions when it's time to make them. Customer service agents who transition to other departments bring their customer focus and dedication to the customer experience with them. That's a valuable addition to teams across your organization, and it increases empathy across the board.

The modern software development life cycle doesn't end when code is checked into a repository and all the tests turn green. A constant feedback loop from users back into development planning links user requirements directly to the product management phase of the cycle. Organizations across various industries have seen the benefits of adopting shared goals and purposes across different teams. Include your customer service team in larger organization-wide initiatives, like DevOps transformations and automation projects. Doing this increases the effectiveness of customer-focused teams, and improving their day-to-day work in turn improves the experience they can provide for customers. In a nutshell: Happy agents translate to happy customers.

The way the teams within your organization interact affects customer experience. Open communication and shared knowledge can transform your business.

Image by: Opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

My first pull request at age 14

Fri, 03/17/2023 - 15:00
My first pull request at age 14 neilnaveen Fri, 03/17/2023 - 03:00

My name is Neil Naveen, and I'm a 14-year-old middle schooler who's been coding for seven years. I have also been coding in Golang for two years.

Coding isn't my only passion, though. I've been practicing Jiu-Jitsu for four years and have competed in multiple competitions. I'm passionate about coding and Jiu-Jitsu, as they teach me important life lessons.

Codecombat

I started coding on Codecombat, which taught me many fundamental coding skills.

One of the most exciting moments in my coding journey was when I ranked 16th out of around 50,000 players in a multiplayer arena hosted by Code Combat. I was just 11 years old then, and it was an incredible achievement for me. It gave me the confidence to continue exploring and learning new things.

Leetcode

After Codecombat, I moved on to leetcode.com. This site helped me hone my algorithm coding skills with tailored problems to learn specific algorithms.

Coding Game

When I turned 13, I moved on to bot programming on Coding Game. The competition was much more intense, so I had to use better algorithms. For example, when creating ultimate tic-tac-toe AI, I used algorithms like Minimax and Monte Carlo Tree Search to make my code fast and efficient.

GitHub CLI

One day, I saw my dad using an open source tool called GitHub CLI, and I was fascinated by it. GitHub CLI is a tool that allows users to interact with the GitHub API directly from the command line without ever having to go to GitHub itself.

Another day, my dad was reviewing PRs from a bot designed to detect vulnerabilities in dependencies.

Later, I thought about GitHub CLI and this bot, and wondered whether GitHub CLI itself was being monitored by a security bot. It turned out that it was not.

So I created a fix and included a security audit for GitHub CLI.

To my delight, my contribution was accepted. It was merged into the project, which was a thrilling moment for me. It was an excellent opportunity to contribute to a significant, popular tool like GitHub CLI, and to help secure it. Here's the link to my PR: https://github.com/cli/cli/pull/4473

Commit your code

I hope my story will inspire other young people to explore and contribute to the open source world. Age isn't a barrier to entry. Everyone should explore and contribute. If you want to check out my website, head over to neilnaveen.dev. You can also check out my Leetcode profile. And if you're interested, check out the recording of my talk at CloudNativeSecurityCon.

I'm grateful for the opportunities I've had so far, and I'm excited to see what the future holds for me. Thank you for reading my story!

Age is not a barrier for contributing to open source.

Image by: Opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Write documentation that actually works for your community

Thu, 03/16/2023 - 15:00
Write documentation that actually works for your community olga-merkulova Thu, 03/16/2023 - 03:00

What distinguishes successful and sustainable projects from those that disappeared into the void? Spoiler — it's community. Community is what drives an open source project, and documentation is one of the foundational blocks for building a community. In other words, documentation isn't only about documentation.

Establishing good documentation can be difficult, though. Users don't read documentation because it's inconvenient, it goes out of date very quickly, there's too much, or there's not enough.

The development team doesn't write documentation because of the "it's obvious to me, so it's obvious to everyone" trap. They don't write because they're too busy making the project exist. Things are developing too fast, or they're not developing fast enough.

But good documentation remains the best communication tool for groups and projects. This is especially true considering that projects tend to get bigger over time.

Documentation can be a single source of truth within a group or a company.  This is important when coordinating people toward a common goal and preserving knowledge as people move on to different projects.

So how do you write appropriate documentation for a project and share it with the right people?

What is successful community documentation?

To succeed in writing documentation in your community:

  • Organize your routine

  • Make it clear and straightforward

  • Be flexible: adjust the routine to fit the situation

  • Use version control

Image by: (Olga Merkulova, CC BY-SA 4.0)

Being flexible doesn't mean being chaotic. Many projects have succeeded just because they are well-organized.

James Clear (author of Atomic Habits) wrote, "You do not rise to the level of your goals. You fall to the level of your systems." Be sure to organize the process so that the level is high enough to achieve success.

Design the process

Documentation is a project. Think of writing docs as writing code. In fact, documentation can be a product and a very useful one at that.

This means you can use the same processes as in software development: analysis, capturing requirements, design, implementation, and maintenance. Make documentation one of your processes.

Think about it from different perspectives while designing the process. Not all documentation is the right documentation for everyone.

Most users only need a high-level overview of a project, while API documentation is probably best reserved for developers or advanced users.

Developers need library and function documentation. Users are better served by example use cases, step-by-step guides, and an architectural overview of how a project fits in with the other software they use.

Image by: (Olga Merkulova, CC BY-SA 4.0)

Ultimately, before creating any process, you must determine what you need:

  • Focus groups: This includes developers, integrators, administrators, users, sales, operations, and executives

  • Level of expertise: Keep in mind beginner, intermediate, and advanced users

  • Level of detail: There's room for a high-level overview as well as technical detail, so consider how you want each to be presented

  • Journeys and entry points: How do people find the documentation, and how do they use it?

When you ponder these questions, it helps you structure the information you want to communicate through documentation. It defines clear metrics for what has to be in the documentation.

Here's how to approach building a process around documentation.

Coding conventions

The code itself should make sense. Documentation should be expressed through good class names, file names, and so on. Create common coding standards and make the code self-documenting by thinking about the following (a short Python sketch after this list illustrates a few of these conventions):

  • Variable naming conventions

  • Make names understandable by using consistent class and function naming schemes

  • Avoid deep nesting, or don't nest at all

  • Do not simply copy-and-paste code

  • Avoid long methods

  • Avoid using magic numbers (use const instead)

  • Extract methods and variables where it improves clarity

  • Use meaningful directory structures, modules, packages, and files
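Here is a small Python sketch of a few of these conventions in practice. The names and values are hypothetical; the point is the named constant instead of a magic number, the extracted helper function, and names that explain themselves:

MAX_LOGIN_ATTEMPTS = 3  # a named constant instead of a bare "3" in the logic

def is_account_locked(failed_attempts):
    """Return True when the account should be locked."""
    return failed_attempts >= MAX_LOGIN_ATTEMPTS

def handle_login(username, failed_attempts):
    # The extracted helper keeps this function short and self-explanatory.
    if is_account_locked(failed_attempts):
        return f"Account {username} is locked"
    return f"Welcome, {username}"

print(handle_login("alice", 1))  # Welcome, alice
print(handle_login("bob", 3))    # Account bob is locked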

Testing along with engineering

Testing isn't only about how code should behave. It's also about how to use an API, functions, methods, and so on. Well-written tests can reveal base and edge case scenarios. There's even a test-driven development practice that focuses on creating test cases (step-by-step scenarios of what should be tested and how) before developing the code itself.
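For example, a test-driven workflow in Python might look like the following sketch. The calculate_discount function and its rules are invented for illustration; in practice you would write the tests first, watch them fail, and then add the implementation to make them pass:

import unittest

def calculate_discount(order_total):
    """Hypothetical implementation: 10% discount, nothing for empty orders."""
    if order_total <= 0:
        return 0.0
    return round(order_total * 0.10, 2)

class TestCalculateDiscount(unittest.TestCase):
    def test_base_case(self):
        # An ordinary order gets the standard discount
        self.assertEqual(calculate_discount(100.0), 10.0)

    def test_edge_case_empty_order(self):
        # Edge case: an empty order gets no discount
        self.assertEqual(calculate_discount(0.0), 0.0)

if __name__ == "__main__":
    unittest.main()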

Version control

Version control, even for your documentation, helps you track the logic of your changes. It can help you answer why a change was made.

Make sure commit messages explain WHY a change was made, not WHAT was changed.

The more engaging the documentation process is, the more people will get into it. Add creativity and fun to it. Think about the readability of your documentation by using:

  • software code conventions

  • diagrams and graphs (that are also explained in text)

  • mind maps

  • concept maps

  • infographics

  • images (highlight important parts)

  • short videos

By using different ways of communicating, you offer more ways to engage with your documentation. This can help forestall misunderstandings (different languages, different meanings) and accommodate different learning styles.

Here are some software tools for creating documentation:

  • Javadoc, Doxygen, JsDoc, and so on: Many languages have automated documentation tools to help capture major features in code
  • Web hooks and CI/CD engines: Allow continuous publication of your documentation
  • reStructuredText, Markdown, Asciidoc: File formats and processing engines that help you produce beautiful and usable documentation out of plain text files
  • ReadTheDocs: A documentation host that can be attached to a public Git repository
  • Draw.io, LibreOffice Draw, Dia: Produce diagrams, graphs, mind maps, roadmaps, plans, standards, and metrics
  • Peek, Asciinema: Record your terminal commands
  • VokoscreenNG: Record your screen, including mouse clicks

Documentation is vital

Documenting processes and protocols is just as important as documenting the project itself. Most importantly, make information about your project, and about how it's created, exciting.

How quickly people can enter a project and its processes, and understand how everything works, is an important feature. It helps ensure continued engagement. Simple processes and a clear understanding of what needs to be done come from building one "language" within the team.

Documentation is designed to convey value, which means demonstrating something through words and deeds, whether to a member of your team or to a user of your application.

Think about the process as a continuum that ties together communication, processes, and documentation.

Image by: (Olga Merkulova, CC BY-SA 4.0)

Documentation is a means of communication.

Establishing good documentation can be difficult, but it's critical to effective communication. Follow this framework for writing and sharing documentation with the right people.

Image by: Opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How I returned to open source after facing grief

Thu, 03/16/2023 - 15:00
How I returned to open source after facing grief Amita Thu, 03/16/2023 - 03:00

The open source community is a wonderful place where people come together to collaborate, share knowledge, and build amazing things. I still remember my first contribution in Fedora 12 years ago, and since then it’s been an amazing journey. However, life can sometimes get in the way and cause us to take a break from participation. The COVID-19 pandemic has affected us all in different ways, and for some, it has been a time of immense loss and grief. I lost my loved one during the pandemic, and it has been the most difficult life event to deal with. It caused me to take a break from the Fedora community, as well. For those in the open source community who have had to take a break due to the loss of a loved one, returning to coding and contributing to projects can feel daunting. However, with some thought and planning, it is possible to make a comeback and once again become an active member of the community.

First and foremost, it is important to take care of yourself and allow yourself the time and space to grieve. Grief is a personal and unique experience. There is no right or wrong way to go through it. It is important to be kind to yourself. Don’t rush into things before you are ready.

Once you’re ready to start contributing again, there are a few things you can do to make your comeback as smooth as possible.

Reach out to other contributors

This is a hard truth: nothing stops for you and technology is growing exponentially. When I rejoined Fedora recently, I felt the world had changed around me so fast. From IRC to Telegram to Signal and Matrix, from IRC meetings to Google Meet, from Pagure to GitLab, from mailing lists to discussion forums, and the list goes on. If you haven’t been active in your community for a while, it can be helpful to reach out to your friends in the community and let them know that you’re back and ready to contribute again. This can help you reconnect with people and get back into the swing of things. They may have some suggestions or opportunities for you to get involved in. I am grateful to my Fedora friend Justin W. Flory, who helped me out selflessly to ensure I found my way back into the community.

Start small

In the past, I served as the Fedora Diversity, Equity, & Inclusion (D.E.I.) Advisor, which is one of the Fedora Council member positions. It was a big job. I recognized that, and I knew that taking on the same job immediately after my break would have been a burden that could cause early burnout. It’s vitally important to take it easy. Start small.

If you’re feeling overwhelmed by the thought of diving back into a big project, start small. There are plenty of small tasks and bugs that need to be fixed, and tackling one of these can help you ease back into the community.

Find a mentor

If you’re feeling unsure about how to get started or where to focus your efforts, consider finding a mentor. A mentor (in my case, Justin W. Flory) can provide guidance, advice, and support as you make your comeback.

Show gratitude

An open source community is built on the contributions of many people. A healthy community is grateful for your contribution. Showing gratitude is part of making a community healthy. Show your gratitude to others who help you, guide you, and give you feedback.

Block your calendar

Initially, it may take some time to get back to the rhythm of contributing. It helps to schedule some time in your calendar for open source work. It can be weekly/bi-weekly, depending on your availability. Remember, every contribution counts, and that is the beauty of the open source world. This trick will help you to get into a regular routine.

Two steps forward, one step back

Finally, it’s important to remember that it’s okay to take a step back if you need it. Grief is not a linear process. You may find that you need to take a break again in the future. It’s important to be honest with yourself and others about your needs. Take the time you need to take care of yourself.

Return on your own terms

Returning to the open source community after a period of grief can be challenging. It’s also an opportunity to reconnect with something you are passionate about and make a positive impact in the world. In time, you’ll find that you’re able to pick up where you left off, and re-engage with the community once again.

I dedicate this, my first ever Opensource.com article, to my late younger brother Mr. Nalin Sharma, who left us at the age of 32 due to COVID-19 in 2021. He was a passionate engineer and full of life. I hope he is in a better place now, and I am sure he will always be alive in my memories.

Contributing to open source projects after losing a loved one can feel daunting. Here's my advice for how to rejoin the community.

Image by: Opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How to set up your own open source DNS server

Wed, 03/15/2023 - 15:00
How to set up your own open source DNS server Amar1723 Wed, 03/15/2023 - 03:00

A Domain Name Server (DNS) associates a domain name (like example.com) with an IP address (like 93.184.216.34). This is how your web browser knows where in the world to look for data when you enter a URL or when a search engine returns a URL for you to visit. DNS is a great convenience for internet users, but it's not without drawbacks. For instance, paid advertisements appear on web pages because your browser naturally uses DNS to resolve where those ads "live" on the internet. Similarly, software that tracks your movement online is often enabled by services resolved over DNS. You don't want to turn off DNS entirely because it's very useful. But you can run your own DNS service so you have more control over how it's used.

I believe it's vital that you run your own DNS server so you can block advertisements and keep your browsing private, away from providers attempting to analyze your online interactions. I've used Pi-hole in the past and still recommend it today. However, lately, I've been running the open source project Adguard Home on my network. I found that it has some unique features worth exploring.

Adguard Home

Of the open source DNS options I've used, Adguard Home is the easiest to set up and maintain. You get many DNS resolution solutions, such as DNS over TLS, DNS over HTTPS, and DNS over QUIC, within one single project.

You can set up Adguard as a container or as a native service using a single script:

$ curl -s -S -L -o install.sh \
  https://raw.githubusercontent.com/AdguardTeam/AdGuardHome/master/scripts/install.sh

Look at the script so you understand what it does. Once you're comfortable with the install process, run it:

$ sh ./install.sh


Some of my favorite features of AdGuard Home:

  • An easy admin interface

  • Block ads and malware with the Adguard block list

  • Options to configure each device on your network individually

  • Force safe search on specific devices

  • Set HTTPS for the admin interface, so your remote interactions with it are fully encrypted

I find that Adguard Home saves me time. Its block lists are more robust than those on Pi-hole. You can quickly and easily configure it to run DNS over HTTPS.
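If you want to confirm that a client really is resolving names through your new server, a quick check helps. This Python sketch assumes the dnspython package is installed and uses 192.168.1.2 as a placeholder for your AdGuard Home server's address:

import dns.resolver

# Ask a specific DNS server (your AdGuard Home instance) to resolve a name.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.168.1.2"]  # placeholder: your server's address

answer = resolver.resolve("opensource.com", "A")
for record in answer:
    print(record.address)

If the query returns an address, your server is answering requests.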

No more malware

Malware is unwanted content on your computer. It's not always directly dangerous to you, but it may enable dangerous activity by third parties. That's not what the internet was ever meant to do. I believe you should host your own DNS service to keep your internet history private and out of the hands of known trackers such as Microsoft, Google, and Amazon. Try Adguard Home on your network.

Take control of your internet privacy by running your own DNS server with the open source project, Adguard Home.

Image by: Opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Synchronize databases more easily with open source tools

Wed, 03/15/2023 - 15:00
Synchronize databases more easily with open source tools Li Zongwen Wed, 03/15/2023 - 03:00

Change Data Capture (CDC) uses server agents to record insert, update, and delete activity applied to database tables. CDC provides details on changes in an easy-to-use relational format. It captures the column information and metadata needed to apply the changes to the target environment for modified rows. A change table that mirrors the column structure of the tracked source table stores this information.

Capturing change data is no easy feat. However, the open source Apache SeaTunnel project is a data integration platform that provides CDC functionality, with a design philosophy and feature set that make these captures possible, and with features above and beyond existing solutions.

CDC usage scenarios

A classic use case for CDC is data synchronization or backup between heterogeneous databases. You may synchronize data between MySQL, PostgreSQL, MariaDB, and similar databases in one scenario. In another, you could synchronize the data to a full-text search engine. With CDC, you can also create backups of data based on what CDC has captured.

When designed well, the data analysis system obtains data for processing by subscribing to changes in the target data tables. There's no need to embed the analysis process into the existing system.

Sharing data state between microservices

Microservices are popular, but sharing information between them is often complicated. CDC is a possible solution. Microservices can use CDC to obtain changes in other microservice databases, acquire data status updates, and execute the corresponding logic.

Update cache

The concept of Command Query Responsibility Segregation (CQRS) is the separation of command activity from query activity. The two are fundamentally different:

  • A command writes data to a data source.
  • A query reads data from a data source.

The problem is, when does a read event happen in relation to when a write event happened, and what bears the burden of making those events occur?

It can be difficult to update a cache. You can use CDC to obtain data update events from a database and let that control the refresh or invalidation of the cache.
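As a minimal sketch of that idea in Python, assume change events arrive as dictionaries (from whatever capture pipeline you use) and that cached rows live in a local Redis instance; this is illustrative, not any particular CDC tool's API:

import redis

cache = redis.Redis(host="localhost", port=6379)

def on_change_event(event):
    """Drop the cached copy of a row whenever the source database changes it."""
    cache_key = f"{event['table']}:{event['primary_key']}"
    cache.delete(cache_key)  # the next read repopulates the cache from the database

# Example: an update to row 42 of the products table invalidates its cache entry.
on_change_event({"table": "products", "primary_key": 42, "op": "update"})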

CQRS designs usually use two different storage instances to support business query and change operations. Because two stores are used, you can ensure strong data consistency with distributed transactions, at the cost of availability, performance, and scalability. You can also use CDC to ensure eventual consistency of data, which has better performance and scalability, at the cost of data latency, which currently can be kept in the millisecond range.

For example, you could use CDC to synchronize MySQL data to your full-text search engine, such as ElasticSearch. In this architecture, ElasticSearch searches all queries, but when you want to modify data, you don't directly change ElasticSearch. Instead, you modify the upstream MySQL data, which generates a data update event. This event is consumed by the ElasticSearch system as it monitors the database, and the event prompts an update within ElasticSearch.
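Here's a rough Python sketch of the consumer side of that architecture. It isn't SeaTunnel code; it simply assumes change events arrive as dictionaries (for example, from a message queue) and applies them to Elasticsearch with the official elasticsearch client (8.x API):

from elasticsearch import Elasticsearch, NotFoundError

es = Elasticsearch("http://localhost:9200")  # placeholder address

def apply_change_event(event):
    """Mirror one upstream row change into the search index."""
    index = event["table"]          # for example, "products"
    doc_id = event["primary_key"]   # for example, 42

    if event["op"] in ("insert", "update"):
        # Index (or overwrite) the document with the new row values.
        es.index(index=index, id=doc_id, document=event["row"])
    elif event["op"] == "delete":
        try:
            es.delete(index=index, id=doc_id)
        except NotFoundError:
            pass  # the row was never indexed; nothing to remove

# Example event, as it might arrive from the capture side:
apply_change_event({
    "table": "products",
    "primary_key": 42,
    "op": "update",
    "row": {"name": "widget", "price": 19.99},
})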

In some CQRS systems, a similar method can be used to update the query view.

Pain points

CDC isn't a new concept and various existing projects implement it. For many users, though, there are some disadvantages to the existing solutions.

Single table configuration

With some CDC software, you must configure each table separately. For example, to synchronize ten tables, you need to write ten source SQLs and Sink SQLs. To perform a transform, you also need to write the transform SQL.

Sometimes a table configuration can be written by hand, but only when the volume is small. When the volume is large, type mapping or parameter configuration errors may occur, resulting in high operation and maintenance costs.

Apache SeaTunnel is an easy-to-use data integration platform hoping to solve this problem.

Schema evolution is not supported

Some CDC solutions support sending DDL events but cannot pass them to the Sink so that it can make synchronous changes. Even a CDC tool that can capture a DDL event may not be able to send it to the engine, because it cannot change the transform's type information based on the DDL event (so the Sink cannot follow the DDL event and change accordingly).

Too many links

On some CDC platforms, each table being synchronized requires its own link. When there are many tables, a lot of links are required. This puts pressure on the source JDBC database and creates too many binlog readers, which may result in repeated log parsing.

SeaTunnel CDC architecture goals

Apache SeaTunnel is an open source high-performance, distributed, and massive data integration framework. To tackle the problems the existing data integration tool's CDC functions cannot solve, the community "reinvents the wheel" to develop a CDC platform with unique features. This architectural design is based on the strengths and weaknesses of existing CDC tools.

Apache SeaTunnel supports:

  • Lock-free parallel snapshot history data.
  • Log heartbeat detection and dynamic table addition.
  • Sub-database, sub-table, and multi-structure table reading.
  • Schema evolution.
  • All the basic CDC functions.

Apache SeaTunnel reduces operations and maintenance costs for users and can dynamically add tables.

For example, when you want to synchronize the entire database and add a new table later, you don't need to maintain it manually, change the job configuration, or stop and restart jobs.

Additionally, Apache SeaTunnel supports reading sub-databases, sub-tables, and multi-structure tables in parallel. It also allows schema evolution, DDL transmission, and changes supporting schema evolution in the engine, which can be changed to Transform and Sink.

SeaTunnel CDC current status

Currently, the CDC feature has the basic capabilities to support the incremental and snapshot phases. It also supports MySQL for real-time and offline use. The MySQL real-time test is complete, and the offline test is coming. Schema evolution is not supported yet, because it involves changes to Transform and Sink. Dynamic discovery of new tables is not yet supported either, though some interfaces have been reserved for multi-structure tables.

Project outlook

As an Apache incubation project, the Apache SeaTunnel community is developing rapidly. The next community planning session has these main directions:

1. Expand and improve connector and catalog ecology

We're working to enhance many connector and catalog features, including:

  • Support more connectors, including TiDB, Doris, and Stripe.
  • Improve existing connectors in terms of usability and performance.
  • Support CDC connectors for real-time, incremental synchronization scenarios.

Anyone interested in connectors can review Umbrella.

2. Support for more data integration scenarios (SeaTunnel Engine)

There are pain points that existing engines cannot solve, such as the synchronization of an entire database, the synchronization of table structure changes, and the large granularity of task failure.

We're working to solve those issues. Anyone interested in the CDC engine should look at issue 2272.

3. Easier to use (SeaTunnel Web)

We're working to provide a web interface to make operations easier and more intuitive. Through a web interface, we will make it possible to display Catalog, Connector, Job, and related information, in the form of DAG/SQL. We're also giving users access to the scheduling platform to easily tackle task management.

Visit the web sub-project for more information on the web UI.

Wrap up

Database activity often must be carefully tracked to manage changes based on activities such as record updates, deletions, or insertions. Change Data Capture provides this capability. Apache SeaTunnel is an open source solution that addresses these needs and continues to evolve to offer more features. The project and community are active and your participation is welcome.

The open source Apache SeaTunnel project is a data integration platform that makes it easy to synchronize data.

Image by: Jason Baker. CC BY-SA 4.0.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

5 of the most curious uses of the Raspberry Pi

Tue, 03/14/2023 - 15:00
5 of the most curious uses of the Raspberry Pi AmyJune Tue, 03/14/2023 - 03:00

Recently, I was on a call where it was said that the open source community is a combination of curiosity and a culture of solutions. And curiosity is the basis of our problem-solving. We use a lot of open source when solving problems of all sizes, and that includes Linux running on the supremely convenient Raspberry Pi.

We all have such different lived experiences, so I asked our community of writers about the most curious use of a Raspberry Pi they've ever encountered. I have a hunch that some of these fantastic builds will spark an idea for others.

Experimentation with the Raspberry Pi

For me, the Raspberry Pi has been a great tool to add extra development resources on my home network. If I want to create a new website or experiment with a new software tool, I don't have to bog down my desktop Linux machine with a bunch of packages that I might only use once while experimenting. Instead, I set it up on my Raspberry Pi.

If I think I'm going to do something risky, I use a backup boot environment. I have two microSD cards, which allows me to keep one plugged into the Raspberry Pi while I set up the second to do whatever experimenting I want. The extra microSD doesn't cost much, but it saves a ton of time when I want to experiment on a second image. Just shut down, swap microSD cards, reboot, and immediately I'm working on a dedicated test system.

When I'm not experimenting, my Raspberry Pi acts as a print server to put my non-WiFi printer on our home network. It is also a handy file server over SSH so that I can make quick backups of my important files.

Jim Hall

The popularity of the Raspberry Pi

The most amazing thing I've seen about the Raspberry Pi is that it normalized and commoditized the idea of the small-board computers and made them genuinely and practically available to folks.

Before the Raspberry Pi, we had small-board computers in a similar fashion, but they tended to be niche, expensive, and nigh unapproachable from a software perspective. The Raspberry Pi was cheap, and cheap to the point of making it trivial for anyone to get one for a project (ignoring the current round of unobtainium it's been going through). Once it was cheap, people worked around the software challenges and made it good enough to solve many basic computing tasks, down to being able to dedicate a full and real computer to a task, not just a microcontroller.

We've got a plethora of good, cheap-ish, small-board computers, and this gives way to tinkering, toying, and experimenting. People are willing to try new ideas, even spurring more hobbyist hardware development to support these ideas.

Honestly, that is by far the most amazing and radical thing I've seen from the Raspberry Pi: how it's fundamentally changed everyone's perception of what computing is (at the level the Raspberry Pi excels at, anyway), and how it's given rise not only to its own ecosystem but now to countless others in all their diversity.

John ‘Warthog9' Hawley

Raspberry Pi for the bees

In 2018, my younger brother and I had several beehives and used a Raspberry Pi and various sensors to monitor the temperature and humidity of our hives. We also planned to implement a hive scale to observe honey production in summer and measure the weight in winter to see whether the bees had enough food left. We never got around to doing that.

Our little monitoring solution was based on a Raspberry Pi 2 Model B, ran Raspbian Stretch (based on Debian 9), and had a temperature and humidity sensor connected (DHT11). We had three or four of those sensors in the hives to measure the temperature at the entrance hole, under the lid, and in the lowest frame. We connected the sensor directly to the Pi and used the Python_DHT sensor library to read the data. We also set up InfluxDB, Telegraf, and finally, Grafana to visualize the data.
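The exact API of the Python_DHT library mentioned above may differ, but the read loop looks roughly like this sketch, shown here with the widely used Adafruit_DHT library instead; the GPIO pin number is a placeholder for however the sensor is wired:

import time
import Adafruit_DHT

SENSOR = Adafruit_DHT.DHT11
GPIO_PIN = 4  # placeholder: the data pin the DHT11 is wired to

while True:
    # read_retry() retries a few times, because DHT11 reads occasionally fail
    humidity, temperature = Adafruit_DHT.read_retry(SENSOR, GPIO_PIN)
    if humidity is not None and temperature is not None:
        print(f"Temperature: {temperature:.1f} C  Humidity: {humidity:.1f} %")
    else:
        print("Sensor read failed, trying again")
    time.sleep(60)  # one reading per minute

Each reading can then be written to InfluxDB and visualized in Grafana, as described above.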

If you want to know more about our setup, we published an article on our little monitoring solution in Linux Magazine.

Heike Jurzik

Go retro with the Raspberry Pi

One thing I would love to create with the Raspberry Pi is a simulation of how to program machine language into an old-style computer using "switches and lights." This looks to be fairly straightforward using the GPIO pins on the Raspberry Pi. For example, their online manual shows examples of how to use GPIO to switch an LED on and off or to use buttons to get input. I think it should be possible with some LEDs and switches, plus a small program running on the Raspberry Pi to emulate the old-style computer. But I lack the free time to work on a project like this, which is why I wrote the Toy CPU to emulate it.
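To get a sense of how simple those building blocks are, here's a small sketch using the gpiozero library: pressing a button toggles an LED, the kind of switch-and-light primitive such an emulator would build on. The pin numbers are placeholders for however you wire it:

from gpiozero import LED, Button
from signal import pause

led = LED(17)       # LED wired to GPIO 17 (placeholder)
button = Button(2)  # push button wired to GPIO 2 (placeholder)

def toggle_led():
    led.toggle()

button.when_pressed = toggle_led  # toggle the light on each press

pause()  # keep the script running and wait for button events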

Jim Hall

Build a toy with the Raspberry Pi

When my daughter was four, she asked for a "Trolls music box" for Christmas. She could picture it perfectly in her head. It would be pink and sparkly with her name on it. When she opened the box, the theme song from the popular movie would play. She could store her trolls and other treasures in the box. After searching everywhere online and in stores, I could not find one that measured up to her imagination. My husband and I decided we could build one ourselves in our own toyshop (i.e., his home office). The center of it all was, of course, the Raspberry Pi. He used light sensors and a Python script to make the song play at just the right moment. We placed the tech discreetly in the bottom of the music box and decorated it with her aesthetic in mind. That year, holiday magic was made possible with open source! 

Lauren Pritchett

People use the Raspberry Pi for all kinds of things. What's caught your attention?

Image by: Dwight Sipler on Flickr

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Calculate pi by counting pixels

Tue, 03/14/2023 - 15:00
Calculate pi by counting pixels Jim Hall Tue, 03/14/2023 - 03:00

For Pi Day this year, I wanted to write a program to calculate pi by drawing a circle in FreeDOS graphics mode, then counting pixels to estimate the circumference. I naively assumed that this would give me an approximation of pi. I didn't expect to get 3.14, but I thought the value would be somewhat close to 3.0.

I was wrong. Estimating the circumference of a circle by counting the pixels required to draw it will give you the wrong result. No matter what resolution I tried, the final pi calculation of circumference divided by diameter was always around 2.8.

You can't count pixels to calculate pi

I wrote a FreeDOS program using OpenWatcom C that draws a circle to the screen, then counts the pixels that make up that circle. I wrote it in FreeDOS because DOS programs can easily enter graphics mode by using the OpenWatcom _setvideomode function. The _VRES16COLOR video mode puts the display into 640×480 resolution at 16 colors, a common "classic VGA" screen resolution. In the standard 16 color DOS palette, color 0 is black, color 1 is blue, color 7 is a low intensity white, and color 15 is a high intensity white.

In graphics mode, you can use the _ellipse function to draw an ellipse to the screen, from some starting x,y coordinate in the upper left to a final x,y coordinate in the lower right. If the height and width are the same, the ellipse is a circle. Note that in graphics mode, x and y count from zero, so the upper left corner is always 0,0.

Image by: (Jim Hall, CC BY-SA 4.0)

You can use the _getpixel function to get the color of a pixel at a specified x,y coordinate on the screen. To show the progress in my program, I also used the _setpixel function to paint a single pixel at any x,y on the screen. When the program found a pixel that defined the circle, I changed that pixel to bright white. For other pixels, I set the color to blue.

Image by: (Jim Hall, CC BY-SA 4.0)

With these graphics functions, you can write a program that draws a circle to the screen, then iterates over all the x,y coordinates of the circle to count the pixels. For any pixel that is color 7 (the color of the circle), add one to the pixel count. At the end, you can use the total pixel count as an estimate of the circumference:

#include <stdio.h>
#include <graph.h>

int main()
{
    unsigned long count;
    int x, y;

    /* draw a circle */
    _setvideomode(_VRES16COLOR); /* 640x480 */
    _setcolor(7); /* white */
    _ellipse(_GBORDER, 0, 0, 479, 479);

    /* count pixels */
    count = 0;

    for (x = 0; x <= 479; x++) {
        for (y = 0; y <= 479; y++) {
            if (_getpixel(x, y) == 7) {
                count++;

                /* highlight the pixel */
                _setcolor(15); /* br white */
                _setpixel(x, y);
            } else {
                /* highlight the pixel */
                _setcolor(1); /* blue */
                _setpixel(x, y);
            }
        }
    }

    /* done */
    _setvideomode(_DEFAULTMODE);

    printf("pixel count (circumference?) = %lu\n", count);
    puts("diameter = 480");
    printf("pi = c/d = %f\n", (double) count / 480.0);

    return 0;
}

But counting pixels to determine the circumference underestimates the actual circumference of the circle. Because pi is the ratio of the circumference of a circle to its diameter, my pi calculation was noticeably lower than 3.14. I tried several video resolutions, and I always got a final result of about 2.8:

pixel count (circumference?) = 1356
diameter = 480
pi = c/d = 2.825000
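
A rough count suggests why the result lands near 2.8 in particular. Because the circle is drawn from pixels that are directly adjacent to one another (horizontally, vertically, or diagonally, as described below), each octant of the circle advances one pixel per step along its faster axis, so an octant of a circle of radius r contains roughly r/√2 pixels. Eight octants give a total of about 4√2 × r, or roughly 5.66 × r pixels. For r = 240, that predicts about 1358 pixels, close to the 1356 counted above, and dividing by the diameter 2r gives 2√2, or about 2.83.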

You need to measure the distance between pixels to get pi

The problem with counting pixels to estimate the circumference is that the pixels are only a sample of a circular drawing. Pixels are discrete points in a grid, while a circle is a continuous drawing. To provide a better estimate of the circumference, you must measure the distance between pixels and use that total measurement for the circumference.

To update the program, you must write a function that calculates the distance between any two pixels: x0,y0 and x,y. You don't need a bunch of fancy math or algorithms here, just the knowledge that the OpenWatcom _ellipse function draws only solid pixels in the color you set for the circle. The function doesn't attempt to provide antialiasing by drawing nearby pixels in some intermediate color. That allows you to simplify the math. In a circle, pixels are always directly adjacent to one another: vertically, horizontally, or diagonally.

For pixels that are vertically or horizontally adjacent, the pixel "distance" is simple. It's a distance of 1.

For pixels that are diagonally adjacent, you can use the Pythagorean theorem of a²+b²=c² to calculate the distance between two diagonal pixels as the square root of 2, or approximately 1.414.

double pixel_dist(int x0, int y0, int x, int y)
{
    if (((x - x0) == 0) && ((y0 - y) == 1)) {
        return 1.0;
    }

    if (((y0 - y) == 0) && ((x - x0) == 1)) {
        return 1.0;
    }

    /* if ( ((y0-y)==1) && ((x-x0)==1) ) { */
    return 1.414;
    /* } */
}

I wrapped the last "if" statement in comments so you can see what the condition is supposed to represent.

To measure the circumference, we don't need to examine the entire circle. We can save a little time and effort by working on only the upper left quadrant. This also means we know the starting coordinate of the first pixel in the circle: we skip counting the first pixel at 0,239 and instead use it as the initial x0,y0 reference point when measuring the quarter-circumference.

Image by: (Jim Hall, CC BY-SA 4.0)

The final program is similar to our "count the pixels" program, but instead measures the tiny distances between pixels in the upper left quadrant of the circle. You may notice that the program counts down the y coordinates, from 238 to 0. This accommodates the assumption that the known starting x0,y0 coordinate in the quarter-circle is 0,239. With that assumption, the program only needs to evaluate the y coordinates between 0 and 238. To estimate the total circumference of the circle, multiply the quarter-measurement by 4:

#include <stdio.h>
#include <graph.h>

double pixel_dist(int x0, int y0, int x, int y)
{
    ...
}

int main()
{
    double circum;
    int x, y;
    int x0, y0;

    /* draw a circle */
    _setvideomode(_VRES16COLOR); /* 640x480 */
    _setcolor(7); /* white */
    _ellipse(_GBORDER, 0, 0, 479, 479);

    /* calculate circumference, use upper left quadrant only */
    circum = 0.0;
    x0 = 0;
    y0 = 479 / 2;

    for (x = 0; x <= 479 / 2; x++) {
        for (y = (479 / 2) - 1; y >= 0; y--) {
            if (_getpixel(x, y) == 7) {
                circum += pixel_dist(x0, y0, x, y);
                x0 = x;
                y0 = y;

                /* highlight the pixel */
                _setcolor(15); /* br white */
                _setpixel(x, y);
            } else {
                /* highlight the pixel */
                _setcolor(1); /* blue */
                _setpixel(x, y);
            }
        }
    }

    circum *= 4.0;

    /* done */
    _setvideomode(_DEFAULTMODE);

    printf("circumference = %f\n", circum);
    puts("diameter = 480");
    printf("pi = c/d = %f\n", circum / 480.0);

    return 0;
}

This provides a better estimate of the circumference. It's still off by a bit, because measuring a circle using pixels is still a pretty rough approximation, but the final pi calculation is much closer to the expected value of 3.14:

circumference = 1583.840000
diameter = 480
pi = c/d = 3.299667

Happy Pi Day! Does counting pixels get you the circumference of a circle?

Image by: Opensource.com

Education This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How I destroyed my Raspberry Pi

Tue, 03/14/2023 - 15:00
How I destroyed my Raspberry Pi hANSIc99 Tue, 03/14/2023 - 03:00

I wanted to write an article demonstrating "How to automate XYZ with the Raspberry Pi" or some other interesting, curious, or useful application around the Raspberry Pi. As you might realize from the title, I cannot offer such an article anymore because I destroyed my beloved Raspberry Pi.

The Raspberry Pi is a standard device on every technology enthusiast's desk. As a result, tons of tutorials and articles tell you what you can do with it. This article instead covers the dark side: I describe what you had better not do!

Cable colors

I want to provide some background before I get to the actual point of destruction. You have to deal with different cable colors when doing electrical work in and around the house. Here in Germany, each house connects to the three-phase AC supply grid, and you usually find the following cable colors:

  • Neutral conductor: Blue
  • (PE) Protective conductor: Yellow-green
  • (L1) Phase 1: Brown
  • (L2) Phase 2: Black
  • (L3) Phase 3: Grey

For example, when wiring a lamp, you pick up the neutral conductor (N, blue) and one phase conductor (L, with a 1 in 3 chance that it is brown), and you get 230V AC between them.

Wiring the Raspberry Pi

Earlier this year, I wrote an article about OpenWrt, an open source alternative to the stock firmware on home routers. In the article, I used a TP-Link router. However, the original plan was to use my Raspberry Pi Model 4.

Image by: (Stephan Avenwedde, CC BY-SA 4.0)

The idea was to build a travel router that I could install in my caravan to improve the internet connectivity at a campsite (I'm the kind of camper who can't do without the internet). To do so, I added a separate USB Wi-Fi dongle to my Raspberry Pi to provide a second Wi-Fi antenna and installed OpenWrt. Additionally, I added a 12V-to-5V DC/DC converter to connect to the 12V wiring in the caravan. I tested this setup with a 12V vehicle battery on my desk, and it worked as expected. After everything was set up and configured, I started to install it in my caravan.

In my caravan, I found a blue and a brown wire, connected them to the 12V-to-5V DC/DC converter, put the fuses back in, and…

Image by: (Stephan Avenwedde, CC BY-SA 4.0)

The chip, which disassembled itself, is the actual step-down transformer. I was so confident that the blue wire was on 0V potential and the brown one was on 12V that I didn't even measure. I have since learned that the blue cable is on 12V, and the brown cable is on ground potential (which is pretty common in vehicle electronics).

Wrap up

Since this accident, my Raspberry Pi has never booted up. Because the prices for the Raspberry Pi have skyrocketed, I had to find an alternative. Luckily, I came across the TP-Link travel router, which can also run OpenWrt and does its job satisfactorily. In closing: It's better to measure too often than one time too few.

It's better to measure too often than one time too few. I learned the hard way, so you don't have to.

Image by: kris krüg

Raspberry Pi This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Control your Raspberry Pi with Lua

Tue, 03/14/2023 - 15:00
Control your Raspberry Pi with Lua alansmithee Tue, 03/14/2023 - 03:00

Lua is a sometimes misunderstood language. It’s different from other languages, like Python, but it’s a versatile extension language that’s widely used in game engines, frameworks, and more. Overall, I find Lua to be a valuable tool for developers, letting them enhance and expand their projects in some powerful ways.

You can download and run stock Lua as Seth Kenlon explained in his article Is Lua worth learning, which includes simple Lua code examples. However, to get the most out of Lua, it’s best to use it with a framework that has already adopted the language. In this tutorial, I demonstrate how to use a framework called Mako Server, which is designed for enabling Lua programmers to easily code IoT and web applications. I also show you how to extend this framework with an API for working with the Raspberry Pi’s GPIO pins.

Requirements

Before following this tutorial, you need a running Raspberry Pi that you can log into. While I will be compiling C code in this tutorial, you do not need any prior experience with C code. However, you do need some experience with a POSIX terminal.

Install

To start, open a terminal window on your Raspberry Pi and install the following tools for downloading code using Git and for compiling C code:

$ sudo apt install git unzip gcc make

Next, download the build script that compiles the open source Mako Server code and the Lua-periphery library (the Raspberry Pi GPIO library):

$ wget -O Mako-Server-Build.sh https://raw.githubusercontent.com/RealTimeLogic/BAS/main/LinuxBuild.sh

Review the script to see what it does, and run it once you’re comfortable with it:

$ sh ./Mako-Server-Build.sh

The compilation process may take some time, especially on an older Raspberry Pi. Once the compilation is complete, the script asks you to install the Mako Server and the lua-periphery module to /usr/local/bin/. I recommend installing them there to simplify using the software. Don’t worry; if you no longer need them, you can uninstall them later:

$ cd /usr/local/bin/
$ sudo rm mako mako.zip periphery.so

To test the installation, type mako into your terminal. This starts the Mako Server, and you should see some output in your terminal. You can stop the server by pressing CTRL+C.

IoT and Lua

Now that the Mako Server is set up on your Raspberry Pi, you can start programming IoT and web applications and working with the Raspberry Pi’s GPIO pins using Lua. The Mako Server framework provides a powerful and easy API for Lua developers to create IoT applications and the lua-periphery module lets Lua developers interact with the Raspberry Pi’s GPIO pins and other peripheral devices.

Start by creating an application directory and a .preload script, which contains the Lua code for testing the GPIO. The .preload script is a Mako Server extension that’s loaded and run as a Lua script when an application is started.

$ mkdir gpiotst
$ nano gpiotst/.preload

Copy the following into the Nano editor and save the file:

-- Load periphery.so and access the LED interface
local LED = require('periphery').LED

local function doled()
   local led = LED("led0") -- Open LED led0
   trace"Turn LED on"
   led:write(true)         -- Turn on LED (set max brightness)
   ba.sleep(3000)          -- 3 seconds
   trace"Turn LED off"
   led:write(false)        -- Turn off LED (set zero brightness)
   led:close()
end

ba.thread.run(doled) -- Defer execution
                     -- to after Mako has started


The above Lua code controls the main Raspberry Pi LED using the Lua-periphery library you compiled and included with the Mako Server. The script defines a single function called doled that controls the LED. The script begins by loading the periphery library (the shared library periphery.so) using the Lua require function. The returned data is a Lua table with all GPIO API functions. However, you only need the LED API, and you directly access that by appending .LED after calling require. Next, the code defines a function called doled that does the following:

  1. Opens the Raspberry Pi main LED identified as led0 by calling the LED function from the periphery library and by passing it the string led0.
  2. Prints the message Turn LED on to the trace (the console).
  3. Activates the LED by calling the write method on the LED object and passing it the Boolean value true, which sets the maximum brightness of the LED.
  4. Waits for 3 seconds by calling ba.sleep(3000).
  5. Prints the message Turn LED off to the trace.
  6. Deactivates the LED by calling the write method on the LED object and passing it the Boolean value false, which sets zero brightness of the LED.
  7. Closes the LED by calling the close function on the LED object.

At the end of the .preload script, the doled function is passed as an argument to the function ba.thread.run. This defers execution of the doled function until after the Mako Server has started. The deferral matters because the .preload script itself runs while the server is starting up, and a blocking call such as ba.sleep(3000) would otherwise stall startup.

To start the gpiotst application, run the Mako Server as follows:

$ mako -l::gpiotst

The following text is printed in the console:

Opening LED: opening 'brightness': Permission denied.

Accessing GPIO requires root access, so stop the server by pressing CTRL+C and restart the Mako Server as follows:

$ sudo mako -l::gpiotst

Now the Raspberry Pi LED turns on for 3 seconds. Success!

Lua unlocks IoT

In this primer, you learned how to compile the Mako Server, including the GPIO Lua module, and how to write a basic Lua script for turning the Raspberry Pi LED on and off. I’ll cover further IoT functions, building upon this article, in future articles.

In the meantime, you can delve deeper into the Lua-periphery GPIO library by reading its documentation to understand more about its functions and how to use them with different peripherals. To get the most out of this tutorial, consider following the interactive Mako Server Lua tutorial to get a better understanding of Lua, web, and IoT. Happy coding!
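
As a preview of what that looks like, here is a minimal sketch of a .preload script that drives a general-purpose pin instead of the on-board LED. It is based on the lua-periphery documentation rather than on code from this article, so treat it as a starting point: the pin number 17 is an arbitrary example, and depending on the library version you may need the character-device constructor (GPIO{path="/dev/gpiochip0", line=17, direction="out"}) instead of the simple GPIO(17, "out") form shown here.

-- Sketch only: load periphery.so and access the GPIO interface
local GPIO = require('periphery').GPIO

local function dogpio()
   local gpio = GPIO(17, "out") -- Open GPIO 17 as an output (arbitrary example pin)
   trace"Set GPIO 17 high"
   gpio:write(true)             -- Drive the pin high
   ba.sleep(3000)               -- 3 seconds
   trace"Set GPIO 17 low"
   gpio:write(false)            -- Drive the pin low
   gpio:close()
end

ba.thread.run(dogpio) -- Defer execution to after Mako has started

As with the LED example, GPIO access requires root, so start the server with sudo mako -l::gpiotst.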

Learn how to use the Lua programming language to program Internet of Things (IoT) devices and interact with General Purpose Input/Output (GPIO) pins on a Raspberry Pi.

Image by: Dwight Sipler on Flickr

Raspberry Pi Programming Internet of Things (IoT) This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

7 questions for the OSI board candidates

Mon, 03/13/2023 - 15:00
7 questions for the OSI board candidates Luis Mon, 03/13/2023 - 03:00

The Open Source Initiative (OSI) is a non-profit organization that promotes open source and maintains and evaluates compliance with the Open Source Definition. Every year, the OSI holds elections for its board of directors. It has become somewhat of a tradition for me to write questions for OSI board candidates.

In past years, I've asked questions about the focus of the organization and how the board should work with staff. The board has since acted decisively by hiring its first executive director, Stefano Maffulli. It has also expanded staffing in other ways, like hiring a policy director. To me, this is a huge success, and so I didn't pose those questions again this year.

Repeated questions

Other prior questions are worth repeating. In particular:

Your time: "You have 24 hours in the day and could do many different things. Why do you want to give some of those hours to OSI? What do you expect your focus to be during those hours?"

This question is a good one to ask of applicants to any non-profit board. Board work is often boring, thankless, and high-stakes. Anyone going into it needs to have not just a reason why but also a clear, specific idea of what they're going to do. "Help out" is not enough—what do you conceive of as the role of the board? How will you help execute the board's fiduciary duties around finances, executive oversight, and so on?

OSI has had trouble retaining board members in the past, including one current candidate who resigned partway through a previous term. So getting this right is important.

Broader knowledge: What should OSI do about the tens of millions of people who regularly collaborate to build software online (often calling that activity, colloquially, open source) but don't know what OSI is or what it does?

I have no firm answers to this question—there's a lot of room for creativity here. I do think, though, that the organization has in recent years done a lot of good work in this direction, starting in the best way—by doing work to make the organization relevant to a broader number of folks. I hope new board members have more good ideas to continue this streak.

New at OSI

Two of my questions this year focus on changes that are happening inside OSI.

Licensing process: The organization has proposed improvements to the license-review process. What do you think of them? 

Licensing is central to the organization's mission, and it is seeking comments on a plan to improve its process. Board members shouldn't need to be licensing experts, but since they will be asked to finalize and approve this process, they must have some theory of how the board should approach this problem.

OSI initiative on AI: What did you think of the recent OSI initiative on AI? If you liked it, what topics would you suggest for similar treatment in the future? If you didn't like it, what would you improve, or do instead?

The OSI's Deep Dive on AI represents one of the most interesting things the organization has done in a long time. In it, the organization deliberately went outside its comfort zone, trying to identify and bridge gaps between the mature community of open software and the new community of open machine learning. But it was also a big use of time and resources. Many different answers are legitimate here (including "that shouldn't be a board-level decision") but board members should probably have an opinion of some sort.

New outside forces

Finally, it's important for OSI to carefully respond to what's happening in the broader world of open. I offer three questions to get at this:

Regulation: New industry regulation in both the EU and US suggests that governments will be more involved in open source in the future. What role do you think OSI should play in these discussions? How would you, as a board member, impact that?

The OSI has done a lot of work on the upcoming EU Cyber Resilience Act, joining many other (but not all) open organizations. This will not be the last government regulation that might directly affect open software. How OSI should prioritize and address this is, I think, a critical challenge in the future.

Solo maintainers: The median number of developers on open source projects is one, and regulation and industry standards are increasing their burden. How (if at all) should OSI address that? Is there tension between that and industry needs?

Many of the candidates work at large organizations—which is completely understandable since those organizations have the luxury of giving their employees time for side projects like OSI. But the median open software project is small. I would love to hear more about how the candidates think about bridging this gap, especially when unfunded mandates (both from governments and industry standards) seem to be continually increasing.

Responsible licensing: There are now multiple initiatives around "responsible" or "ethical" licensing, particularly (but not limited to) around machine learning. What should OSI's relationship to these movements and organizations be?

A new generation of developers is taking the ethical implications of software seriously. This often includes explicitly rejecting the position that unfettered source-code access is a sine qua non of software that empowers human beings. OSI does not need to accept that position, but it must have some theory of how to react: silence? firm but clear rejection? constructive alliance? educational and marketing opportunity? 

The Bottom Line

The OSI has come a long way in the past few years, and recent board members have a lot to be proud of. But it's still a small organization, in tumultuous times for this industry. (And we've unfortunately had recent reminders that board composition matters for organizations in our space.) Every OSI member should take this vote seriously, so I hope these questions (and the candidates' answers on the OSI blog) help make for good choices.

The OSI is holding its board elections. Here are the important issues facing the Open Source Initiative.

Licensing This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
