
Simplify the installation of Drupal modules with Project Browser

By Nadiia Nykolaichuk, Wed, 12/14/2022

Drupal's modular structure lets you extend your website with an endless array of features. However, discovering the right module and installing it on your website can be a challenging task for beginners and non-developers.

That's where the Project Browser initiative comes into play!

Project Browser is one of the most exciting initiatives for Drupal. It is intended to make the platform genuinely easy for everyone. Read on to discover what the project goals are, why we're excited about it, how Project Browser works, and when you might see it in Drupal core.

Why Project Browser?

The initiative is aimed at providing an easy process for discovering and installing contributed modules directly from the Drupal admin dashboard with the click of a button. This means you no longer have to leave your site to hunt for modules elsewhere. It is one of the key Drupal strategic initiatives that determine the priorities for the platform's development.

The easy discovery and installation of modules should empower people who are new to Drupal, as well as "ambitious site builders," a category of Drupal users often mentioned by Drupal's creator, Dries Buytaert, as a strategically important user persona.

Project Browser should eliminate complicated steps and provide more consistency in the process of extending Drupal websites. Leslie Glynn, the co-lead of the Project Browser strategic initiative, said the following as part of the Driesnote at 2022's DrupalCon Portland:

I've been working with Drupal beginners for more than 10 years doing trainings and mentoring at contribution days. I saw firsthand the struggles that folks face once they install Drupal and want to then add functionality. Taking folks to the terminal is hard, the current extend page is overwhelming, it's complex, and there's not a lot of consistency across the project pages. I was super excited to learn about this strategic initiative to make browsing for projects easier.

A good comparison, made by Leslie at DrupalCon Prague, was the example of a visit to a grocery store where a shopper is faced with "hundreds and hundreds" of different types of cereal.

How does the individual know which box of cereal or which module to choose if there are something like 60,000 cereal boxes (contributed modules) on offer? Folks need to have some direction with a path to enable them to narrow down their search. Finally, they need a "checkout" — clear and easy steps to install the chosen module on their website and have no worries about version compatibility.

Image by: Leslie Glynn, CC BY-SA 4.0


Making it easier for users to browse for great modules also helps the community highlight and recommend these great modules and, in doing so, attract more people to Drupal. At his DrupalCon Portland keynote, Dries Buytaert said:

People don't really know about these things unless they go to an event or hear somebody else talk about it. And we are even losing people because once they install Drupal core they don't know where to go next. We have an opportunity to make this a lot easier and actually to promote some of these great innovations, we can stand behind them. And the solution obviously is the Project Browser.

How Project Browser works

Project Browser provides a user interface (UI) for browsing contributed Drupal modules with a set of useful features to optimize the process. At the OPEN WEB COMMUNITY - Project Browser Initiative: Where We're At session of DrupalCon Prague, the initiative's co-lead, Chris Wells, shared the minimum viable product (MVP) for Project Browser. It includes the following:

  • Enables users to browse for modules compatible with their website's Drupal version

  • Filters modules by category

  • Adds advanced filtering

  • Provides instructions for downloading and installing modules via Composer*

*Composer is the software package manager that ensures all the right software libraries are installed to run Drupal core and other modules.
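For instance, the Composer workflow that these instructions walk you through typically boils down to a couple of commands like the following (the pathauto module and the use of Drush here are illustrative examples, not taken from the demo):

```shell
# Download a contributed module and its dependencies into your codebase.
composer require drupal/pathauto

# Then enable it, for example with Drush:
drush pm:enable pathauto
```

Project Browser's goal is to run steps like these for you behind the scenes.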

However, according to Chris, they are very close to making it possible to download modules automatically for users, with Composer running behind the scenes. For this purpose, they have a separate experimental branch. In other words, work is underway on functionality that will let Project Browser users install modules without touching Composer directly.

The latest demo of Project Browser at DrupalCon Prague by Srishti Bankar, a Project Browser contributor, showed the module in action. The new Browse tab stands out on the Extend page of the Drupal admin dashboard.

Image by: Leslie Glynn, CC BY-SA 4.0

Project Browser's UI gives you suggestions for contributed modules based on filtering and sorting features. The default sorting is by usage, which means that the most popular modules show up first.

Image by: Leslie Glynn, CC BY-SA 4.0

You can filter modules by categories or search by keyword or module name. Project Browser also provides some recommended default filters:

  • Modules that are compatible with the Drupal version your site is running (so you no longer need to worry about versions)

  • Modules that have security coverage

Image by: Leslie Glynn, CC BY-SA 4.0

Once you have chosen the module you would like to install, click the Download button to view the instructions for downloading and installing it via Composer.

Image by: Leslie Glynn, CC BY-SA 4.0

However, as mentioned above, in a separate experimental branch for Project Browser, it's already possible to click the Download or Download and Enable buttons and get the desired module downloaded and installed via the admin UI. The only thing left to do is to configure the newly installed module according to your website's needs.

Image by: Leslie Glynn, CC BY-SA 4.0

Try Project Browser

Project Browser is currently a contributed module with an ambitious goal to become part of the Drupal core. Working on it in the form of a contributed module gives the creators more freedom to play with exciting functionality until everything is fully ready. It is currently available for testing, and you can install it like any other contributed module via Composer. Alternatively, you can click the Try it now button on the project's page and give it a test drive by spinning up a demo on Gitpod.

The module is also open for contributions, and the creators actively encourage getting involved. There are contribution opportunities for all levels of expertise, including non-code contributions. For example, every module needs a short non-technical summary and a logo to be displayed in Project Browser, so absolutely anyone can help, with or without coding skills.

So when might we see Project Browser in the Drupal core? The best way to learn the latest news about the readiness of any planned core functionality is to listen to the latest Driesnote from DrupalCon Prague 2022. Dries mentioned a lot of great progress made on the initiative. He said Project Browser would not be done in time for Drupal 10, but that was never really the plan. However, hopes are that the feature will be included in the Drupal core shortly after Drupal 10 is released — perhaps in 10.2. The exact date is yet to be determined.

Extending Drupal with the most powerful features at the click of a button is an exciting prospect. Drupal deservedly gets plenty of compliments, and Project Browser is another benefit to bolster the title of "the most user-friendly CMS". I look forward to seeing how this rolls out in the wider community.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Experience Linux desktop nostalgia with Rox

By Seth Kenlon, Wed, 12/14/2022

Rox-Filer is an open source file manager for Linux, once intended for the defunct Rox desktop but now a streamlined application for any window manager or desktop. There hasn't been much activity on the Rox project since 2014, and even then it was mostly in maintenance mode. And that's part of Rox-Filer's charm. In a way, Rox-Filer is a snapshot of an old desktop style that was progressive for its time but has given way to a more or less standardized, or at least conventional, interface.

Install Rox-Filer

On Linux, your distribution's software repository may have Rox available for install. For instance, Debian packages it:

$ sudo apt install rox-filer

If your Linux distribution doesn't package Rox-Filer but you want to try it out, you can compile from source by downloading the source code, installing its build dependencies, and then compiling:

$ sudo dnf install gtk2-devel libSM-devel \
  shared-mime-info glade-libs xterm
$ wget
$ unzip rox*zip
$ cd rox-filer-master
$ ./ROX-Filer/AppRun

Configuring Rox

The Rox file manager is based on the look and feel of RISC OS, an operating system developed by Acorn in Cambridge, England (the same group responsible for the popular Arm microprocessor). Today, there's an open source RISC OS you can install on a Raspberry Pi, but for now, Rox is close enough.

Rox has a simple layout. It has no menu bar, but there's a toolbar across the top, and the file panel is at the bottom.

Image by: Seth Kenlon, CC BY-SA 4.0

As with the KDE Plasma Desktop, the default action of a single click in Rox is to open an item, whether it's a folder or a file. Unfortunately, no version of Rox, either packaged or compiled directly from the source, seems to be completely integrated with the mimetype definitions of the modern Linux desktop. For instance, Rox on CentOS renders an error when I click on even a basic text file, while the packaged version of Rox on Debian opens a plain text file but not a JPEG or archive file. You can fix this by setting a Run Action in the right-click context menu.

Image by: Seth Kenlon, CC BY-SA 4.0

A run action can have a broad definition, so you don't have to set a separate run action for JPEG, PNG, WEBP, and every other image type. Instead, set the same run command for all mimetypes starting with image.

Once you set that, you're ready to manage files with Rox.


Navigation

You can navigate through your file system using the arrow icons in the top toolbar. The Up arrow takes you to the parent directory of your current location (in other words, the folder your current folder is in). To descend into a folder, click on it.

Refreshing the view

Rox may not redraw the screen for every action, so sometimes you may need to prompt it to refresh. Click the Circle arrow in the Rox toolbar to refresh your current location's contents.

Copy or move a file

There are two ways to copy or move a file in Rox. First, you can launch a second Rox window and drag and drop files from one window to the other. When you do, you're prompted to copy, move, or link the item you've dropped.

Alternatively, you can right-click an item and open the File submenu from the context menu. In the File submenu, choose Copy and then enter the destination path for the item you want to move or copy. After you've confirmed that the file has successfully been copied to the target location, you can optionally select the item again, choosing Delete from the File menu.

Options

You can customize some aspects of Rox by selecting Options from the right-click menu. This brings up a Rox configuration screen that's admittedly only partially relevant to Rox. The Rox options assume you're running a window manager, like Windowmaker, which provides a traditional dock (or "pinboard" in Rox terminology). I wasn't able to get the pinboard options to work on Fluxbox, my preferred window manager, or Windowmaker. In both cases, the window manager handled iconified windows, and I wasn't able to configure Rox to override that control. It's possible that I wasn't drastic enough in some of my configurations, but considering that Linux window managers are very capable of managing iconified windows, the pinboard mechanism of Rox isn't a vital feature (and probably not as flexible as the window manager's options).

The other options, however, still work as expected. For instance, Rox by default resizes its window to fit the contents of a folder. When you change from a directory containing twelve items to a directory containing just three, Rox shrinks its footprint. I find this jarring, so I chose the Never automatically resize option, forcing Rox to stay at whatever size I set.

Window commands

Some of my favorite features are four menu items hidden away at the bottom of the Window submenu in the right-click context menu. They are:

  • Enter path: Enter an arbitrary path and change directory to it.

  • Shell command: Enter an arbitrary shell command and execute it.

  • Terminal here: Open a terminal at your current location.

  • Switch to terminal: Open a terminal at your current location, and close the Rox window.

I love options that allow for quick navigation or quick commands, so it's nice to have these close at hand.


Rox is a "blast from the past," whether or not you've ever used RISC OS or something like it. Rox represents a style of digital file management and even desktop configuration that just doesn't quite exist anymore. I've run Fluxbox, on and off again, at work and at home for the past decade, and I love manually configuring menus and configuration files. However, most of the Linux desktop has moved on from the conventions Rox relies upon. It's not impossible to make Rox fully functional, but it would take a lot of work, and most of what you'd be configuring is already provided by modern window managers and desktops. Even so, Rox is fun to use and experience. It's a great demonstration of how flexible a traditional Linux desktop setup was (and still can be, if you use only a window manager), and much of its charm is in its simplicity. I can't imagine a file manager today not having a dedicated move function, but Rox dares to force you to copy and delete instead. It's a different kind of file manager, and it might not be the one you use all day every day, but it's something you have to try if you miss, or literally missed, the "old days" of the Linux (or RISC OS) desktop.


Image by: kris krüg


Use Django to send emails with SMTP

By Sofiia Tarhonska, Tue, 12/13/2022

Many applications use the simple mail transfer protocol (SMTP) to deliver email to end users. SMTP can also retrieve messages, though that has never been its primary use case. Open source frameworks like Django, a Python-based web framework, give you more control over sending emails using functions and expressions.

This article shows how to configure an SMTP server and send emails in Django using SMTP.

Project setup and overview

Before proceeding, this tutorial requires a code editor (such as VS Code or Codium) on your preferred device. 

Start by creating a new directory using the command in the terminal:

mkdir exampledirectory

Then change into the directory using the command:

cd exampledirectory

Within the newly created directory, create a virtual environment using the built-in venv module in the command terminal:

python -m venv .virtenv

This command creates a virtual environment within the folder created earlier. To activate it, use the following command in the terminal:

On Linux and Mac:

source .virtenv/bin/activate

On Windows:

.virtenv\Scripts\activate

Creating a Django project

After activating the virtual environment, proceed to install the Django package from pip:

pip install django

Create a new Django project:

python -m django startproject NewEmailProject

This command creates a project with the name NewEmailProject. To run the project, head to the project directory (NewEmailProject) and run the server:

python manage.py runserver

Open the link for the development server in a browser. You see the Django homepage with release notes.

Configuration for sending emails

Next, open the settings.py file (in the NewEmailProject folder) to customize configurations for sending emails using Django.

Scroll to the end of the code and update the file with the following code:

EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 465
EMAIL_USE_SSL = True
EMAIL_HOST_USER = 'your email address'
EMAIL_HOST_PASSWORD = 'your password'

Change the value of EMAIL_HOST depending on your email provider. Here are the usual values for common providers:

  • Gmail: smtp.gmail.com

  • Outlook: smtp-mail.outlook.com

  • Yahoo: smtp.mail.yahoo.com

You can change the EMAIL_PORT or leave 465 as the default.

You can use the secure sockets layer (SSL) or transport layer security (TLS), as both specify how the connection is secured.
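As a sketch of the TLS alternative (the Gmail host is illustrative; port 587 is the conventional STARTTLS submission port, but check your provider's documentation), the settings.py configuration would look like:

```python
# Alternative settings.py sketch: STARTTLS (TLS) instead of implicit SSL.
# Port 587 is the conventional mail submission port for STARTTLS.
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_USE_TLS = True  # set this *or* EMAIL_USE_SSL, never both
```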

To figure out other custom configurations for your email server, check out the full Django Project documentation.

SMTP email backend

The EMAIL_BACKEND setting determines which backend Django uses when sending emails. This variable points to smtp.EmailBackend, which receives all the parameters needed for sending emails. It tells Django to deliver the email to the recipient over SMTP rather than, for example, printing it to the console.
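If you want to test your view logic without a real SMTP server, Django also ships a console backend that prints each outgoing message to standard output. Swapping it in is a one-line settings.py change (a common development trick, not part of this tutorial's setup):

```python
# Development-only sketch: print outgoing email to the console instead of
# connecting to an SMTP server.
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
```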

Sending emails with SMTP

When the environment is set up and settings.py is updated, you can send emails in Django. You can use an HTML form that sends a post request of the necessary information needed for sending an email.

Create a Django application for sending emails:

python manage.py startapp mail

Next, open the settings.py file and add the Django application (mail) to the INSTALLED_APPS list:

INSTALLED_APPS = [
    # ...the default Django apps...
    "mail",
]

Send mail function

In the mail application's views.py file, start by importing EmailMessage and get_connection from django.core.mail:

from django.core.mail import EmailMessage, get_connection

The EmailMessage class is responsible for creating the email message itself. The get_connection() function returns an instance of the email backend specified in EMAIL_BACKEND.

Now create a function that accepts a POST request containing the form data submitted from the client side. Inside it, call get_connection(), which picks up the email configurations created in the project's settings.py file.

Next, import the settings:

from django.conf import settings

This import allows access to the email configurations created in the settings.py file.

Next, create the variables subject, recipient_list, and message, and store in them the corresponding attributes submitted through the HTML form. The email_from variable contains the sender email, which is obtained from EMAIL_HOST_USER in the settings.py file.

After the variables are processed, the EmailMessage class sends the email using its send() method, and then the connection is closed. The send_email() function renders the home.html file containing the email form.

You can create a templates folder within your mail application and store the HTML files within that folder:

from django.core.mail import EmailMessage, get_connection
from django.conf import settings
from django.shortcuts import render

def send_email(request):
   if request.method == "POST":
       # get_connection() uses the EMAIL_* settings from settings.py
       with get_connection() as connection:
           subject = request.POST.get("subject")
           email_from = settings.EMAIL_HOST_USER
           recipient_list = [request.POST.get("email"), ]
           message = request.POST.get("message")
           EmailMessage(subject, message, email_from, recipient_list, connection=connection).send()
   return render(request, 'home.html')

This is a Bootstrap form for composing a message:

<form method="post" action=".">
 {% csrf_token %}
 <div class="mb-3">
     <label for="exampleFormControlInput1" class="form-label">Recipient email address</label>
     <input type="text" class="form-control" name="email" id="exampleFormControlInput1" placeholder="Recipient email address">
 </div>
 <div class="mb-3">
     <label for="exampleInputSubject" class="form-label">Subject</label>
     <input type="text" class="form-control" name="subject" id="exampleInputSubject">
 </div>
 <div class="mb-3">
     <label for="exampleFormControlTextarea1" class="form-label">Message</label>
     <textarea class="form-control" id="exampleFormControlTextarea1" name="message" rows="3"></textarea>
 </div>
 <button type="submit" class="btn btn-primary">Send</button>
</form>

This form sends a post request to the send_email() function. This processes the form data and sends the email to the recipients.

Now open the urls.py file in the NewEmailProject folder to create the homepage URL. Update the urlpatterns list by adding the code path("", send_email) (and import send_email from the mail application).
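Assuming the send_email view lives in the mail app's views.py (the import path here is an assumption about your project layout), the resulting urls.py might look like this sketch:

```python
# NewEmailProject/urls.py -- hypothetical sketch; adjust imports to your layout.
from django.contrib import admin
from django.urls import path

from mail.views import send_email

urlpatterns = [
    path("admin/", admin.site.urls),
    path("", send_email),  # the homepage serves the email form
]
```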

Sending email to multiple recipients

To specify multiple recipients when sending the same email, create a new function called send_emails within the views.py file and modify the send function code:

def send_emails(request):  
    if request.method == "POST":
        with get_connection(  
        ) as connection:  
            recipient_list = request.POST.get("email").split()  
            subject = request.POST.get("subject")  
            email_from = settings.EMAIL_HOST_USER  
            message = request.POST.get("message")  
            EmailMessage(subject, message, email_from, recipient_list, connection=connection).send()  
    return render(request, 'send_emails.html')

For the recipient_list variable, I'm using the Python split() method to convert the recipients' email string to a list so that I can email all of them.
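As a quick illustration of that conversion (the addresses here are made up), split() with no arguments breaks the string on any run of whitespace:

```python
# A space-separated string of addresses becomes a list suitable for
# passing as EmailMessage's recipient_list.
raw = "first@example.com second@example.com"
recipient_list = raw.split()
print(recipient_list)  # ['first@example.com', 'second@example.com']
```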

Next, create another HTML file called send_emails.html in the templates folder and use the same form code for the home.html file within it.

To specify multiple email recipients, use a space between each email address:

You should also update the urlpatterns list by adding the code:

path("send-emails/", send_emails)

Sending HTML emails

You can also send HTML emails with Django using a slightly modified version of the send_email function:

html_message = '''<h1>this is an automated message</h1>'''
msg = EmailMessage(subject, html_message, email_from, recipient_list, connection=connection)
msg.content_subtype = "html"
msg.send()

Sending emails with attachment

To include an attachment, create a variable containing the file path as a string, like this:

attachment = "mail/templates/example.png"

Then, assign the EmailMessage object to a variable and call its attach_file() method, followed by the send() method:

msg = EmailMessage(subject, message, email_from, recipient_list, connection=connection)
msg.attach_file(attachment)
msg.send()

Django email libraries

This guide on sending emails in Django would not be complete if I didn't mention the email libraries that are available to users. Here are some noteworthy email libraries.

  • Django mailer is a Django app for queuing email: it saves messages in a database and sends them out at a designated time.
  • Django templated email aims to send templated emails. It offers features like configurable template naming and location, template inheritance, and adding recipients to the CC and BCC lists.
  • Anymail allows the use of an email service provider (ESP). Built on django.core.mail, it provides a consistent API that avoids tying your code to a single ESP.
Emailing with Django

Sending emails in Django can sound daunting, but it's as simple as creating a virtual environment. You can add Django to the environment and create an email backend. Finally, you can set up an HTML template. Give it a try.


Image by: Ribkahn via Pixabay, CC0


Drupal 10 is worth a fresh look

By Martin Anderson-Clutz, Tue, 12/13/2022

The popular Drupal open source content management system (CMS) reaches a significant milestone when version 10 is released on December 14. Personally, I think Drupal X sounds way cooler, but so far, my calls to name it that haven't gotten much traction. I enlisted the help of my friend Aaron Judd of Northern Commerce to give us a sense of how cool Drupal X could look:

Image by: Aaron Judd of Northern Commerce

What's a Drupal, anyway?

Drupal is an open source CMS and development framework. While other CMS options focus on simple long-form content (think blogs) or entirely free-form content (like in Wix or Squarespace), Drupal has made a name for itself in handling more complex content architectures, in multiple languages, with robust content governance. Drupal sites (like this very site!) benefit from a strong role-based access control (RBAC) system, unlimited custom roles and workflows, and a powerful and extensible media library.

Here's a rundown for anyone who hasn't kept tabs on what's coming in the newest major version.

A fresh face

Most Drupal sites use custom themes to give them a unique look and feel. Still, the initial experience you have when installing a CMS matters. In Drupal, themes define the look and feel of a site, and you can use different themes for public and administrative experiences. Until recently, the Bartik and Seven themes had been the default face of Drupal for more than a decade. To put that in context, when Bartik was released, the most popular browser in the world was Internet Explorer 8. A lot has changed since then, particularly around best practices for building websites.

In fact, a significant change in Drupal 10 will be the removal of support for Internet Explorer (IE), which is itself no longer supported by Microsoft and hasn't seen major updates since 2013. That may not sound like an improvement, but continued support for IE kept the community from adopting modern markup and styling. For example, thanks to being unencumbered by support for legacy browsers, Drupal 10 includes a new responsive grid layout that's so innovative it got a writeup in CSS Tricks.

Image by: Martin Anderson-Clutz, CC BY-SA 4.0

The new faces of Drupal are two brand new themes: Olivero for visitors and Claro for admins. In addition to being fresh and modern designs, both were developed with accessibility as a top priority.

Improvements under the hood

More than a decade ago, the Drupal community decided to "Get Off the Drupal Island." That meant adopting solutions shared across popular projects and frameworks instead of ones developed and maintained exclusively by the Drupal community. Today Drupal leverages a variety of projects and libraries whose names will be familiar to open source developers who have never touched Drupal: Symfony, Composer, CKEditor, Twig, Nightwatch, and more.

That has brought a variety of powerful capabilities to Drupal and allowed it to contribute back to those solutions, benefitting a broader set of developers. It has also become a determining factor for the cadence of Drupal's major version releases.

To illustrate, consider that Drupal 7 was released in early 2011. Drupal 8 was released almost five years later, towards the end of 2015. Drupal 9 was released in June of 2020, with a key motivator being the move to supported versions of underlying dependencies and removing deprecated code. And now, roughly two and a half years later, we're already planning to release Drupal 10. This new major version will leverage updated versions of PHP, Symfony, and Composer, among others.

An all-new editor

An upgrade of particular note is the move to CKEditor 5. Although notionally an incremental update, under the hood CKEditor 5 was completely rewritten, much the same as the transition from Drupal 7 to 8. In addition to a sleeker interface, CKEditor 5 has the potential for exciting new capabilities, such as real-time collaboration. Drupal's CKEditor integration for version 5 has already been augmented with a number of UI enhancements. For example, media placed within content can be configured using an overlaid toolbar ribbon instead of needing to launch a modal dialog to access these settings. Also, the styles dropdown now includes a preview of each type available.

Image by: Martin Anderson-Clutz, CC BY-SA 4.0

A look ahead

Earlier in 2022, Drupal creator and project lead Dries Buytaert announced a focus on "ambitious site builders." This means that while the community will continue to work on making the developer experience better in general, moving forward there is a particular focus on making it easier to create engaging experiences in Drupal without having to write code or use command-line tools. Three strategic initiatives embody this new focus: Automatic Updates, the Project Browser, and Recipes.

Automatic Updates will reduce the total cost of ownership for Drupal sites and help them be more secure by ensuring they always have the latest core security patches. This will be a major benefit for site owners and Drupal development teams everywhere. However, judging by personal experience, Wednesday night pizza sales may take a hit (traditionally, the Drupal security team releases updates on the third Wednesday of the month). There is now a stable release of Automatic Updates as a contrib module. Work has begun to move this into Drupal core, so all Drupal sites will eventually be able to leverage this capability.

The Project Browser makes Drupal sites easier to build, maintain, and evolve by allowing site builders to search and browse through a subset of Drupal's vast catalog of available modules, prefiltered to the site's Drupal version, for security, stability, and more. A site builder can select, download, and install a module without leaving the site's web interface. In fact, there's an "app store" like interface meant to promote the most popular modules available that are compatible with the current site's version of Drupal. While other CMS options have had similar offerings, this advancement means you don't need to sacrifice ease-of-use to take advantage of the power of Drupal. Also, all of the thousands of modules listed are 100% free. 

For many years Drupal has had a concept of distributions. These are opinionated versions of Drupal designed to meet specific use cases such as media publishing, fundraising, intranet portals, and more. While distributions have proven an excellent way to accelerate initial development, in practice, they have been known to require significant work to maintain and create extra work for site owners during maintenance. The Recipes initiative aims to make more granular, composable functionality available when building a site. Want to add a staff directory, events calendar, or locations map to your site? In the future, this will be as easy as installing a recipe and then customizing it to meet your site's specific needs.

It's an exciting time to try Drupal

Drupal 10 is the culmination of work contributed by thousands of dedicated and talented community members worldwide. If you're not already using Drupal, we hope you'll try it out for your next project. There's a common saying among Drupalists: "Come for the code, stay for the community."

Drupal 10 is chock-full of useful features, a fresh look, and a brand-new editor.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Try this Linux web browser as your file manager

Tue, 12/13/2022 - 16:00
Try this Linux web browser as your file manager Seth Kenlon Tue, 12/13/2022 - 03:00

Konqueror is a file manager and web browser for the KDE Plasma Desktop. In many ways, Konqueror defined "network transparency," as it applied to a personal desktop. With Konqueror, you can browse remote network files (including the Internet itself, which really is just a collection of remote files viewed through a fancy lens) just as easily as browsing your local files. Sometimes there was some configuration and setup required, depending on what kind of file share you needed to access. But ultimately, the goal of having instant access to all the data you had permission to view was a reality with Konqueror in ways no other file manager had achieved. And at its peak, the open source web engine it developed (KHTML) was adopted by both Apple and Google, and lives on today as the core library of modern web browsing and, technically, Electron app development.

Today, the KDE Plasma Desktop lists Konqueror as a web browser. Officially, file management has shifted over to Dolphin, but Konqueror is still capable of doing the job. For the full and classic Konqueror experience, you can try the Trinity Desktop Environment (TDE), a fork of KDE 3.x, but in this article I use Konqueror in KDE Plasma Desktop version 5.

Install Konqueror

If you're running KDE Plasma Desktop already, you may already have Konqueror installed. If not, you can install it from your distribution's software repository. On Fedora, CentOS, Mageia, OpenMandriva, and similar:

$ sudo dnf install -y konqueror konqueror-plugins

On Debian, Linux Mint, Elementary, and similar:

$ sudo apt install -y konqueror konqueror-plugins

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Configure Konqueror as a file manager

The most convenient feature of Konqueror is that it's a web browser in addition to being a file manager. Or at least, that's theoretically its most convenient feature. If you're not using Konqueror as a web browser, then you may not want the URL field or the search engine field at the top of every file manager window.

As with most KDE applications, Konqueror is highly configurable. You can reposition and add and remove toolbars, add or remove buttons, and so on.

To adjust what toolbars are displayed, launch Konqueror and go to the Settings menu and select Toolbars Shown. The Main toolbar is probably all you really need for file management. It's the toolbar with navigation buttons on it. However, you may not even need that, as long as you're happy to navigate with keyboard shortcuts or using the Go menu.

Keyboard navigation in Konqueror is the same as in Dolphin:

  • Alt+Left arrow: Back one step

  • Alt+Up arrow: Move to parent directory

  • Alt+Home: Go to home directory

Side panel

To get a side panel with a listing of common folders, press F9 or select Show Sidebar from the Settings menu. This adds a button bar along the left side of the Konqueror window. Click the Home icon to display a file tree of your home directory.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

As the button bar suggests, this side panel can serve many purposes. Instead of your home directory, you can display bookmarked locations, a history of recent locations you've visited, remote filesystems, and more.


Applications

Some people are used to an application menu. It's efficient and quick, and always in the same place. Other people prefer to launch applications from the terminal.

There's yet another way to view application launchers, though. Konqueror's Go menu allows you to go to a meta location called Applications, which lists application launchers, by category, as files in a file manager.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

You can see this in Dolphin, too, by manually typing applications: in the location field, but of the two it's Konqueror that provides a menu option to go there directly.

Network folders

Similarly, Konqueror also provides a menu selection to go to network folders. The greatest network folder of them all is the Internet, but Network Folders is the meta location for network protocols other than HTTP. Most remote locations require some setup because they usually require authentication to access. Most of them can be configured through System Settings, including file systems accessible over Bluetooth, SMB or CIFS, MTP devices, Fish (file system over SSH), and even Google Drive.
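To give a flavor of how these meta locations look in the location field, here are a few examples (the hostnames and paths are hypothetical; the fish:, smb:, and applications: schemes are standard KDE location schemes):

```
fish://user@example.com/home/user
smb://fileserver/documents
applications:/
```

The fish: scheme accesses files over SSH, smb: reaches Windows-style shares, and applications: is the same meta location available from the Go menu.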

Split view

You can split the Konqueror window into panes, allowing you to see two folders at once without opening two windows. There are two split options: a vertical split with one pane on the left and the other on the right, or a horizontal split with one pane above the other.

To split the Konqueror window, go to the Window menu and select either Split View Left/Right or Split View Top/Bottom. Each pane is independent of the other, so you can navigate around in one pane, and then drag and drop files from one to the other.

Conquering your file system

Konqueror isn't just a file manager, and I don't think the developers of the Plasma Desktop expect you to use it as your primary file manager. There's even an option in the File menu to open a location in Dolphin, which indicates that Konqueror is a web browser with a file manager component. But that file manager component is a nice feature to have when you need it. And if you're not a fan of all the features Dolphin offers, Konqueror could be a suitable alternative.

The KDE Plasma Desktop lists Konqueror as a web browser, but it is also a functional Linux file manager.


A sysadmin's guide to Carbonio

Mon, 12/12/2022 - 16:00
A sysadmin's guide to Carbonio Arman Khosravi Mon, 12/12/2022 - 03:00

Carbonio Community Edition (Carbonio CE) is an open source, no-cost email and collaboration platform by Zextras. It provides privacy for organizations seeking digital sovereignty through on-premises, self-hosted servers. Self-hosting offers a deeper level of control over infrastructure and data, but it requires more attention to server configuration and infrastructure management to guarantee data sovereignty. The tasks done by system administrators play an important role here: administration is a crucial part of achieving digital sovereignty, so an admin console dedicated to such tasks becomes extremely valuable for facilitating sysadmins' everyday jobs.

This is why Zextras launched the first release of its own admin panel for Carbonio CE in October 2022. For Carbonio CE system administrators, it is the first step toward an all-inclusive admin console.

In this article, I go into detail about the Carbonio CE Admin Panel and take a deeper look into what it can accomplish.

Image by:

(Arman Khosravi, CC BY-SA 4.0)

What is the Carbonio CE Admin Panel?

The Carbonio CE Admin Panel is designed to assist the Carbonio CE system administrators with the most repetitive and frequent tasks such as user management and domain configuration. It is a browser-based application that runs on a particular port and is available for system administrators to use in production environments as soon as Carbonio CE is installed.

Why do you need the admin panel?

Everything done in Carbonio CE Admin Panel can be done through the command-line interface as well. This raises the question: why might system administrators prefer using the admin panel rather than the command-line interface?

Using the admin panel has its own obvious advantages such as:

  • Making repetitive activities much easier to perform

  • Saving system administrators' time monitoring servers

  • Providing a much easier learning process for junior system administrators

Even though using the admin panel makes administrative tasks easier to perform, there is more to this native user interface for Carbonio CE. In essence, the Carbonio CE Admin Panel gives you the ability to monitor and manage your organization's servers from a single centralized location. Even when you're far away, you can still access the admin panel to check the status of servers and carry out various administrative activities.

Creating and managing user accounts

Managing users has always been one of the most frequently performed sysadmin tasks, if not the most frequent, so it should be an essential part of every administrative GUI. Suppose that you, as the company's system administrator, receive requests from users to edit information on their accounts, for instance, to give them access to certain features. Or perhaps your company has hired new employees, or some employees have left. All of these scenarios require a sysadmin to manage user accounts frequently.

Using the Carbonio CE Admin Panel you can simply go to Domains > select a domain > Accounts and select any account to modify, or press the + button to add a new account.

Image by:

(Arman Khosravi, CC BY-SA 4.0)

Creating and managing mailing lists

Besides creating user accounts, a system administrator is often required to create different mailing lists that reflect the organizational structure of the company. Using mailing lists, users can simply send emails to a group of users by inserting the list address instead of every user address one by one.

Creating mailing lists in Carbonio CE is extremely easy using the admin panel. You need to go to Domains > select a domain > Mailing List > press the + button. You can now use the wizard that opens up to create a mailing list.

Image by:

(Arman Khosravi, CC BY-SA 4.0)

The essential steps to follow are:

  • Insert the name

  • Insert the mailing list address

  • Press NEXT

  • Insert the members

  • Press CREATE.

You can follow the same path to edit existing mailing lists.

Creating and managing domains

Managing domains is another task frequently done by system administrators. As with accounts, creating a domain is very easy in the Carbonio CE Admin Panel. To monitor a domain's status, go to Domains > select a domain > Details, where you'll find the relevant entries. To create a new domain, click the CREATE button on the top bar, select Create New Domain, and insert the necessary information, such as:

  • Domain name

  • Maximum number of accounts and maximum email quota

  • Mail server where the domain is hosted

Image by:

(Arman Khosravi, CC BY-SA 4.0)

Creating and managing mailstore servers

The Carbonio CE Admin Panel allows system administrators to manage different servers present in the infrastructure and provide them with different tools to configure them. To monitor a new mailstore server you can go to Mailstores > Servers List and find all the available mailstore servers in your infrastructure in a list (when just one server is installed, only one server is shown in this area).

Under Server Details, you can select any of the available servers in the list and select Data Volumes to show more details of the storage volumes attached to it. While multiple volumes can be mounted simultaneously, only one primary volume, one secondary volume, and one index volume can be designated as active. You can add new volumes using the NEW VOLUME + button in the same section. You can also change the volume properties simply by clicking on them to open their detail window.

Image by:

(Arman Khosravi, CC BY-SA 4.0)

Creating and managing classes of service

Another scenario that the admin panel facilitates is creating classes of service (COS). After installation, a system administrator might need to create different classes (groups) and assign different properties to them. This way, each user or group of users can later be assigned to a class of service in order to access the features and properties of that specific COS.

Image by:

(Arman Khosravi, CC BY-SA 4.0)

To create a COS simply click on the CREATE button and select Create New COS or alternatively go to COS on the left panel and click on CREATE NEW COS +. You can then insert the name of the COS and define the different services available to this specific class.

To edit a COS, go to COS on the left panel and select a COS from the dropdown menu at the top of the list.

You can define settings like quotas, the mail servers that can host accounts from this COS, or the features enabled for this COS. You can also enable general features like Mail, Calendar, and Contacts. Additional customizable options include Tagging, Out of Office Reply, Distribution List Folders, and so on.

Image by:

(Arman Khosravi, CC BY-SA 4.0)


In this article, you saw a few scenarios in which the Carbonio CE Admin Panel saves you time and effort. The admin panel is an evolution of classical administrative tools into a new, centralized interface that lets you access different functionalities and monitoring tools from the same location.

Carbonio Community Edition is an open source collaboration tool. Here's how to use its admin panel.


A Linux file manager for Vim fans

Mon, 12/12/2022 - 16:00
A Linux file manager for Vim fans Seth Kenlon Mon, 12/12/2022 - 03:00

Ranger is a terminal-based file manager that uses Vim-like keyboard commands. If you're working in a terminal all day, running Sed and Awk commands and using Vim, then you might want a way to manage files without leaving the comforting glow of your amber-on-black screen. There are, of course, the ls and cd commands, but sometimes you want to "walk through" your system, or maybe you want to mimic a graphical experience without the graphics.

Install Ranger

On Linux, you may find Ranger in your Linux distribution's software repository. On Fedora, CentOS, Mageia, and similar:

$ sudo dnf install ranger

On Debian, Elementary, Linux Mint, and similar:

$ sudo apt install ranger

On macOS, use Homebrew or MacPorts.

Using Ranger

If you use Vim, then Ranger is the terminal-based file manager for you. Sure, there's the NERDTree plugin for use while in Vim, but Ranger has all the conveniences of a file manager plus the interface conventions of Vim. If you don't know Vim (yet), then Ranger can serve as a nice introduction to the way Vim is operated.

Launch Ranger from a terminal:

$ ranger

Your terminal is now the Ranger interface, and by default, it lists the contents of your current directory.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

By default, Ranger uses a three-column layout. From left to right:

  • The contents of the parent directory.

  • The contents of the current directory.

  • A preview of the selected file or directory.

The basic interactions with Ranger can be performed with either your arrow keys or with the classic Vim navigation controls of the hjkl keys.

  • Up or k: Move to the previous item in a list, making it the active selection.

  • Down or j: Move to the next item in a list, making it the active selection.

  • Right or l: Move into a directory or open a file.

  • Left or h: Move to the parent directory.

If all you need to do is search for a file and open it, then you now know as much about Ranger as you need to know.

Of course, managing files is more than just navigating and opening files. The most common file management tasks have single-key shortcuts assigned to them, and you can view a complete list by pressing ? and then K for key bindings.

Select a file

You can select (or "mark," in Ranger terminology) a file or directory three different ways.

First, whatever happens to be highlighted in the Ranger window is considered the active selection. By default, any action you take is performed on your active selection. When you first launch Ranger, the active selection is probably the Desktop folder, which is usually at the very top of the list in your home directory. If you press the Right arrow, then you move into the Desktop folder, and whatever item is at the top of that list becomes the active selection. This is the keyboard version of clicking on a file or folder in a graphical file manager.

The second way is to mark several files at once. To mark your current selection, press the Spacebar. The item you've marked is indented one space and changes color, and your cursor moves to the next item in the list. You can press Space again to mark that item, or move to a different item and press Space to add it to your marked selection. This is the keyboard version of drawing a selection box around several items in a graphical file manager.

The third way is to select all items in a folder. To select everything, press v on your keyboard. To unselect everything, press v again.

Copy a file

To copy (or "yank," in Ranger terminology) your current selection (whether it's a single file or several files), press y twice in a row (yy).

To paste a file that you've copied, navigate to your target directory and press the p key twice in a row.

Move a file

To move a file from one location to another, press d twice in a row. Then move to your target location and press p twice.


Command mode

In Ranger as in Vim, you can drop out of the normal mode of interaction and enter a command by pressing the : key.

For instance, say you've descended deep into a series of directories and subdirectories, and now you want to get to your Zombie_Apocalypse folder quickly. Press : and then type cd ~/Zombie_Apocalypse and press Return. You're instantly taken to the ~/Zombie_Apocalypse folder.

There are lots of commands to choose from, and you can see them all by pressing ? and then c for commands.


Tabs

Ranger is a tabbed interface, just like your web browser. To open a new tab, press Ctrl+N. A tab number appears in the top right of the Ranger interface. To move between tabs, press Alt along with the number of the tab you want to switch to.

Split window

While the default view of Ranger is hierarchical columns, you can use an alternate view that splits the Ranger interface into two (or more) panels. This is useful for moving files from one directory to another quickly, or for comparing the contents of directories.

To split Ranger's interface, you change the view mode to multipane. This is done with the command set. To run a command in Ranger, you press : and then type your command:

:set viewmode multipane

When you split the Ranger interface, you're actually expanding tabs into one view. Instead of columns, you're now looking at two or more distinct single columns.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Instead of moving between these columns with Right and Left arrows (or h and l), you switch between tabs with Alt and a number. For instance, if you have three tabs open as split panes, the left column is Alt+1, the middle is Alt+2, and the right column is Alt+3.

A file manager for Vim users

Ranger is an obvious choice for you if you're a longtime Vim user, or if you just want to immerse yourself in the Vim style of working. The transition from Vim to Ranger and back again is nearly seamless, so fire up your favorite multiplexer and launch Ranger and Vim for easy access.

Use the Ranger file manager without leaving the comfort of your Linux terminal.


Simplify your Linux PC with the PCManFM file manager

Sun, 12/11/2022 - 16:00
Simplify your Linux PC with the PCManFM file manager Seth Kenlon Sun, 12/11/2022 - 03:00

The PCMan File Manager, or PCManFM for short, is a fast and lightweight file manager that's full of features. It was developed for the LXDE desktop environment, but is a stand-alone application and can be used with the desktop or window manager of your choice.

Install PCManFM

On Linux, you can probably find PCManFM in your software repository. For instance, on Fedora, Mageia, and similar:

$ sudo dnf install pcmanfm

On Debian, Elementary, and similar:

$ sudo apt install pcmanfm

Image by:

(Seth Kenlon, CC BY-SA 4.0)

PCManFM doesn't have to replace your desktop's file manager, but some distributions assume that when you install a "third party" file manager, you want it to take precedence over the default. Depending on the desktop you're using, there are different ways of setting your default file manager. Usually, it's in System Settings under Default Applications.

If your desktop environment or window manager has no interface to select your default applications, you can set your preference in the ~/.local/share/applications/mimeapps.list file. To designate a file manager as default, place it at the top of the [Default Applications] section, first specifying the file type and then the name of the application file (as it appears in /usr/share/applications) you want to use to open it:
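For example, assuming the desktop file is named pcmanfm.desktop (as it is in most distributions), an entry making PCManFM the default handler for folders might look like this:

```ini
[Default Applications]
inode/directory=pcmanfm.desktop
```

Alternatively, the xdg-mime command-line tool sets the same association: xdg-mime default pcmanfm.desktop inode/directory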


If you're a fan of GNOME 2 or the Mate project's Caja file manager, then PCManFM is a great alternative to consider. PCManFM is a lot like Caja in design, but it's not bound to the desktop in the way Caja is, so it's available even on the latest GNOME desktop.

The default layout of PCManFM has a helpful toolbar near the top of the window, a side panel providing quick access to common directories and drives, and a status bar with details about your current selection. You can hide or show any of these elements using the View menu.

Tabs and panels

PCManFM also uses tabs. If you've never used a tabbed file manager before, then think of a web browser and how it uses tabs to let you open multiple web pages in just one window. PCManFM can similarly open several directories in the same window.

To transfer a file or folder from one tab to another, just drag the file's icon to the tab and hover. After a nominal delay, PCManFM brings the target tab to the front so you can continue your drag-and-drop operation. It takes some getting used to if you're not used to interacting with tabs in a file manager, but it doesn't take long, and it's a very powerful feature for decluttering your workspace.

Another nice feature of the PCManFM interface is its ability to split a window into two panels. Each panel is effectively a tab, but each one only takes up half of the window.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

This makes dragging from one to the other just as easy and natural as dragging a file into a folder. I find it useful for comparing the contents of folders, too.

File management with PCMan

PCManFM is a great little file manager with all the basic features you need on an everyday basis. It's a natural replacement for a file manager you might find too complicated, as well as a great option for old computers that might struggle with a file manager that's constantly drawing thumbnails, refreshing, and generating animations. PCManFM focuses on the core task of a file manager: managing files. Try it out on your Linux PC.

PCMan File Manager is a great option for making old computers feel more efficient.

Image by:

LSE Library. Modified by CC BY-SA 4.0


How to use the Linux file manager for GNOME 2

Sat, 12/10/2022 - 16:00
How to use the Linux file manager for GNOME 2 Seth Kenlon Sat, 12/10/2022 - 03:00

Before GNOME 3 there was (unsurprisingly) GNOME 2, which had gained an ardent fanbase during its reign as one of the common default Linux desktops. The Mate project (named after the yerba mate plant) began as an effort to continue the GNOME 2 desktop, at first using GTK 2 (the toolkit GNOME 2 was based upon) and later incorporating GTK 3. Today, Mate delivers a familiar desktop environment that looks and feels exactly like GNOME 2 did, using the GTK 3 toolkit. Part of that desktop is the Caja file manager, a simple but robust application that helps you sort and organize your data.

Install Caja

Caja isn't exactly a stand-alone application. It's tightly coupled to the Mate desktop, so to try it you must install Mate.

You may find Mate included in your Linux distribution's software repository, or you can download and install a distribution that ships Mate as its default desktop. Before you do, though, be aware that it's meant to provide a full desktop experience, so many Mate apps are installed along with the desktop. If you're running a different desktop, you may find yourself with redundant applications (two PDF readers, two media players, two file managers, and so on). To evaluate Caja without making major changes to your computer, install a Mate-based distribution in a virtual machine using GNOME Boxes.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Clear layout

The thing that you're likely to notice first about Caja is its clear and direct layout. There's a toolbar across the top of the Caja window with buttons for common tasks. I love this kind of design. Function isn't hidden away in a right-click menu, nor discoverable only after an action, nor buried in a menu. The "obvious" actions for the window are listed right across the top.

Under the main toolbar is the location bar. This displays your current path, either as a series of buttons or as editable text. Use the Edit button to the left of the path to toggle whether it's editable or not.


For longtime users of GNOME 2 or Caja, the main toolbar can be redundant, especially once you know the keyboard shortcuts to invoke common actions. That's why the Caja interface is configurable. You can disable major components of the Caja window from the View menu, including:

  • Main toolbar

  • Location bar

  • Side panel

  • Extra panel

  • Status bar

In short, you can make Caja as minimal as you want it to be.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Tag your folders

Some people are "visual" people. They like to organize files and folders according to how they perceive their data, rather than how the computer interprets it. For instance, if the two most significant folders for you are Music and Work, it can be hard to convince a computer that there's any relationship between them. Alphabetically, plenty of other folders fall between the two, and the contents of each may be completely different (media files in one, spreadsheets in the other).

Caja offers some assistance.

With Caja, you can place directories manually within a window, and Caja remembers that placement. What's more, Caja has a variety of emblems available for you to use as visual labels. You can find them in the Edit menu, in Backgrounds and Emblems. Drag and drop them onto files and folders to help them stand apart.

Image by:

(Seth Kenlon, CC BY-SA 4.0)


As file managers go, Caja is one of the most inviting. It's configurable enough to appeal to many different use cases, and in those configuration options, you're likely to find a workflow that works for you. If you're a fan of GNOME 2, then you're sure to find Caja familiar, and if you've never used GNOME 2 then you might just find your new favorite desktop in Mate.

If you're a fan of GNOME 2, then you're sure to find Caja familiar, and if you've never used GNOME 2 then you might just find your new favorite desktop in Mate.

Image by:

Gunnar Wortmann via Pixabay. Modified by CC BY-SA 4.0.


Install open source solar power at home

Fri, 12/09/2022 - 16:00
Install open source solar power at home Joshua Pearce Fri, 12/09/2022 - 03:00

You might have already given some thought to powering your home with solar. Solar photovoltaic panels, which convert sunlight directly into electricity, have fallen so far in cost that they make economic sense almost everywhere. That is why large companies have installed a lot of solar, and even the electric utilities have started building massive solar farms: it simply costs less than antiquated fossil fuels. Like most homeowners, you would like to save money and eviscerate your electric bill, but you are probably cringing a bit at the upfront cost. To get a rough idea of that cost, a 5kW system that would power an average home, installed at $3/W, would cost about $15,000, while a larger home might need 10kW to offset all of its electricity purchases, at a cost of $30,000. If you want batteries, double the cost. (You don't need batteries, as most solar arrays connect to the grid, but if the grid goes down, so does your solar array until the grid comes back.) Prepaying for decades of electricity is a serious investment, even if it saves you a lot of money in the long run.

There is some good financial news. First, both the US and Canada have enacted a 30% tax credit for solar, which drops the price to about $2/W. Second, a previous article discussed how you can get a free book, To Catch the Sun, that walks you through designing your own system (you will still need a certified electrician and inspections to attach it to the grid). If you are a little handy, you can cut the remaining cost by about 50%. These costs are primarily for materials, including solar panels, wiring, electronics, and racking. Amazingly, solar panel costs have dropped so low that for small solar systems (like the ones for your house), the racking (the mechanical structures that hold the solar panels up) can cost more than the panels!
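As a rough, back-of-the-envelope sketch of the arithmetic above (using the article's example figures, not a quote for any real system):

```shell
#!/bin/sh
# Hypothetical cost estimate for a residential solar array.
size_w=5000        # 5 kW system, in watts
cost_per_w=3       # installed cost, in dollars per watt
credit_pct=30      # US/Canada solar tax credit, in percent

gross=$(( size_w * cost_per_w ))              # total installed cost
net=$(( gross * (100 - credit_pct) / 100 ))   # cost after the tax credit

echo "Gross cost: \$${gross}"    # Gross cost: $15000
echo "After credit: \$${net}"    # After credit: $10500
```

Swap in your own system size and local $/W figure to get a first-order estimate before talking to an installer.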

Open source to the rescue again

Applying the open source development paradigm to software results in faster innovation, better products, and lower costs. The same is true of open source hardware—and even in the relatively obscure area of photovoltaic racking. Nearly all commercial photovoltaic racking is made from proprietary odd aluminum extrusions. They cost a lot of money. If you have a bit of unshaded backyard, you have a few open source racking solutions to choose from.

Open source solar rack designs

The first DIY solar rack design meets the following criteria: (1) made from locally accessible renewable materials, (2) a 25-year lifetime to match solar warranties, (3) able to be fabricated by average consumers, (4) able to meet Canadian structural building codes (if you live where there is no snow this is a bit of overkill, but, hey, you might have other weather extremes like hurricanes to deal with), (5) low cost, and (6) shared under an open source license. The open source, wood-based, fixed-tilt, ground-mounted bifacial photovoltaic rack design works throughout North America. The racking system saves 49% to 77% compared to commercial proprietary racking. The design, however, is highly dependent on the cost of lumber, which varies worldwide.

Check your local cost of wood before you dive into this open source design.

Image by:

(Joshua Pearce, CC BY-SA 4.0)

If you are even more adventurous, you might consider this second design, which allows you to change the tilt angle. The results of the second study show that racking systems with an optimal variable seasonal tilt angle have the best lifetime energy production, with 5.2% more energy generated compared to the fixed-tilt system (or 4.8% more energy if limited to a maximum tilt angle of 60°). Both fixed and variable wooden racking systems show similar electricity costs, which are only 29% of those of proprietary commercial metal racking. The variable-tilt rack provides the lowest-cost option even when modest labor costs are included, and it may provide specific advantages for applications such as agrivoltaics (you can garden underneath the panels and, amazingly, get increased yields for shade-tolerant crops like lettuce). This design has been certified by OSHWA under the CERN-OHL-S-2.0 license.
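
To make the study's percentages concrete, here is a quick back-of-the-envelope comparison. The 5 kW array size, 1200 kWh/kW annual yield, and 25-year life are illustrative assumptions of mine; the 5.2% and 29% figures come from the study above:

```python
# Compare lifetime output and racking cost: fixed tilt vs seasonal tilt.
# Assumed: a 5 kW array yielding 1200 kWh per kW per year over 25 years.
SIZE_KW, YIELD_KWH_PER_KW_YR, YEARS = 5, 1200, 25

fixed_kwh = SIZE_KW * YIELD_KWH_PER_KW_YR * YEARS
variable_kwh = fixed_kwh * 1.052          # 5.2% more with seasonal tilt

print(fixed_kwh)                # 150000 kWh over the rack's life
print(variable_kwh - fixed_kwh) # 7800.0 extra kWh from adjusting the tilt
print(f"wood racking cost: {0.29:.0%} of commercial")
```

Under those assumptions, seasonal tilting buys several thousand extra kilowatt-hours over the rack's lifetime while the racking itself still costs far less than commercial metal.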


Image by:

(Joshua Pearce, CC BY-SA 4.0)

Each of the two PV module racks shown holds about 1kW of panels, so a house would need about five of them. Both papers provide full calculations and step-by-step build instructions.

As anyone with a solar system will tell you, getting a negative electricity bill is pretty rewarding. This happens if you size your system to meet all of your load and live in a net-metered part of the country. Please note that the electric utilities don’t pay you; the credit carries over until you use it in the winter.

Have fun with a little open source solar!

Check out these two open source designs for solar power wood racks you can build for your home.

Image by:

Photo by Jason Hibbets

Sustainability Hardware What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

A Linux file manager for Emacs fans

Fri, 12/09/2022 - 16:00
Seth Kenlon

In 2009, I was working hard at a startup in Pittsburgh, and in the late evenings of coding, I developed a GNU Emacs habit. The thing about Emacs is that it's just too versatile to close. Whether you're writing code, writing articles about open source, jotting down a task list, or even playing music, you can do it all from within Emacs. And every time you think you've found a task outside of Emacs, you discover an Emacs mode to prove you wrong. One of my favorite reasons to not close Emacs is its file manager, called directory editor or just Dired.

Install GNU Emacs

Dired is included with Emacs, so there's no install process aside from installing Emacs itself.

On Linux, you can find GNU Emacs in your distribution's software repository. On Fedora, CentOS, Mageia, and similar:

$ sudo dnf install emacs

On Debian, Linux Mint, Elementary, and similar:

$ sudo apt install emacs

On macOS, use Homebrew or MacPorts.

For Windows, use Chocolatey.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

File management with Dired

Dired mode is a text-based file management system. It can run in the graphical version of Emacs or in the terminal version of Emacs, making it flexible, lightweight, and approved for use during a zombie apocalypse.

To launch it, press Ctrl+X and then d. You're prompted in the mini buffer (the field at the bottom of the Emacs window) for the directory you want to open. It defaults to your home directory (~).

total used in directory 40 available 88.1 GiB
drwx------. 17 tux  tux  4096 Sep 20 15:15 .
drwxr-xr-x.  5 root root   42 Sep 14 05:29 ..
-rw-------.  1 tux  tux   938 Sep 20 15:28 .bash_history
-rw-r--r--.  1 tux  tux    18 Nov  6  2021 .bash_logout
-rw-r--r--.  1 tux  tux   141 Nov  6  2021 .bash_profile
-rw-r--r--.  1 tux  tux   492 Nov  6  2021 .bashrc
drwxr-xr-x. 16 tux  tux  4096 Sep 20 14:23 .cache
drwx------. 16 tux  tux  4096 Sep 20 14:51 .config
drwxr-xr-x.  2 tux  tux    59 Sep 20 15:01 Desktop
drwxr-xr-x.  2 tux  tux     6 Sep 15 15:54 Documents
drwxr-xr-x.  3 tux  tux   166 Sep 20 15:12 Downloads
-rw-r--r--.  1 tux  tux   334 Oct  5  2021 .emacs
drwx------.  2 tux  tux     6 Sep 20 14:25 .emacs.d
-rw-------.  1 tux  tux    33 Sep 20 15:15 .lesshst
drwx------.  4 tux  tux    32 Sep 15 15:54 .local
drwxr-xr-x.  6 tux  tux    81 Sep 15 16:03 .mozilla
drwxr-xr-x.  2 tux  tux     6 Sep 15 15:54 Music
drwxr-xr-x.  2 tux  tux    59 Sep 20 14:52 Pictures

The file listing provided looks familiar to anyone accustomed to ls -l in a terminal. From left to right:

  • Identifies the entry as a directory, if applicable, and then lists the file permissions

  • The number of hard links to the entry (for example, the Desktop entry has 1 hard link representing itself, and 1 file in it)

  • User

  • Group

  • Disk space used, in bytes

  • Time last modified

  • File name
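
The field breakdown above can be checked programmatically. A minimal sketch, splitting one line of the listing into its columns (whitespace splitting is a simplification that breaks on file names containing spaces):

```python
# Split one line of the Dired/ls -l listing above into its fields.
line = "drwxr-xr-x.  2 tux  tux    59 Sep 20 15:01 Desktop"

mode, links, user, group, size, *mtime, name = line.split()

print(mode.startswith("d"))   # True: the entry is a directory
print(links)                  # 2 hard links
print(user, group)            # tux tux
print(size, "bytes")          # 59 bytes
print(" ".join(mtime), name)  # Sep 20 15:01 Desktop
```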

Navigation

To navigate Dired, you can use either the arrow keys or standard Emacs key bindings. For this article, I use Emacs notation: C- for Ctrl and M- for Alt or Meta.

  • C-p or Up arrow: Previous entry in list

  • C-n or Down arrow: Next entry in list

  • Enter or v: Descend into the selected directory

  • ^: Move "up" the directory tree to the current directory's parent

Refreshing the view

Dired doesn't redraw the screen for every action, so sometimes you may need to prompt it to refresh. Press g to redraw a Dired listing.

Open a file

One of the reasons you use a file manager is to find a file and then open it. Emacs can't open every file type, but you might be surprised at just how much it can handle. Then again, not everything it can handle is necessarily useful to you. For instance, it's nice that Emacs can open a JPEG but I rarely view a JPEG in Emacs, and I certainly don't use it to edit a JPEG.

Assuming you're considering the types of files you find Emacs useful for, you can open them directly from Dired. That includes text files (Asciidoc, Markdown, HTML, CSS, Lua, Python, and so on) as well as compressed TAR archives.

To close a file that you've opened, use the C-x C-k Emacs binding to invoke the kill-buffer function.

Copy a file

To copy a file from one directory to another, press C (that's the capital letter C, not the Ctrl key). You're prompted to provide a destination directory and file name in the mini buffer at the bottom of the Emacs window.

Move a file

Moving a file is, confusingly, called renaming a file (the opposite of the terminology used in Linux, where renaming a file is actually done by moving it). I've used Dired for years and I still fail to remember this linguistic quirk.

To rename a file, whether you're renaming it back into its current directory or renaming it to some other directory, press R (capital R). You're prompted to provide a destination directory and a new file name in the mini buffer at the bottom of the Emacs window.

Selecting files

There are a few ways to mark selections in Dired. The first is to have your cursor on the same line as a file or directory entry. If your cursor is on the same line as an entry, then that entry is considered the implicit selection. Any action you take in Dired that targets a file targets that one. This includes, incidentally, "marking" a file as selected.

To mark a file as selected, press m while your cursor is on its line. You can mark as many files as you want, and each one is considered selected. To deselect (unmark) a file, press the u key.

Yet another way to select multiple lines at once is to use a specialized selection function. Dired has several, including dired-mark-directories to mark all directories in a list, dired-mark-executables to select all binary executables in a list, dired-mark-files-regexp to mark files matching a regex pattern, and more. If you're not a regular Emacs user, this is considered advanced because it requires you to invoke Emacs functions, but here's how to do it and what to look for.

Suppose you want to select all directories in a list:

  1. Press M-x to activate the mini buffer prompt.

  2. Type dired-mark-directories and press Return on your keyboard.

  3. Look at the mini buffer. It tells you how many directories have been marked, and then it tells you that you can invoke this function again in the future with the * / key combination.

Any function in GNU Emacs that has a key binding associated with it reveals the keys to you after you've invoked it in its long form.

Creating an archive

To create an archive of a file or a selection of files, press c (that's a lower-case c, not Ctrl). If you have nothing selected (or "marked" in Emacs terminology), then the file on the current line is compressed. If you have files marked, then they're compressed into a single archive. In the mini buffer at the bottom of the Emacs window, you're prompted for a file name and path. Luckily, Emacs is a smart application and derives the target file type from the name you provide. If you name your archive example.tar.xz, then Emacs creates a TAR archive with lzma compression, but if you name it example.zip, then it creates a ZIP file.

Cancel an action

Should you accidentally invoke a function you don't want to finish, press C-g (that's Emacs notation for Ctrl+G). Depending on where you are in the course of the function, you may have to press C-g in the mini buffer specifically to stop it from prompting you to continue. This is true for Emacs as a whole, so learn this valuable trick in Dired and carry it over to every mode you use.

Emacs is always open

To quit Dired, you press C-x C-k to kill the Dired buffer, just as you kill any Emacs buffer.

To quit Emacs altogether, press C-x C-c.

Dired is a very capable file manager, and I've only covered the basics here. For a full list of what Dired can do, press the h key.

I think Dired is probably most useful to those using or intending to use Emacs. I probably wouldn't choose it as a general-purpose file manager on a graphical system, because there are so many great alternatives already configured to work with the rest of the system when opening files. Of course, Emacs is infinitely configurable, so if you really enjoy Dired you can set it to do whatever you want it to do.

For a headless system, though, I find that Dired makes a great file manager. Emacs is such a robust operating environment as it is, and Dired only adds to its versatility. With Emacs open, you have a built-in file manager, shell, multiplexer, text editor, and file previewer. You could very nearly use Emacs as your login shell.

Dired is a good text-based file manager, and well worth a look.

Dired is included with Emacs and is a useful text-based file manager on Linux.


7 pro tips for using the GDB step command

Thu, 12/08/2022 - 16:00
Alexandra

A debugger is software that runs your code and examines any problems it finds. GNU Debugger (GDB) is one of the most popular debuggers, and in this article, I examine GDB's step command and related commands for several common use cases. Step is a widely used command, but there are a few lesser-known things about it that can be confusing. There are also ways to step into a function without using the step command itself, such as the lesser-known advance command.

No debugging symbols

Consider a simple example program:

#include <stdio.h>

int num() {
    return 2;
}


void bar(int i) {
    printf("i = %d\n", i);
}


int main() {
    bar(num());
    return 0;
}

If you compile without debugging symbols first, set a breakpoint on bar, and then try to step within it, GDB reports that it has no line number information:

gcc exmp.c -o exmp
gdb ./exmp
(gdb) b bar
Breakpoint 1 at 0x401135
(gdb) r
Starting program: /home/ahajkova/exmp
Breakpoint 1, 0x0000000000401135 in bar ()
(gdb) step
Single stepping until exit from function bar,
which has no line number information.
i = 2
0x0000000000401168 in main ()

Stepi

It is still possible to step inside a function that has no line number information, but you should use the stepi command instead. Stepi executes just one machine instruction at a time. When using GDB's stepi command, it's often useful to first run display/i $pc. This causes the program counter value and corresponding machine instruction to be displayed after each step:

(gdb) b bar
Breakpoint 1 at 0x401135
(gdb) r
Starting program: /home/ahajkova/exmp
Breakpoint 1, 0x0000000000401135 in bar ()
(gdb) display/i $pc
1: x/i $pc
=> 0x401135 <bar+4>: sub $0x10,%rsp

In the above display command, the i stands for machine instructions and $pc is the program counter register.

It can be useful to use info registers and print some register contents:

(gdb) info registers
rax 0x2 2
rbx 0x7fffffffdbc8 140737488346056
rcx 0x403e18 4210200
(gdb) print $rax
$1 = 2
(gdb) stepi
0x0000000000401139 in bar ()
1: x/i $pc
=> 0x401139 <bar+8>: mov %edi,-0x4(%rbp)

Complicated function call

After recompiling the example program with debugging symbols you can set the breakpoint on the bar call in main using its line number and then try to step into bar again:

gcc -g exmp.c -o exmp
gdb ./exmp
(gdb) b exmp.c:14
Breakpoint 1 at 0x401157: file exmp.c, line 14.
(gdb) r
Starting program: /home/ahajkova/exmp
Breakpoint 1, main () at exmp.c:14
14 bar(num());

Now, let's step into bar():

(gdb) step
num () at exmp.c:4
4 return 2;

The arguments of a function call need to be processed before the actual call, so num() executes before bar() is called. But how do you step into bar, as intended? Use the finish command and then step again:

(gdb) finish
Run till exit from #0 num () at exmp.c:4
0x0000000000401161 in main () at exmp.c:14
14 bar(num());
Value returned is $1 = 2
(gdb) step
bar (i=2) at exmp.c:9
9 printf("i = %d\n", i);

Tbreak

The tbreak command sets a temporary breakpoint. It's useful for situations where you don't want a permanent breakpoint. For example, if you want to step into a complicated call like f(g(h()), i(j()), ...), you would otherwise need a long sequence of step/finish/step to get into f. Setting a temporary breakpoint and then using continue helps avoid such sequences. To demonstrate this, set the breakpoint on the bar call in main as before, then set a temporary breakpoint on bar. Being temporary, it is automatically removed after being hit:

(gdb) r
Starting program: /home/ahajkova/exmp
Breakpoint 1, main () at exmp.c:14
14 bar(num());
(gdb) tbreak bar
Temporary breakpoint 2 at 0x40113c: file exmp.c, line 9.

After hitting the breakpoint on the call to bar and setting a temporary breakpoint on bar, you just need to continue to end up in bar.

(gdb) continue
Temporary breakpoint 2, bar (i=2) at exmp.c:9
9 printf("i = %d\n", i);

Disable command

Alternatively, you could set a normal breakpoint on bar, continue, and then disable this second breakpoint when it's no longer needed. This way you achieve the same result as with tbreak, at the cost of one extra command:

(gdb) b exmp.c:14
Breakpoint 1 at 0x401157: file exmp.c, line 14.
(gdb) r
Starting program: /home/ahajkova/exmp
Breakpoint 1, main () at exmp.c:14
14 bar(num());
(gdb) b bar
Breakpoint 2 at 0x40113c: file exmp.c, line 9.
(gdb) c
Breakpoint 2, bar (i=2) at exmp.c:9
9 printf("i = %d\n", i);
(gdb) disable 2

The info breakpoints command displays n under Enb, which means the breakpoint is disabled, but you can enable it again later if it's needed:

(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y 0x0000000000401157 in main at exmp.c:14
breakpoint already hit 1 time
2 breakpoint keep n 0x000000000040113c in bar at exmp.c:9
breakpoint already hit 1 time
(gdb) enable 2
(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y 0x000000000040116a in main at exmp.c:19
breakpoint already hit 1 time
2 breakpoint keep y 0x0000000000401158 in bar at exmp.c:14
breakpoint already hit 1 time

Advance location

Another option is the advance command. Instead of tbreak bar; continue, you can simply do advance bar. This command continues running the program up to the given location.

The other cool thing about advance is that if the location that you try to advance to is not reached, GDB will stop after the current frame's function finishes. Thus, execution of the program is constrained:

Breakpoint 1 at 0x401157: file exmp.c, line 14.
(gdb) r
Starting program: /home/ahajkova/exmp
Breakpoint 1, main () at exmp.c:14
14 bar(num());
(gdb) advance bar
bar (i=2) at exmp.c:9
9 printf("i = %d\n", i);

Skipping a function

Yet another way to step into bar while avoiding num is the skip command:

(gdb) b exmp.c:14
Breakpoint 1 at 0x401157: file exmp.c, line 14.
(gdb) skip num
Function num will be skipped when stepping.
(gdb) r
Starting program: /home/ahajkova/exmp
Breakpoint 1, main () at exmp.c:14
14 bar(num());
(gdb) step
bar (i=2) at exmp.c:9
9 printf("i = %d\n", i);

To see which functions are currently skipped, use info skip. The num function is marked as enabled for skipping by y:

(gdb) info skip
Num Enb Glob File RE Function
1 y n <none> n num

If a skip is not needed anymore, it can be disabled (and re-enabled later) or deleted altogether. You can add another skip, disable the first one, and then delete them all. To disable a particular skip, specify its number; if no number is specified, every skip is disabled. Enabling and deleting work the same way:

(gdb) skip bar
(gdb) skip disable 1
(gdb) info skip
Num Enb Glob File RE Function
1 n n <none> n num
2 y n <none> n bar
(gdb) skip delete
(gdb) info skip
Not skipping any files or functions.

GDB step command

Using GDB's step command is a useful tool for debugging your application. There are several ways to step into even complicated functions, so give these GDB techniques a try next time you're troubleshooting your code.


Image by:

Pixabay, testbytes, CC0


Our favorite markup languages for documentation

Thu, 12/08/2022 - 16:00

Documentation is important for so many reasons. Readable documentation is even more so. In the world of open source software, documentation is how you learn to use or contribute to an application. It's like the rulebook for a game.

There are many different types of documentation:

  • Tutorials

  • How-to guides

  • Reference guides

  • Software architecture

  • Product manuals

We asked some of the contributors about their technical documentation workflow, which markup language they preferred, and why they might use one over the other. Here's what they had to say.


For the past several years, Markdown has been my standard language. But recently I decided to give AsciiDoc a try. The syntax is not difficult and Gedit on my Linux desktop supports it. I plan to stick with it for a while.

Alan Formy-Duval

In terms of low-syntax markup, I prefer AsciiDoc. I like it because its conversion process is consistent and predictable, with no surprise "flavor" variations to confuse things. I also love that it outputs to Docbook, which is a markup-heavy syntax that I trust for longevity and flexibility.

But the "right" choice tends to be what a project is already using. I wouldn't write in AsciiDoc if a project uses Strawberry-flavored Markdown. Well, to be fair, I might write in AsciiDoc and then convert it to Strawberry-flavored Markdown with Pandoc.

I do think there is a time and place for Markdown. I do find it more readable than AsciiDoc. Links in AsciiDoc: https://example.com[Example website]

Links in Markdown: [Example website](https://example.com)

The Markdown syntax is intuitive, delivering the information in the same way that I think most of us parse the same data when reading HTML ("Example website…oh, that's blue text, I'll roll over it to see where it goes…it goes to example.com").

In other words, when my audience is a human reader, I do often choose Markdown because its syntax is subtle but it's got enough of a syntax to make conversion possible, so it's still an OK storage format.

AsciiDoc, as minimal as it is, just looks scarier.

If my audience is a computer that's going to parse a file, I choose AsciiDoc.

Seth Kenlon
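
The two link syntaxes contrasted above differ only mechanically, which is why tools like Pandoc can translate between them. As a toy sketch of just that one transformation (the regex and function name are mine, not any contributor's workflow):

```python
import re

# Rewrite Markdown links [text](url) into AsciiDoc links url[text].
# A toy illustration only; real conversions should use a tool like Pandoc.
def md_links_to_asciidoc(text):
    return re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r"\2[\1]", text)

print(md_links_to_asciidoc("See [Example website](https://example.com)."))
# prints: See https://example.com[Example website].
```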

reStructuredText

I'm a big fan of docs as code and how it brings developer tools into the content workflows. It makes it easier to have efficient reviews and collaboration, especially if engineers are contributors. 

I'm also a bit of a markup connoisseur, having written whole books in AsciiDoc for O'Reilly, a lot of Markdown for various platforms, including a thousand posts on my blog. Currently, I'm a reStructuredText convert and maintain some of the tooling in that space. 

Lorna Mitchell

Obligatory mention of reStructuredText. That's my go-to these days as I do a lot of Python programming. It's also been Python's standard for documentation source and code comments for ages.

I like that it doesn't suffer quite so much from the proliferation of nonstandards that Markdown does. That said, I do use a lot of Sphinx features and extensions when working on more complex documentation.

Jeremy Stanley


I rarely use markup languages if I don't have to.

I find HTML easier to use than other markup languages though.

Rikard Grossman-Nielsen

For me, there are various ways to make documentation, depending on where the documentation is going to live: on a website, as part of the software package, or as something downloadable.

For Scribus, the internal documentation is in HTML, since an internal browser is used to access it. On a website, you might need to use a Wiki language. For something downloadable you might create a PDF or an EPUB.

I tend to write the documentation in a plain text editor. I might use XHTML, so that I can then import these files into an EPUB maker like Sigil. And, of course, Scribus is my go-to app for making a PDF, though I would probably be importing a text file created with a text editor. Scribus has the advantage of including and precisely controlling placement of graphics.

Markdown has never caught on with me, and I've never tried AsciiDoc.

Greg Pittman

I'm writing a lot of documentation in HTML right now, so I'll put in a plug for HTML. You can use HTML to create websites, or to create documentation. Note that the two are not really the same — when you're creating websites, most designers are concerned about presentation. But when you're writing documentation, tech writers should focus on content.

When I write documentation in HTML, I stick to the tags and elements defined by HTML, and I don't worry about how it will look. In other words, I write documentation in "unstyled" HTML. I can always add a stylesheet later. So if I need to make some part of the text stronger (such as a warning) or add emphasis to a word or phrase, I might use the <strong> and <em> tags, like this:

<p><strong>Warning: Lasers!</strong> Do <em>not</em> look into laser with remaining eye.</p>

Or to provide a short code sample within the body of a paragraph, I might write:

<p>The <code>puts</code> function prints some text to the user.</p>

To format a block of code in a document, I use <pre> and <code>, like this:

void print_array(int *array, int size)
{
  for (int i = 0; i < size; i++) {
    printf("array[%d] = %d\n", i, array[i]);
  }
}

The great thing about HTML is you can immediately view the results with any web browser. And any documentation you write in unstyled HTML can be made prettier later by adding a stylesheet.

Jim Hall

Unexpected: LibreOffice

Back in the 80s and 90s when I worked in System V Unix, SunOS, and eventually Solaris, I used the mm macros with nroff, troff and finally groff. Read about MM using groff_mm (provided you have them installed.)

MM isn't really a markup language, but it feels like one. It is a very semantic set of troff and groff macros. It has most things markup language users would expect—headings, numbered lists, and so on.

My first Unix machine also had Writers' Workbench available on it, which was a boon for many in our organization who had to write technical reports but didn't particularly write in an "engaging manner". A few of its tools have made it to either BSD or Linux—style, diction, and look.

I also recall a standard generalized markup language (SGML) tool that came with, or perhaps we bought for, Solaris in the very early 90s. I used this for a while, which may explain why I don't mind typing in my own HTML.

I've used Markdown a fair bit, but having said that, I should also be saying "which Markdown", because there are endless flavors and levels of features. I'm not a huge fan of Markdown because of that. I guess if I had a lot of Markdown to do I would probably try to gravitate toward some implementation of CommonMark because it actually has a formal definition. For example, Pandoc supports CommonMark (as well as several others).

I started using AsciiDoc, which I much prefer to Markdown as it avoids the "which version are you using" conversation and provides many useful things. What has slowed me down in the past with respect to AsciiDoc is that for some time it seemed to require installing Asciidoctor—a Ruby toolchain which I was not anxious to install. But these days there are more implementations at least in my Linux distro. Curiously, Pandoc emits AsciiDoc but does not read it.

Those of you laughing at me for not wanting a Ruby toolchain for AsciiDoc but being satisfied with a Haskell toolchain for Pandoc… I hear you.

I blush to admit that I mostly use LibreOffice these days.

Chris Hermansen

Document now!

Documentation can be achieved through many different avenues, as the writers here have demonstrated. It's important to document how to use your code, especially in open source. This ensures that other people can use and contribute to your code properly. It's also wise to tell future users what your code is providing. 

Documentation is critical to open source software projects. We asked our contributors what their favorite markup language is for documentation.

Image by:

Internet Archive Book Images. Modified by CC BY-SA 4.0

Documentation community What to read next Cheat sheet: Markdown This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 4576 points United States

Alan has 20 years of IT experience, mostly in the Government and Financial sectors. He started as a Value Added Reseller before moving into Systems Engineering. Alan's background is in high-availability clustered apps. He wrote the 'Users and Groups' and 'Apache and the Web Stack' chapters in the Oracle Press/McGraw Hill 'Oracle Solaris 11 System Administration' book. He earned his Master of Science in Information Systems from George Mason University. Alan is a long-time proponent of Open Source Software.

| Follow AlanFormy_Duval User Attributes Correspondent Open Source Sensei People's Choice Award Gamer Linux SysAdmin Geek Java Apache DevOps Author Comment Gardener Correspondent Contributor Club 28371 points New Zealand (South Island)

Seth Kenlon is a UNIX geek, free culture advocate, independent multimedia artist, and D&D nerd. He has worked in the film and computing industry, often at the same time. He is one of the maintainers of the Slackware-based multimedia production project Slackermedia.

User Attributes Team Open Source Super Star Moderator's Choice Award 2011 100+ Contributions Club Best Interview Award 2017 Author Columnist Contributor Club 77 points UK

Lorna is based in Yorkshire, UK; she is a polyglot programmer as well as a published author and experienced conference speaker. She brings her technical expertise on a range of topics to audiences all over the world with her writing and speaking engagements, always delivered with a very practical slant. Lorna works in Developer Relations at Aiven and in her spare time she blogs at

| Follow lornajane Open Enthusiast Author 217 points Kill Devil Hills, NC, USA

A long-time computer hobbyist and technology generalist, Jeremy Stanley has worked as a Unix and GNU/Linux sysadmin for nearly three decades focusing on information security, Internet services, and data center automation. He’s a root administrator for the OpenDev Collaboratory, a maintainer of the Zuul project, and serves on the OpenStack vulnerability management team. Living on a small island in the Atlantic, in his spare time he writes free software, hacks on open hardware projects and embedded platforms, restores old video game systems, and enjoys articles on math theory and cosmology.

Open Minded People's Choice Award Author 6961 points Vancouver, Canada

Seldom without a computer of some sort since graduating from the University of British Columbia in 1978, I have been a full-time Linux user since 2005, a full-time Solaris and SunOS user from 1986 through 2005, and UNIX System V user before that.

On the technical side of things, I have spent a great deal of my career as a consultant, doing data analysis and visualization; especially spatial data analysis. I have a substantial amount of related programming experience, using C, awk, Java, Python, PostgreSQL, PostGIS and lately Groovy. I'm looking at Julia with great interest. I have also built a few desktop and web-based applications, primarily in Java and lately in Grails with lots of JavaScript on the front end and PostgreSQL as my database of choice.

Aside from that, I spend a considerable amount of time writing proposals, technical reports and - of course - stuff on

User Attributes Correspondent Open Sourcerer People's Choice Award 100+ Contributions Club Emerging Contributor Award 2016 Author Comment Gardener Correspondent Columnist Contributor Club 152 points Sweden

Hello my name is Richard and I’m an intermediate Linux user diagnosed with ADHD and

On a daily basis I use Linux for java programming, productivity and gaming.
I’m also a trained teacher, male, 39yrs of age, living in Sweden. I first started using Linux in late 90s. One of the first distros I installed was Redhat due to it's ease of use.
Today I mostly use Ubuntu and Manjaro.

I'm among other things interested in how Linux and open source software can be made more accessible to people with conditions like ADHD, Asperger's and Dyslexia.

I use accessibility software due to being diagnosed with Asperger's and ADHD.
I mostly use speech synthesis to find spelling errors and calendar software with accommodations.

I can be reached at:

Louisville, KY

Greg is a retired neurologist in Louisville, Kentucky, with a long-standing interest in computers and programming, beginning with Fortran IV in the 1960s. When Linux and open source software came along, it kindled a commitment to learning more, and eventually contributing. He is a member of the Scribus Team.

Minnesota

Jim Hall is an open source software advocate and developer, best known for usability testing in GNOME and as the founder and project coordinator of FreeDOS. At work, Jim is CEO of Hallmentum, an IT executive consulting company that provides hands-on IT leadership training, workshops, and coaching.


Manage your file system from the Linux terminal

Thu, 12/08/2022 - 16:00
By Seth Kenlon

I tend to enjoy lightweight applications. They're good for low spec computers, for remote shells, for the impatient user (OK, I admit, that's me), and for the systems we scrap together to fight the inevitable zombie apocalypse. In my search for a perfect blend of a lightweight application with all the modern conveniences we've learned from experience, I stumbled across a file manager called nnn. The nnn file manager exists in a terminal only, but it feels like a modern keyboard-driven application with intuitive actions and easy navigation.

(Image: Seth Kenlon, CC BY-SA 4.0)

Install nnn

On Linux, you may find nnn in your distribution's software repository. For instance, on Debian:

$ sudo apt install nnn

If your repository doesn't have nnn available, you can download a package for your distribution from OBS or from the project Git repository.

On macOS, use Homebrew or MacPorts.

Using nnn

Launch nnn from a terminal:

$ nnn

Your terminal is now the nnn interface, and by default it lists the contents of your current directory:

1 2 3 4 ~


4/8 2022-12-01 15:54 drwxr-xr-x 6B


At the top of the nnn interface are tabs (called a "context" in nnn terminology), numbered one to four.

At the bottom of the nnn interface, there are ownership and permission details about your current selection.

Use either the Up and Down arrow keys or the k and j keys (as in Vim) to change your selection. Use the Right arrow key, Return, or the l key to enter a directory or to open a file. Use the Left arrow key or h to back out of a directory.

That's it for navigation. It's easier than any graphical file manager because there aren't any widgets that get in the way. There's no need to Tab over buttons; you just use the arrow keys or the QWERTY home row.

Open a file

One of the reasons you use a file manager is to find a file and then open it. Your desktop already has default applications set, and nnn inherits this knowledge, so press Return or Right arrow to open a file in its default application.

Should you need to open a file in something other than its default application, press = instead, and then type the name of the application in the prompt at the bottom of the nnn interface.

Copy a file

To copy a file or any number of files, you must first select a file to copy, then navigate to its intended destination, and finally invoke the copy command. Thanks to nnn's context control (those are the numbers at the top of the screen, and you can think of them as tabs in a web browser), this is a quick process.

  1. First, navigate to the file you want to copy and press Spacebar to select it. It's marked with a plus sign (+) to indicate its selected state.

  2. Press 2 to change to a new context.

  3. Navigate to the target directory and press p to copy.

Move a file

Moving files is the same process as copying a file, but the keyboard shortcut for the action is v.

Selecting files

There are a few ways to mark selections in nnn. The first is manual selection. You find a file you want to select, and then press Spacebar to mark it as selected. Press Spacebar again to deselect it.

One selection doesn't cancel another, so you can select several files manually, but that can become tedious. Another way to select many files at once is to "mark in" and "mark out". To mark a selection, press m on the first file you want to select, and then use your arrow keys to move to the last file you want to select. Press m again to close the selection:

1 2 3 4 ~


6/8 [ +6 ] 2022-12-01 15:54 drwxr-xr-x 6B

Finally, the third way to select files is to press a to select all. Use A to invert the selection (in this case, to select none.)

Creating an archive

To create an archive of a file or a selection of files, press z. At the bottom of the nnn interface, you're prompted to choose between the current item or your selection. Then you're prompted for a file name. Luckily, nnn is a smart application and derives the file type from the name you provide. If you name your archive example.tar.xz, then nnn creates a TAR archive with lzma compression, but if you name it example.zip, then it creates a ZIP file.
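The extension-to-format mapping nnn performs can be illustrated with a small Python sketch. This is not nnn's actual code; guess_format and its suffix table are hypothetical stand-ins for the idea:

```python
# Illustrative only: derive an archive format from the file name,
# the way nnn infers what kind of archive to create.
def guess_format(name: str) -> str:
    suffix_map = {
        ".tar.xz": "tar archive with lzma (xz) compression",
        ".tar.gz": "tar archive with gzip compression",
        ".tar.bz2": "tar archive with bzip2 compression",
        ".zip": "zip archive",
    }
    for suffix, fmt in suffix_map.items():
        if name.endswith(suffix):
            return fmt
    return "unknown"

print(guess_format("example.tar.xz"))  # tar archive with lzma (xz) compression
print(guess_format("example.zip"))     # zip archive
```

Python's own shutil.make_archive works on a similar principle, accepting a format name derived from the desired extension.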

You can verify the file type yourself by pressing the f key with your new archive selected:

  File: /home/tux/Downloads/
  Size: 184707          Blocks: 368        IO Block: 4096   regular file
Device: fd00h/64768d    Inode: 17842380    Links: 1
Access: (0664/-rw-rw-r--)  Uid: ( 1002/     tux)   Gid: ( 1002/     tux)
Context: unconfined_u:object_r:user_home_t:s0
Access: 2022-09-20 15:12:09.770187616 +1200
Modify: 2022-09-20 15:12:09.775187704 +1200
Change: 2022-09-20 15:12:09.775187704 +1200
 Birth: 2022-09-20 15:12:09.770187616 +1200
Zip archive data, at least v2.0 to extract
application/zip; charset=binary

Cancel an action

When you find yourself backed into a corner and need to press a panic button, use the Esc key. (This is likely to be the single most confusing keyboard shortcut for a longtime terminal user who's accustomed to Ctrl+C.)

Never close nnn

To quit nnn, press Q at any time.

It's a very capable file manager, with functions for symlinks, FIFO, bookmarks, batch renaming, and more. For a full list of what nnn can do, press the ? key.

The most clever feature is the shell function. Press ! to open a shell over the nnn interface. You'll forget nnn is there, until you type exit and you find yourself back in the nnn interface. It's that easy to leave nnn open all the time, so you can always have quick access to the fastest lightweight file management you're likely to experience.

The nnn file manager on Linux exists in a terminal only, but it feels like a modern keyboard-driven application with intuitive actions and easy navigation.

(Image: iradaturrahmat via Pixabay, CC0)

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

16 reasons DDEV will be your new favorite development environment

Wed, 12/07/2022 - 16:00
By Randy Fay

In 2022, you have a wide variety of local development environments to choose from, whether you're a designer, developer, tester, or any kind of open source contributor. Because most of the tools and platforms contributors use happen to run on many different operating systems, you probably even have the choice of constructing your own environment. I'm the maintainer of DDEV, and here are 16 reasons I think you'll like it for your development environment.

1. Cross-platform

DDEV supports, and runs a fully automated test suite on, Linux (amd64 and Arm), WSL2, Windows, and macOS (M1 and amd64).

Some tools require you to use one exact version of Docker (and they may even take the liberty of installing it themselves). DDEV works with versions of Docker that are a couple of years old, and keeps up with the latest versions as well. Alternatively, you can use Colima or Docker installed inside WSL2.

DDEV’s binaries are signed and notarized on macOS and Windows, so you never have to sneak around scary operating system warnings when installing and using DDEV.

2. Performance

The DDEV team believes that DDEV on macOS and Windows has the best performance you can get in any local development environment, both in terms of starting DDEV (10 to 20 seconds) and in terms of webserving. With no setup required at all, the Mutagen feature speeds up webserving by a factor of 10, at least. And of course, on Linux (including WSL2), it's truly superb.

3. Settings file management

DDEV is happy to get you started quickly and easily, and even manage your settings files. You can use your own repository or follow one of the quickstart guides to create something new and you'll have a project going in no time. You can also turn off settings file management to fine-tune your team's approach when you need more customization.

DDEV's configuration files are ignored outside a DDEV context, so your project won't accidentally carry DDEV settings if you mistakenly deploy them to production. If you have the same project set up for Lando and DDEV, the DDEV settings won't break Lando.

4. Trusted HTTPS

DDEV uses mkcert to allow you to conduct all your work using locally trusted HTTPS, just like it will work in the real world. You don't have to click around scary browser warnings to view your project in development.

5. Database snapshots

DDEV has the ddev snapshot feature, allowing you to quickly capture the state of your database and then quickly restore to a different point. You can name snapshots for different branches of your project. It's far faster than traditional export and import.

6. Simple single-binary installation without dependencies

DDEV is written in Go. Because Go is a fairly new language, this can be a bit of a disadvantage in terms of community involvement, but it's a huge advantage for cross-platform support. Go does cross-platform builds with ease, and the resulting self-contained binary has no dependencies at all (aside from Docker.) There are no libraries to install, no DLLs to maintain. And responsiveness to commands is excellent!

7. Xdebug step-debugging

Lots of people have their first experience with a real step-debugging environment with DDEV because it's really, really easy. Thanks to PhpStorm, there's no setup at all. With VS Code or Codium, there's about two minutes of setup. There's no need to insert print statements into code anymore!

8. Explicit support for your CMS

DDEV has built-in support for many popular content management systems (CMS) and platforms. "Explicit support" means that there's setting management, and an NGINX configuration customized for the specific platform you're using. Here's a partial list of what's supported:

  • Drupal
  • Backdrop
  • WordPress
  • TYPO3
  • Magento
  • Laravel
  • Shopware
9. Integration and add-ons

While DDEV provides explicit support with optional settings management for your CMS of choice, many developers use other platforms, including Symfony, Moodle, Mautic, and so on. DDEV has explicit support for NodeJS, both for processing and as daemons.

DDEV also features a library of supported, maintained, and tested add-ons for Redis, Solr, Memcached, Elasticsearch, Mongo, Varnish, and more.

10. Gitpod

Your local development environment doesn't even have to be local anymore. DDEV has full support for use in Gitpod so you can move your development into the cloud.

11. No vendor lock-in

There is absolutely no vendor lock-in in DDEV. The idea behind the DDEV platform is that DDEV can be plugged into a dev-to-deploy workflow as pieces of a puzzle that work for you. Mix and match! DDEV is an open source community project that works great with any hosting service you can use.

12. Respect for your host computer

DDEV doesn't assume you use your computer (or containers) only for DDEV.

Too many local dev tools happily reconfigure your host computer without your full involvement. More than one of them edits your /etc/exports file, with no way for you to opt out. A couple of them actually overwrite your Docker installation with a different version at install time. DDEV tries to ensure that, in the unlikely situation that anything needs to be changed on your computer, you're the one doing it, and you have options.

For example, HTTPS support requires running mkcert -install one time. NFS support requires a bit of additional setup. Because nearly everything is run in a container, there's very little that needs to be done on the host computer in the first place.

13. Community

The DDEV community has been phenomenal through the years, contributing ideas, code, and shared support. There are open collections of DDEV services, tools, snippets, approaches, as well as blogs and presentations and more from users all around the world.

The DDEV Advisory Group provides oversight, direction, and feedback for the project. Anyone is welcome to join.

14. Open source

DDEV is a small cog in the huge open source ecosystem. It couldn't even exist without the hundreds or thousands of projects that make up the Linux containers that run it, and of course, PHP itself is a fundamental upstream project. We love to contribute upstream and downstream to projects like:

  • Docker: DDEV is involved with the Docker project, because DDEV users are always pushing the limits. We participate heavily in Docker issue queues.
  • Mutagen: When you edit code in containers, there's a lot of synchronization between your local host and the container environment that needs to happen. Mutagen helps bridge that gap for DDEV users.
  • mkcert: The mkcert tool allows DDEV to provide trusted HTTPS in your local development environment. We've benefited enormously from it, and have contributed back tests and bug fixes.
  • Xdebug: DDEV is great with Xdebug, and of course, we hear right away when there are problems. We report our findings back to the Xdebug issue queue.
  • PHP packages: The Debian PHP packages (5.6 all the way through 8.2, at the time of writing) that DDEV uses come from an upstream packaging team. Because the DDEV community is an early consumer of those packages, we're often in that issue queue too.
15. DDEV Keeps Up

DDEV always keeps up with the dependencies you need. For example, at this writing, neither PHP 8.2.0 nor Drupal 10 has yet been released, but both have been supported in DDEV for months.

16. Your own reason

I'd love to hear what makes DDEV your favorite, and the DDEV team is always listening to hear what you want in the future. Of course, we also want to hear when things don't work the way you want or expect. Visit our Git repository to contribute!

Note: This is an updated version of a blog post that originally appeared on

What's so different about DDEV? It's a container-based local development environment. Here are a few reasons you should give it a try.

(Image: Mapbox Uncharted ERG, CC BY 3.0 US)


Why I use the Enlightenment file manager on Linux

Wed, 12/07/2022 - 16:00
By Seth Kenlon

Computers are like filing cabinets, full of virtual folders and files waiting to be referenced, cross-referenced, edited, updated, saved, copied, moved, renamed, and organized. In this series, I'm taking a look at the Enlightenment file manager for your Linux system.

The Enlightenment desktop is designed to be a modern implementation of what's considered a traditional UNIX desktop. There are certain elements considered characteristic of graphical UNIX, most of which were defined early on by desktops like CDE or twm. Enlightenment implements things like a dock, an on-demand global contextual menu, flexible focus, and virtual workspaces, but with an almost hyper-modern flair. Enlightenment is able to combine these elements with effects and animations because it's also its own compositor, and the EFL libraries that the desktop uses are specific to Enlightenment and maintained by the Enlightenment team. That's a long way of confessing that in this entry in my file manager series, I'm looking at a file manager that's mostly inextricable from the desktop it supports. If you want to try Enlightenment's file manager, you have to try Enlightenment. Luckily, it's a pleasant experience, and a fun diversion from the usual desktops.

Install Enlightenment on Linux

You can probably install Enlightenment from your distribution's repository. For example, on Fedora:

$ sudo dnf install enlightenment

On Debian and similar:

$ sudo apt install enlightenment

File manager

When you first log in to Enlightenment, you must make some choices about configuration. After setting your language and visual theme, you can open a file manager window by either double-clicking on the Home icon on the desktop, or by clicking on the desktop and choosing Navigate.

(Image: Seth Kenlon, CC BY-SA 4.0)

Customizing the panel

The left panel of the file manager displays common places in your file system. Not everyone considers the same places common, though, so you're free to change the bookmarks in the panel to suit your needs.

Start by removing the items you don't need. For instance, maybe you don't need an icon for your Desktop in your side panel. To remove it, right-click on it and select Delete. You're asked for confirmation, and it's safe to accept. You're not deleting your actual desktop or the items on it, you're just removing the Desktop item from the side panel. You can remove any of the items from the left panel in this way.

Next, add directories you frequent. You can add items by dragging and dropping icons from the right panel into the left. Once there, they're considered bookmarks for Enlightenment's file manager. These items don't carry over into other file managers or file choosers. This is a bookmarks panel specific to the Enlightenment file manager.

Customizing the view

A file manager's main purpose is to help you manage files. Part of managing files is getting a good look at what you have, and there are three different views Enlightenment offers. To access the different views, right-click in an empty space in the file manager and choose View Mode.

  • Custom Icons: Place icons anywhere in the file manager window you please.

  • Grid: Sort icons, aligned to a grid.

  • List: Sort small icons as an itemized list.

In addition to altering your view of the icons representing your files and folders, you can control how they're sorted. The default is to alphabetize directories first, and then files. You can right-click in an empty space in the file manager and select Sorting to choose between other options:

  • Size: This is particularly useful when you're trying to find files that are occupying too much space on your hard drive.

  • File extension: Group files together by file type!

  • Modification time: Make recent files easy to find.

Grouping files together by file extension is the real epiphany of the Enlightenment file manager. In most other file managers, the closest you can get to this feature is the ability to filter files by manually typing in the extension you're interested in. But with this feature, your files "cluster" together by a sort of genealogical affinity. It makes files really easy to find without giving any particular preference to any one group of file types. You just locate the group of files you're interested in, and then the single file you want to work on.
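The clustering idea is easy to sketch in code. This is illustrative only (Enlightenment's own implementation is not shown here); the file names are made-up sample data:

```python
# Group file names by extension, the way the file manager clusters
# files of the same type together in its "File extension" sort.
from itertools import groupby
from pathlib import PurePath

names = ["notes.txt", "photo.png", "todo.txt", "icon.png", "a.pdf"]

def ext(name: str) -> str:
    # PurePath.suffix returns the final extension, e.g. ".txt"
    return PurePath(name).suffix

# groupby requires its input to be sorted by the grouping key
grouped = {
    e: list(files)
    for e, files in groupby(sorted(names, key=ext), key=ext)
}
print(grouped)
# {'.pdf': ['a.pdf'], '.png': ['photo.png', 'icon.png'], '.txt': ['notes.txt', 'todo.txt']}
```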

Keyboard navigation

The Enlightenment file manager has good keyboard support. As long as the file manager is in focus, you can press any Arrow key to move between items in the right panel. Press Return to enter a directory or to open a file.

You can use Alt and the Left Arrow key to move back to the previously visited directory. Use Alt and the Up Arrow key to move to your current directory's parent.

The Enlightenment experience

Enlightenment is a fun and beautiful desktop, and its default file manager does everything you need a file manager to do. It's got the essential customization options, good support for keyboard navigation, and it fits the rest of the desktop perfectly. If you're in the mood for something different, then give Enlightenment a try.

The Enlightenment file manager for Linux has essential customization options, good support for keyboard navigation, and it fits the rest of the desktop perfectly.

(Image: Internet Archive Book Images, modified. CC BY-SA 4.0)


A data scientist's guide to open source community analysis

Tue, 12/06/2022 - 16:00
By Cali Dolfi

In the golden age of data analysis, open source communities are not exempt from the frenzy around getting some big, fancy numbers onto presentation slides. Such information can bring even more value if you master the art of generating a well-analyzed question with proper execution.

You might expect me, a data scientist, to tell you that data analysis and automation will inform your community decisions. It's actually the opposite. Use data analysis to build on your existing open source community knowledge, incorporate others, and uncover potential biases and perspectives not considered. You might be an expert at implementing community events, while your colleague is a wiz at all things code. As each of you develops visualizations within the context of your own knowledge, you both can benefit from that information.

Let's have a moment of realness. Everyone has a thousand and one things to keep up with, and it feels like there is never enough time in a day to do so. If getting an answer about your community takes hours, you won't do it regularly (or ever). Spending the time to create a fully developed visualization makes it feasible to keep up with different aspects of the communities you care about.

With the ever-increasing pressure of being "data-driven," the treasure trove of information around open source communities can be a blessing and a curse. Using the methodology below, I will show you how to pick the needle out of the data haystack.

What is your perspective?

When thinking about a metric, one of the first things you must consider is the perspective you want to provide. The following are a few concepts you could establish.

Informative vs. influencing action: Is there an area of your community that is not understood? Are you taking that first step in getting there? Are you trying to decide on a particular direction? Are you measuring an existing initiative?

Exposing areas of improvement vs. highlighting strengths: There are times when you are trying to hype up your community and show how great it is, especially when trying to demonstrate business impact or advocate for your project. When it comes to informing yourself and the community, you can often get the most value from your metrics by identifying shortcomings. Highlighting strengths is not a bad practice, but there is a time and place. Don't use metrics as a cheerleader inside your community to tell you how great everyone is; instead, share that with outsiders for recognition or promotion.

Community and business impact: Numbers and data are the languages of many businesses. That can make it incredibly difficult to advocate for your community and truly show its value. Data can be a way to speak in their language and show what they want to see to get the rest of your messaging across. Another perspective is the impact on open source overall. How does your community impact others and the ecosystem?

These are not always either/or perspectives. Proper framing will help in creating a more deliberate metric.

(Image: Cali Dolfi, CC BY-SA 4.0)

People often describe some version of this workflow when talking about general data science or machine learning work. I will focus on the first step, codifying problems and metrics, and briefly mention the second. From a data science perspective, this presentation can be considered a case study of this step. This step is sometimes overlooked, but your analysis's actual value starts here. You don't just wake up one day and know exactly what to look at. Begin with understanding what you want to know and what data you have to get you to the true goal of thoughtful execution of data analysis.

3 data analysis use cases in open source

Here are three different scenarios you might run into in your open source data analysis journey.

Scenario 1: Current data analysis

Suppose you are starting to go down the analysis path, and you already know what you're looking into is generally useful to you/your community. How can you improve? The idea here is to build off "traditional" open source community analysis. Suppose your data indicates you have had 120 total contributors over the project's lifetime. That's a value you can put on a slide, but you can't make decisions from it. Start taking incremental steps from just having a number to having insights. For example, you can break out the sample of total contributors into active versus drifting contributors (contributors who have not contributed in a set amount of time) from the same data.
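That incremental step can be sketched directly. The 90-day window and the contributor dates below are assumptions for illustration, not values from any real project:

```python
# Split contributors into "active" vs "drifting" based on their most
# recent contribution date, using an assumed 90-day activity window.
from datetime import date, timedelta

contributions = {            # contributor -> date of last contribution
    "alice": date(2022, 11, 20),
    "bob": date(2022, 3, 2),
    "carol": date(2022, 10, 1),
}

today = date(2022, 12, 1)
window = timedelta(days=90)

active = {who for who, last in contributions.items() if today - last <= window}
drifting = set(contributions) - active

print(sorted(active))    # ['alice', 'carol']
print(sorted(drifting))  # ['bob']
```

The same raw data that produced "120 total contributors" now yields a split you can actually act on.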

Scenario 2: Community campaign impact measurement

(Image: Cali Dolfi, CC BY-SA 4.0)

Consider meetups, conferences, or any community outreach initiative. How do you view your impacts and goals? These two steps actually feed into each other. Once you establish the campaign goals, determine what can be measured to detect the effect. That information helps set the campaign's goals. It's easy to fall into the trap of being vague rather than concrete with plans when a campaign begins.

Scenario 3: Form new analysis areas to impact

(Image: Cali Dolfi, CC BY-SA 4.0)

This situation occurs when you work from scratch in data analysis. The previous examples are different parts of this workflow. The workflow is a living cycle; you can always make improvements or extensions. From this concept, the following are the necessary steps you should work through. Later in this article, there will be three different examples of how this approach works in the real world.

Step 1: Break down focus areas and perspectives

First, consider a magic eight ball—the toy you can ask anything, shake, and get an answer. Think about your analysis area. If you could get any answer immediately, what would it be?

Next, think about the data. From your magic eight-ball question, what data sources could have anything to do with the question or focus area?

What questions could be answered in the data context to move you closer to your proposed magic eight-ball question? It's important to note that you must consider the assumptions made if you try to bring all the data together.

Step 2: Convert a question to a metric

Here is the process for each sub-question from the first step:

  • Select the specific data points needed.
  • Determine visualization to get the goal analysis.
  • Hypothesize the impacts of this information.

Next, bring in the community to provide feedback and trigger an iterative development process. The collaborative portion of this can be where the real magic happens. The best ideas often come when bringing a concept to someone that inspires them in a way you or they would not have imagined.

Step 3: Analysis in action

This step is where you start working through the implications of the metric or visualization you have created.

The first thing to consider is if this metric follows what is currently known about the community.

  • If yes: Are there assumptions made that catered to the results?
  • If no: You want to investigate further whether this is potentially a data or calculation issue or if it is just a previously misunderstood part of the community.

Once you have determined if your analysis is stable enough to make inferences on, you can start to implement community initiatives on the information. As you are taking in the analysis to determine the next best step, you should identify specific ways to measure the initiative's success.

Now, observe these community initiatives informed by your metric. Determine if the impact is observable by your previously established measurement of success. If not, consider the following:

  • Are you measuring the right thing?
  • Does the initiative strategy need to change?
Example analysis area: New contributors

What is my magic eight-ball question?
  • Do people have an experience that establishes them as consistent contributors?
What data do I have that goes into the analysis area and magic eight-ball question?
  • What contributor activity exists for repos, including timestamps?

Now that you have the information and a magic eight-ball question, break the analysis down into subparts and follow each of them to the end. This idea correlates with steps 2 and 3 above.

Sub-question 1: "How are people coming into this project?"

This question aims to see what new contributors are doing first.

Data: GitHub data on first contributions over time (issues, PR, comments, etc.).

(Image: Cali Dolfi, CC BY-SA 4.0)

Visualization: Bar chart with first-time contributions broken down by quarter.

Potential extension: After you talk with other community members, further examination breaks the information down by quarter and whether the contributor was a repeat or drive-by. You can see what people are doing when they come in and if that tells you anything about whether they will stick around.
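A minimal sketch of that extended breakdown, bucketing each contributor's first contribution by quarter and tagging them repeat versus drive-by. The event list is made-up sample data:

```python
# Find each contributor's first contribution, bucket it by quarter,
# and label the contributor "repeat" or "drive-by".
from collections import Counter
from datetime import date

events = [  # (contributor, date of contribution)
    ("alice", date(2022, 1, 5)), ("alice", date(2022, 2, 9)),
    ("bob", date(2022, 4, 12)),
    ("carol", date(2022, 4, 20)), ("carol", date(2022, 7, 3)),
]

counts = Counter(who for who, _ in events)

firsts = {}  # contributor -> date of first contribution
for who, when in sorted(events, key=lambda e: e[1]):
    firsts.setdefault(who, when)

buckets = Counter()
for who, when in firsts.items():
    quarter = f"{when.year}Q{(when.month - 1) // 3 + 1}"
    kind = "repeat" if counts[who] > 1 else "drive-by"
    buckets[(quarter, kind)] += 1

print(dict(buckets))
# {('2022Q1', 'repeat'): 1, ('2022Q2', 'drive-by'): 1, ('2022Q2', 'repeat'): 1}
```

The resulting counts map directly onto a stacked bar chart per quarter.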

(Image: Cali Dolfi, CC BY-SA 4.0)

Potential actions informed by this information:

  • Does the current documentation support contributors for the most common initial contribution? Could you support those contributors better, and would that help more of them stay?
  • Is there a contribution area that is not common overall but is a good sign for a repeat contributor? Perhaps PR is a common area for repeat contributors, but most people don't work in that area.

Action items:

  • Label "good first issues" consistently and link these issues to the contribution docs.
  • Add a PR buddy to these.

Sub-question 2: "Is our code base really dependent on drive-by contributors?"

Data: Contribution data from GitHub.

[Image by Cali Dolfi, CC BY-SA 4.0]

Visualization: Total contributions, broken down by drive-by and repeat contributors.

Potential actions informed by this information:

  • Does this ratio achieve the program's goals? Is a lot of the work done by drive-by contributors? Is this an underutilized resource, and is the project not doing its part to bring them in?
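One way to put a number on that ratio is sketched below with made-up data; the one-contribution definition of a "drive-by" contributor is an assumption of the sketch:

```python
from collections import Counter

# Hypothetical events: one list entry per contribution, keyed by contributor.
contributions = ["ana", "ana", "ana", "bob", "carol", "dave", "ana"]

per_person = Counter(contributions)

# Assumed definition: a drive-by contributor made exactly one contribution.
driveby_total = sum(n for n in per_person.values() if n == 1)
repeat_total = sum(n for n in per_person.values() if n > 1)

driveby_share = driveby_total / (driveby_total + repeat_total)
print(f"{driveby_share:.0%} of contributions came from drive-by contributors")
# prints: 43% of contributions came from drive-by contributors
```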
Analysis: Lessons learned

Numbers and data analysis are not "facts." They can be made to say anything, and your internal skeptic should be very active when working with data. The iterative process is what brings value. You don't want your analysis to be a "yes man." Take time to step back and evaluate the assumptions you've made.

If a metric just points you in a direction to investigate, that is a huge win. You can't look at or think of everything. Rabbit holes can be a good thing, and conversation starters can bring you to a new place.

Sometimes exactly what you want to measure is not there, but you might be able to get valuable details. You can't assume that you have all the puzzle pieces to get an exact answer to your original question. If you start to force an answer or solution, you can take yourself down a dangerous path led by assumptions. Leaving room for the direction or goal of analysis to change can lead you to a better place or insight than your original idea.

Data is a tool. It is not the answer, but it can bring together insights and information that would not have been accessible otherwise. The methodology of breaking down what you want to know into manageable chunks and building on that is the most important part.

Open source data analysis is a great example of the care you must take with all data science:

  • The nuance of the topic area is the most important.
  • The process of working through "what to ask/answer" is often overlooked.
  • Knowing what to ask can be the hardest part, and when you come up with something insightful and innovative, that matters much more than whatever tool you choose.

If you are a community member with no data science experience looking at where to start, I hope this information shows you how important and valuable you can be to this process. You bring the insights and perspectives of the community. If you are a data scientist or someone implementing the metrics or visualizations, you have to listen to the voices around you, even if you are also an active community member. More information on data science is listed at the end of this article.

Wrap up

Use the above example as a framework for establishing data analysis of your own open source project. There are many questions to ask of your results, and knowing both the questions and their answers can lead your project in an exciting and fruitful direction.

More on data science

Consider the following sources for more information on data science and the technologies that provide it with data:

Consider this framework for establishing data analysis of your own open source project.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

A 10-minute guide to the Linux ABI

Tue, 12/06/2022 - 16:00
A 10-minute guide to the Linux ABI Alison Chaiken Tue, 12/06/2022 - 03:00

Many Linux enthusiasts are familiar with Linus Torvalds' famous admonition, "we don't break user space," but perhaps not everyone who recognizes the phrase is certain about what it means.

The "#1 rule" reminds developers of the stability of the binary interface through which applications communicate with and configure the kernel. What follows is intended to familiarize readers with the concept of an ABI, describe why ABI stability matters, and discuss precisely what is included in Linux's stable ABI. The ongoing growth and evolution of Linux necessitate changes to the ABI, some of which have been controversial.

What is an ABI?

ABI stands for Applications Binary Interface. One way to understand the concept of an ABI is to consider what it is not. Applications Programming Interfaces (APIs) are more familiar to many developers. Generally, the headers and documentation of libraries are considered to be their API, as are standards documents like those for HTML5, for example. Programs that call into libraries or exchange string-formatted data must comply with the conventions described in the API or expect unwanted results.

ABIs are similar to APIs in that they govern the interpretation of commands and exchange of binary data. For C programs, the ABI generally comprises the return types and parameter lists of functions, the layout of structs, and the meaning, ordering, and range of enumerated types. The Linux kernel remains, as of 2022, almost entirely a C program, so it must adhere to these specifications.
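To see why struct layout belongs to the ABI, consider a sketch using Python's struct module. The record layout is invented for illustration; the point is only that reordering fields changes the bytes, so a binary built against one layout misreads the other:

```python
import struct

# Two layouts of the "same" logical record: a one-byte flag and a
# 32-bit little-endian counter, in different field orders.
old_layout = struct.pack("<B I", 1, 1000)   # flag first
new_layout = struct.pack("<I B", 1000, 1)   # counter first

# The data is identical, but the binary layout is not:
print(old_layout)  # b'\x01\xe8\x03\x00\x00'
print(new_layout)  # b'\xe8\x03\x00\x00\x01'

# A program built against the old layout decodes the new bytes wrongly.
flag, counter = struct.unpack("<B I", new_layout)
print(flag, counter)  # garbage from the old reader's point of view
```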

"The kernel syscall interface" is described by Section 2 of the Linux man pages and includes the C versions of familiar functions like "mount" and "sync" that are callable from middleware applications. The binary layout of these functions is the first major part of Linux's ABI. In answer to the question, "What is in Linux's stable ABI?" many users and developers will respond with "the contents of sysfs (/sys) and procfs (/proc)." In fact, the official Linux ABI documentation concentrates mostly on these virtual filesystems.

The preceding text focuses on how the Linux ABI is exercised by programs but fails to capture the equally important human aspect. As the figure below illustrates, the functionality of the ABI requires a joint, ongoing effort by the kernel community, C compilers (such as GCC or clang), the developers who create the userspace C library (most commonly glibc) that implements system calls, and binary applications, which must be laid out in accordance with the Executable and Linking Format (ELF).

[Image by Alison Chaiken, CC BY-SA 4.0]

Why do we care about the ABI?

The Linux ABI stability guarantee that comes from Torvalds himself enables Linux distros and individual users to update the kernel independently of the operating system.

If Linux did not have a stable ABI, then every time the kernel needed patching to address a security problem, a large part of the operating system, if not the entirety, would need to be reinstalled. Obviously, the stability of the binary interface is a major contributing factor to Linux's usability and wide adoption.

[Image by Alison Chaiken, CC BY-SA 4.0]


As the second figure illustrates, both the kernel (in linux-libc-dev) and Glibc (in libc6-dev) provide bitmasks that define file permissions. Obviously the two sets of definitions must agree! The apt package manager identifies which software project provided each file. The potentially unstable part of Glibc's ABI is found in the bits/ directory.
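As an illustration of those shared definitions, Python's standard stat module mirrors the same owner/group/other permission bitmasks that the kernel and glibc headers define; the 0o644 composition below is just an example:

```python
import stat

# The owner read/write/execute bits, as the kernel and C library define them.
print(oct(stat.S_IRUSR))  # 0o400
print(oct(stat.S_IWUSR))  # 0o200
print(oct(stat.S_IXUSR))  # 0o100

# A familiar mode like 0o644 is just these masks OR'd together.
mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH
print(oct(mode))  # 0o644
```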

For the most part, the Linux ABI stability guarantee works just fine. In keeping with Conway's Law, vexing technical issues that arise in the course of development most frequently occur due to misunderstandings or disagreements between different software development communities that contribute to Linux. The interface between communities is easy to envision via Linux package-manager metadata, as shown in the image above.

Y2038: An example of an ABI break

The Linux ABI is best understood by considering the example of the ongoing, slow-motion "Y2038" ABI break. In January 2038, 32-bit time counters will roll over to all zeroes, just like the odometer of an older vehicle. January 2038 sounds far away, but assuredly many IoT devices sold in 2022 will still be operational. Mundane products like smart electrical meters and smart parking systems installed this year may or may not have 32-bit processor architectures and may or may not support software updates.

The Linux kernel has already moved to a 64-bit time_t opaque data type internally to represent later timepoints. The implication is that system calls like time() have already changed their function signature on 64-bit systems. The arduousness of these efforts is on ready display in kernel headers like time_types.h, which includes new and "_old" versions of data structures.
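The rollover itself is easy to reproduce. The helper below imitates a signed 32-bit counter in plain Python; it is an illustration of the arithmetic, not kernel code:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def as_int32(n: int) -> int:
    """Wrap an integer the way a signed 32-bit counter does."""
    return ((n + 2**31) % 2**32) - 2**31

last_good = 2**31 - 1               # the largest signed 32-bit value
print(EPOCH + timedelta(seconds=last_good))
# 2038-01-19 03:14:07+00:00

rolled = as_int32(last_good + 1)    # one second later, the counter wraps
print(EPOCH + timedelta(seconds=rolled))
# 1901-12-13 20:45:52+00:00
```

One second past 03:14:07 UTC on January 19, 2038, a 32-bit time_t lands back in 1901, which is the "odometer" rollover described above.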

[Image by marneejill, CC BY-SA 2.0]

The Glibc project also supports 64-bit time, so yay, we're done, right? Unfortunately, no, as a discussion on the Debian mailing list makes clear. Distros are faced with the unenviable choice of either providing two versions of all binary packages for 32-bit systems or two versions of installation media. In the latter case, users of 32-bit time will have to recompile their applications and reinstall. As always, proprietary applications will be a real headache.

What precisely is in the Linux stable ABI anyway?

Understanding the stable ABI is a bit subtle. Consider that, while most of sysfs is stable ABI, the debug interfaces are guaranteed to be unstable since they expose kernel internals to userspace. In general, Linus Torvalds has pronounced that by "don't break userspace," he means to protect ordinary users who "just want it to work" rather than system programmers and kernel engineers, who should be able to read the kernel documentation and source code to figure out what has changed between releases. The distinction is illustrated in the figure below.

[Image by Alison Chaiken, CC BY-SA 4.0]

Ordinary users are unlikely to interact with unstable parts of the Linux ABI, but system programmers may do so inadvertently. All of sysfs (/sys) and procfs (/proc) are guaranteed stable except for /sys/kernel/debug.

But what about other binary interfaces that are userspace-visible, including miscellaneous ABI bits like device files in /dev, the kernel log file (readable with the dmesg command), filesystem metadata, or "bootargs" provided on the kernel "command line" that are visible in a bootloader like GRUB or u-boot? Naturally, "it depends."

Mounting old filesystems

Next to observing a Linux system hang during the boot sequence, having a filesystem fail to mount is the greatest disappointment. If the filesystem resides on an SSD belonging to a paying customer, the matter is grave indeed. Surely a Linux filesystem that mounts with an old kernel version will still mount when the kernel is upgraded, right? Actually, "it depends."

In 2020 an aggrieved Linux developer complained on the kernel's mailing list:

The kernel already accepted this as a valid mountable filesystem format, without a single error or warning of any kind, and has done so stably for years. . . . I was generally under the impression that mounting existing root filesystems fell under the scope of the kernel<->userspace or kernel<->existing-system boundary, as defined by what the kernel accepts and existing userspace has used successfully, and that upgrading the kernel should work with existing userspace and systems.

But there was a catch: The filesystems that failed to mount were created with a proprietary tool that relied on a flag that was defined but not used by the kernel. The flag did not appear in Linux's API header files or procfs/sysfs but was instead an implementation detail. Therefore, interpreting the flag in userspace code meant relying on "undefined behavior," a phrase that will make software developers almost universally shudder. When the kernel community improved its internal testing and started making new consistency checks, the "man 2 mount" system call suddenly began rejecting filesystems with the proprietary format. Since the format creator was decidedly a software developer, he got little sympathy from kernel filesystem maintainers.

[Image: Kernel developers working in-tree are protected from ABI changes. Alison Chaiken, CC BY-SA 4.0]

Threading the kernel dmesg log

Is the format of files in /dev guaranteed stable or not? The command dmesg reads from the file /dev/kmsg. In 2018, a developer made output to dmesg threaded, enabling the kernel "to print a series of printk() messages to consoles without being disturbed by concurrent printk() from interrupts and/or other threads." Sounds excellent! Threading was made possible by adding a thread ID to each line of the /dev/kmsg output. Readers following closely will realize that the addition changed the ABI of /dev/kmsg, meaning that applications that parse that file needed to change too. Since many distros didn't compile their kernels with the new feature enabled, most users of /bin/dmesg won't have noticed, but the change broke the GDB debugger's ability to read the kernel log.

Assuredly, astute readers will think users of GDB are out of luck because debuggers are developer tools. Actually, no, since the code that needed to be updated to support the new /dev/kmsg format was "in-tree," meaning part of the kernel's own Git source repository. The failure of programs within a single repo to work together is just an out-and-out bug for any sane project, and a patch that made GDB work with threaded /dev/kmsg was merged.

What about BPF programs?

BPF is a powerful tool to monitor and even configure the running kernel dynamically. BPF's original purpose was to support on-the-fly network configuration by allowing sysadmins to modify packet filters from the command line instantly. Alexei Starovoitov and others greatly extended BPF, giving it the power to trace arbitrary kernel functions. Tracing is clearly the domain of developers rather than ordinary users, so it is certainly not subject to any ABI guarantee (although the bpf() system call has the same stability promise as any other). On the other hand, BPF programs that create new functionality present the possibility of "replacing kernel modules as the de-facto means of extending the kernel." Kernel modules make devices, filesystems, crypto, networks, and the like work, and therefore clearly are a facility on which the "just want it to work" user relies. The problem arises that BPF programs have not traditionally been "in-tree" as most open-source kernel modules are. A proposal in spring 2022 to provide support to the vast array of human interface devices (HIDs) like mice and keyboards via tiny BPF programs rather than patches to device drivers brought the issue into sharp focus.

A rather heated discussion followed, but the issue was apparently settled by Torvalds' comments at Open Source Summit:

He specified if you break 'real user space tools, that normal (non-kernel developers) users use,' then you need to fix it, regardless of whether it is using eBPF or not.

A consensus appears to be forming that developers who expect their BPF programs to withstand kernel updates will need to submit them to an as-yet unspecified place in the kernel source repository. Stay tuned to find out what policy the kernel community adopts regarding BPF and ABI stability.


The kernel ABI stability guarantee applies to procfs, sysfs, and the system call interface, with important exceptions. When "in-tree" code or userspace applications are "broken" by kernel changes, the offending patches are typically quickly reverted. When proprietary code relies on kernel implementation details that are incidentally accessible from userspace, it is not protected and garners little sympathy when it breaks. When, as with Y2038, there is no way to avoid an ABI break, the transition is made as thoughtfully and methodically as possible. Newer features like BPF programs present as-yet-unanswered questions about where exactly the ABI-stability border lies.


Thanks to Akkana Peck, Sarah R. Newman, and Luke S. Crawford for their helpful comments on early versions of this material.

Familiarize yourself with the concept of an ABI, why ABI stability matters, and what is included in Linux's stable ABI.

[Image by Internet Archive Book Images, modified, CC BY-SA 4.0]


How the Linux Worker file manager can work for you

Tue, 12/06/2022 - 16:00
How the Linux Worker file manager can work for you Seth Kenlon Tue, 12/06/2022 - 03:00

Computers are like filing cabinets, full of virtual folders and files waiting to be referenced, cross-referenced, edited, updated, saved, copied, moved, renamed, and organized. In this article, I'm taking a look at the Worker file manager for your Linux system.

The Worker file manager dates back to 1999. That's the previous century, and a good seven years before I'd boot into Linux for the first time. Even so, it's still being updated today, but judiciously. Worker isn't an application where arbitrary changes get made, and using Worker today is a lot like using Worker 20 years ago, only better.

[Image by Seth Kenlon, CC BY-SA 4.0]

Install Worker on Linux

Worker is written in C++ and uses only the basic X11 libraries for its GUI, plus a few extra libraries for extra functionality. You may be able to find Worker in your software repository. For example, on Debian, Elementary, Linux Mint, and similar:

$ sudo apt install worker

Worker is open source, so you can alternately just download the source code and compile it yourself.

Using Worker

Worker is a two-panel, tabbed file manager. Most of the Worker window is occupied by these panels, listing the files and directories on your system. The panes function independently of one another, so what's displayed in the left pane has no relationship to what's in the right. That's by design. This isn't a column view of a hierarchy, these are two separate locations within a filesystem, and for that reason, you can easily copy or move a file from one pane to the other (which is, after all, probably half the reason you use a file manager at all.)

To descend into a directory, double-click. To open a file in the default application defined for your system, double-click. All in all, there are a lot of intuitive and "obvious" interactions in Worker. It may look old-fashioned, but in the end, it's still just a file manager. What you might not be used to, though, is that Worker actions are largely based around the keyboard or on-screen buttons. To copy a file from one pane to the other, you can either press F5 on your keyboard, or click the F5 - Copy button at the bottom of the Worker window. There are two panels, so the destination panel is always the one that isn't active.

Actions

The most common actions are listed as buttons at the bottom of the Worker window. For example:

  • $HOME: Change active pane to your home directory

  • F4 - Edit: Open a file in a text editor

  • F5 - Copy: Copy active selection to inactive pane

  • F6 - Move: Move active selection to inactive pane

  • F7 - New directory: Make new directory in active pane

  • Duplicate: Copy active selection to active pane

You don't have to use the buttons, though. As the buttons indicate, there are many keyboard shortcuts defined, and even more can be assigned in your Worker configuration. These are some of the actions I found myself using most:

  • Tab: Change active pane

  • Ctrl+Return: Edit path of active pane

  • Home and End: Jump to the first or last entry in a file list

  • Left Arrow: Go to the parent directory

  • Right Arrow: Go to the selected directory

  • Insert: Change the selection state of the currently active entry

  • NumLock +: Select all (like Ctrl+A)

  • NumLock -: Select none

  • Return: Same as a double-click

  • Alt+B: Show bookmarks

  • Ctrl+Space: Open contextual menu

  • Ctrl+S: Filter by filename

I was ambivalent about Worker until I started using the keyboard shortcuts. While it's nice that you can interact with Worker using a mouse, it's actually most effective as a viewport for the whims of your file management actions. Unlike controlling many graphical file managers with a keyboard, Worker's keyboard shortcuts are specific to very precise actions and fields. And because there are always two panes open, your actions always have a source and a target.

It doesn't take long to get into the rhythm of Worker. First, you set up the location of the panes by pressing Tab to make one active, and Ctrl+Return to edit the path. Once you have each pane set, select the file you want to interact with, and press the keyboard shortcut for the action you want to invoke (Return to open, F5 to copy, and so on.) It's a little like a visual version of a terminal. Admittedly, it's slower than a terminal because it lacks tab completion, but if you're in the mood for visual confirmation of how you're moving about your system, Worker is a great option.


If you're not a fan of keyboard navigation, then there are plenty of options for using buttons instead. The buttons at the bottom of Worker are "banks" of buttons. Instead of showing all possible buttons, Worker displays just the most common actions. You can set how many buttons you want displayed (by default it's 4 rows, but I set it to 2). To "scroll" to the next bank of buttons, click the clock bar at the very bottom of the window.


Click the gear icon in the top left corner of the Worker window to view its configuration options. In the configuration window, you can adjust Worker's color scheme, fonts, mouse button actions, keyboard shortcuts, default applications, default actions for arbitrary file types, and much more.

Get to work

Worker is a powerful file manager, with few dependencies and a streamlined code base. The features I've listed in this article are a fraction of what it's capable of doing. I haven't even mentioned Worker's Git integration, archive options, and interface for executing your own custom commands. What I'm trying to say is that Worker lives up to its name, so try it out and put it to work for you.

Worker is a powerful Linux file manager, with few dependencies and a streamlined code base. Here are a few of my favorite features.


6 steps to get verified on Mastodon with encrypted keys

Mon, 12/05/2022 - 16:00
6 steps to get verified on Mastodon with encrypted keys Seth Kenlon Mon, 12/05/2022 - 03:00

Mastodon permits its users to self-verify. The easiest method to do this is through a verification link. For advanced verification, though, you can use the power of shared encrypted keys, which Mastodon can link to thanks to the open source project Keyoxide.


Pretty good privacy (PGP) is a standard for shared key encryption. All PGP keys come in pairs. There's a public key, for use by anyone in the world, and a secret key, for use by only you. Anyone with your public key can encode data for you, and once it's encrypted only your secret key (which only you have access to) can decode it again.

If you don't already have a key pair, the first step for encrypted verification is to generate one.

There are many ways to generate a PGP key pair, but I recommend the open source GnuPG suite.

On Linux, GnuPG is already installed.

On Windows, download and install GPG4Win, which includes the Kleopatra desktop application.

On macOS, download and install GPGTools.

1. Create a key pair

If you already have a GPG key pair, you can skip this step. You do not need to create a unique key just for Mastodon.

To create a new key, you can use the Kleopatra application. Go to the File menu and select New key pair. In the Key Pair Creation Wizard that appears, click Create a personal OpenPGP key pair. Enter your name and a valid email address, and select the Protect the generated key with a passphrase option. Click Create to generate your key pair.

[Image by Seth Kenlon, CC BY-SA 4.0]

Alternately, you can use the terminal:

$ gpg2 --full-generate-key

Follow the prompts until you have generated a key pair.

2. Add notation

Now that you have a key, you must add special metadata to it. This step requires the terminal (PowerShell on Windows), but it's highly interactive and not very complex.

First, take a look at your secret key:

gpg2 --list-secret-keys

The output displays your GnuPG keyring, containing at least one secret key. Locate the one you want to use for Mastodon (this might be the only key, if you've just created your first one today.) In the output, there's a long alphanumeric string just above a line starting with uid. That long number is your key's fingerprint. Here's an example:

sec   rsa4096 2022-11-17 [SC]
uid           [ultimate] Tux <>
ssb   rsa4096 2022-11-17 [E]

This example key's fingerprint is 22420E443871CF4313B9E90D50C9169F563E50CF. Select your key's fingerprint with your mouse, and then right-click and copy it to your clipboard. Then copy it into a document somewhere, because you're going to need it a lot during this process.
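If you want to sanity-check the fingerprint you copied, a small hypothetical Python helper like this works; the function name and validation rules are my own, not part of GnuPG:

```python
def normalize_fingerprint(raw: str) -> str:
    """Strip spaces and uppercase a copied GnuPG fingerprint."""
    fpr = "".join(raw.split()).upper()
    # A v4 OpenPGP fingerprint is a 160-bit value: 40 hex characters.
    if len(fpr) != 40 or any(c not in "0123456789ABCDEF" for c in fpr):
        raise ValueError("not a 40-character hexadecimal fingerprint")
    return fpr

# gpg often displays fingerprints in spaced groups; both forms normalize:
print(normalize_fingerprint("2242 0E44 3871 CF43 13B9  E90D 50C9 169F 563E 50CF"))
# 22420E443871CF4313B9E90D50C9169F563E50CF
```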

Now you can add metadata to the key. Enter the GnuPG interface using the gpg2 --edit-key command along with the fingerprint:

gpg2 --edit-key 22420E443871CF4313B9E90D50C9169F563E50CF

At the GnuPG prompt, select the user ID (that's your name and email address) you want to use as your verification method. In this example, there's only one user ID (uid [ultimate] Tux <>) so that's user ID 1:

gpg> uid 1

Designate this user as the primary user of the key:

gpg> primary

For Keyoxide to recognize your Mastodon identity, you must add a special notation:

gpg> notation

The notation metadata, at least in this context, is data formatted according to the Ariadne specification. The notation starts with proof@ariadne.id= and is followed by the URL of your Mastodon profile page.

In a web browser, navigate to your Mastodon profile page and copy the URL. Then enter the notation at the GnuPG prompt, pasting your profile URL after the equals sign:

gpg> notation
Enter the notation: proof@ariadne.id=<your Mastodon profile URL>

That's it. Type save to save and exit GnuPG.

gpg> save

3. Export your key

Next, export your key. To do this in Kleopatra, select your key and click the Export button in the top toolbar.

Alternately, you can use the terminal. Reference your key by its fingerprint (I told you that you'd be using it a lot):

gpg2 --armor --export \
22420E443871CF4313B9E90D50C9169F563E50CF > pubkey.asc

Either way, you end up with a public key ending in .asc. It's always safe to distribute your public key. (You would never, of course, distribute your secret key because it is, as its very name implies, meant to be secret.)

4. Upload your key

Open your web browser and navigate to the keys.openpgp.org keyserver.

On the website, click the Upload link to upload your exported key. Do this even if you've had a GPG key for years and know all about the --send-key option. This is a unique step to the Keyoxide process, so don't skip it.

[Image by Seth Kenlon, CC BY-SA 4.0]


After your key's been uploaded, click the Send confirmation email button next to your email address so you can confirm that you own the address your key claims it belongs to. It can take 15 minutes or so, but when the confirmation email arrives, click the link it contains to verify your email.

5. Add your key to Mastodon

Now that everything's set up, you can use Keyoxide as your verification link for Mastodon. Go to your Mastodon profile page and click the Edit profile link.

On the Edit profile page, scroll down to the Profile Metadata section. Type PGP (or anything you want) into the Label field. In the Content field, type https://keyoxide.org/ followed by your key fingerprint. With the example fingerprint above, the full URL is https://keyoxide.org/22420E443871CF4313B9E90D50C9169F563E50CF.
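Building that URL is mechanical, so here is a tiny hypothetical Python helper; it assumes Keyoxide's base-URL-plus-fingerprint scheme, and the function name is my own:

```python
def keyoxide_url(fingerprint: str, base: str = "https://keyoxide.org") -> str:
    """Build the profile URL Keyoxide serves for a key fingerprint.

    Assumes Keyoxide's <base>/<fingerprint> URL scheme; strips the
    spaces gpg inserts when it displays a fingerprint.
    """
    return f"{base}/{fingerprint.replace(' ', '').upper()}"

print(keyoxide_url("22420E443871CF4313B9E90D50C9169F563E50CF"))
# https://keyoxide.org/22420E443871CF4313B9E90D50C9169F563E50CF
```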

Click the Save button and then return to your profile page.

[Image by Seth Kenlon, CC BY-SA 4.0]

You can click the Keyoxide link in your profile to see your Keyoxide "profile" page. This page is actually just a rendering of the GPG key you created. Keyoxide's job is to parse your key, and to be a valid destination when you need to link to it (from Mastodon, or any other online service.)

6. Build trust

The old Twitter verification method was opaque and exclusive. Somebody somewhere claimed that somebody else somewhere else was really who they said they were. It proved nothing, unless you agree to accept that somewhere there's a reliable network of trust. Most people choose to believe that, because Twitter was a big corporation with lots at stake, and relatively few people (of the relative few who were granted it) complained about the accuracy of the Twitter blue checkmark.

Open source verification is different. It's available to everyone, and proves as much as Twitter verification did. But you can do even better. When you use encrypted keys for verification, you grant yourself the ability to have your peers review your identity and to digitally sign your PGP key as a testament that you are who you claim you are. It's a method of building a web of trust, so that when you link from Mastodon to your PGP key through Keyoxide, people can trust that you're really the owner of that digital key. It also means that several community members have met you in person and signed your key.

Help build human-centric trust online, and use PGP and Keyoxide to verify your identity. And if you see me at a tech conference, let's sign keys!

Get Mastodon's green checkmark with PGP and Keyoxide.
