Open-source News

FreeBSD 12.4 Released With Various Fixes & Improvements

Phoronix - Tue, 12/06/2022 - 18:13
For FreeBSD users not yet on the FreeBSD 13 stable series, FreeBSD 12.4 is now available as the newest point release to that N-1 series...

How to Find Linux OS Name and Kernel Version You Are Running

Tecmint - Tue, 12/06/2022 - 17:00
The post How to Find Linux OS Name and Kernel Version You Are Running first appeared on Tecmint: Linux Howtos, Tutorials & Guides .

There are several ways to find out which version of Linux you are running on your machine, as well as your distribution name and kernel version, plus some extra information that you may want.
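
The article's full command list isn't included in this excerpt, but as a rough illustration, the same information can be pulled from Python's standard library; the file path and keys below are standard on most modern Linux systems.

    import platform

    # Kernel name and release, equivalent to `uname -sr`.
    u = platform.uname()
    print(f"Kernel: {u.system} {u.release}")

    # /etc/os-release stores the distribution name as KEY=VALUE pairs.
    with open("/etc/os-release") as f:
        info = dict(line.rstrip().split("=", 1) for line in f if "=" in line)
    print("Distribution:", info.get("PRETTY_NAME", "unknown").strip('"'))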


A data scientist's guide to open source community analysis

opensource.com - Tue, 12/06/2022 - 16:00
By Cali Dolfi

In the golden age of data analysis, open source communities are not exempt from the frenzy around getting some big, fancy numbers onto presentation slides. Those numbers bring far more value if you master the art of asking a well-framed question and executing the analysis properly.

You might expect me, a data scientist, to tell you that data analysis and automation will inform your community decisions. It's actually the opposite. Use data analysis to build on your existing knowledge of your open source community, to incorporate the knowledge of others, and to uncover potential biases and perspectives not yet considered. You might be an expert at implementing community events, while your colleague is a whiz at all things code. As each of you develops visualizations within the context of your own knowledge, you both can benefit from that information.

Let's have a moment of realness. Everyone has a thousand and one things to keep up with, and it feels like there is never enough time in a day to do so. If getting an answer about your community takes hours, you won't do it regularly (or ever). Spending the time to create a fully developed visualization makes it feasible to keep up with different aspects of the communities you care about.

With the ever-increasing pressure of being "data-driven," the treasure trove of information around open source communities can be a blessing and a curse. Using the methodology below, I will show you how to pick the needle out of the data haystack.

What is your perspective?

When thinking about a metric, one of the first things you must consider is the perspective you want to provide. The following are a few concepts you could establish.

Informative vs. influencing action: Is there an area of your community that is not understood? Are you taking that first step in getting there? Are you trying to decide on a particular direction? Are you measuring an existing initiative?

Exposing areas of improvement vs. highlighting strengths: There are times when you are trying to hype up your community and show how great it is, especially when trying to demonstrate business impact or advocate for your project. When it comes to informing yourself and the community, you can often get the most value from your metrics by identifying shortcomings. Highlighting strengths is not a bad practice, but there is a time and place. Don't use metrics as a cheerleader inside your community to tell you how great everyone is; instead, share that with outsiders for recognition or promotion.

Community and business impact: Numbers and data are the languages of many businesses. That can make it incredibly difficult to advocate for your community and truly show its value. Data can be a way to speak in their language and show what they want to see to get the rest of your messaging across. Another perspective is the impact on open source overall. How does your community impact others and the ecosystem?

These are not always either/or perspectives. Proper framing will help in creating a more deliberate metric.

[Image: Cali Dolfi, CC BY-SA 4.0]

People often describe some version of this workflow when talking about general data science or machine learning work. I will focus on the first step, codifying problems and metrics, and briefly mention the second. From a data science perspective, this article can be considered a case study of that step. The step is sometimes overlooked, but your analysis's actual value starts here. You don't just wake up one day knowing exactly what to look at. Begin by understanding what you want to know and what data you have; that is what gets you to the true goal of thoughtful execution of data analysis.

3 data analysis use cases in open source

Here are three different scenarios you might run into in your open source data analysis journey.

Scenario 1: Current data analysis

Suppose you are starting to go down the analysis path, and you already know that what you're looking into is generally useful to you or your community. How can you improve? The idea here is to build on "traditional" open source community analysis. Suppose your data indicates you have had 120 total contributors over the project's lifetime. That's a value you can put on a slide, but you can't make decisions from it. Start taking incremental steps from just having a number to having insights. For example, from the same data you can break the total down into active versus drifting contributors (contributors who have not contributed in a set amount of time).
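
As a hypothetical sketch of that incremental step, suppose the contribution history lives in a pandas DataFrame with one row per contribution; the column names, the sample rows, and the six-month threshold below are illustrative assumptions, not part of the original analysis.

    import pandas as pd

    # One row per contribution; shape and values are illustrative.
    contributions = pd.DataFrame({
        "contributor": ["ana", "ben", "ana", "cho", "ben"],
        "date": pd.to_datetime([
            "2022-01-10", "2021-03-02", "2022-11-20", "2020-06-15", "2022-10-01",
        ]),
    })

    # Treat a contributor as "drifting" after six months with no activity.
    cutoff = pd.Timestamp("2022-12-06") - pd.DateOffset(months=6)
    last_seen = contributions.groupby("contributor")["date"].max()

    active = (last_seen >= cutoff).sum()
    drifting = (last_seen < cutoff).sum()
    print(f"{active} active contributors, {drifting} drifting contributors")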

Scenario 2: Community campaign impact measurement

[Image: Cali Dolfi, CC BY-SA 4.0]

Consider meetups, conferences, or any community outreach initiative. How do you view your impact and goals? The two steps actually feed into each other: once you establish the campaign's goals, determine what can be measured to detect its effect, and let what is measurable help refine the goals themselves. It's easy to fall into the trap of being vague rather than concrete with plans when a campaign begins.

Scenario 3: Form new analysis areas to impact

[Image: Cali Dolfi, CC BY-SA 4.0]

This situation occurs when you are starting a data analysis from scratch. The previous examples are different parts of this workflow. The workflow is a living cycle; you can always make improvements or extensions. From this concept, the following are the necessary steps to work through. Later in this article, there will be three different examples of how this approach works in the real world.

Step 1: Break down focus areas and perspectives

First, consider a magic eight ball—the toy you can ask anything, shake, and get an answer. Think about your analysis area. If you could get any answer immediately, what would it be?

Next, think about the data. From your magic eight-ball question, what data sources could have anything to do with the question or focus area?

What questions could be answered in the data context to move you closer to your proposed magic eight-ball question? It's important to note that you must consider the assumptions made if you try to bring all the data together.

Step 2: Convert a question to a metric

Here is the process for each sub-question from the first step:

  • Select the specific data points needed.
  • Determine the visualization that achieves the goal of the analysis.
  • Hypothesize the impacts of this information.

Next, bring in the community to provide feedback and trigger an iterative development process. The collaborative portion is where the real magic happens. The best ideas often come when you bring a concept to someone and it inspires them in a way neither of you would have imagined.

Step 3: Analysis in action

This step is where you start working through the implications of the metric or visualization you have created.

The first thing to consider is whether this metric follows what is currently known about the community.

  • If yes: Are there assumptions made that catered to the results?
  • If no: You want to investigate further whether this is potentially a data or calculation issue or if it is just a previously misunderstood part of the community.

Once you have determined that your analysis is stable enough to draw inferences from, you can start to implement community initiatives based on the information. As you use the analysis to determine the next best step, identify specific ways to measure the initiative's success.

Now, observe the community initiatives informed by your metric. Determine whether the impact is observable through your previously established measure of success. If not, consider the following:

  • Are you measuring the right thing?
  • Does the initiative strategy need to change?
Example analysis area: New contributors

What is my magic eight-ball question?
  • Do people have an experience that establishes them as consistent contributors?
What data do I have that goes into the analysis area and magic eight-ball question?
  • Contributor activity for the project's repos, including timestamps.

Now that you have the information and a magic eight-ball question, break the analysis down into subparts and follow each of them to the end. This idea correlates with steps 2 and 3 above.

Sub-question 1: "How are people coming into this project?"

This question aims to see what new contributors are doing first.

Data: GitHub data on first contributions over time (issues, PRs, comments, etc.).

[Image: Cali Dolfi, CC BY-SA 4.0]

Visualization: Bar chart with first-time contributions broken down by quarter.
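
A minimal sketch of that visualization, again assuming an illustrative DataFrame of contributions rather than the author's actual GitHub dataset:

    import pandas as pd
    import matplotlib.pyplot as plt

    contributions = pd.DataFrame({
        "contributor": ["ana", "ben", "ana", "cho", "dee"],
        "date": pd.to_datetime([
            "2022-01-10", "2022-02-02", "2022-04-20", "2022-05-15", "2022-08-01",
        ]),
    })

    # Each contributor's first contribution, bucketed by calendar quarter.
    firsts = contributions.groupby("contributor")["date"].min()
    per_quarter = firsts.dt.to_period("Q").value_counts().sort_index()

    per_quarter.plot(kind="bar", title="First-time contributions by quarter")
    plt.ylabel("New contributors")
    plt.tight_layout()
    plt.show()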

Potential extension: After you talk with other community members, a further examination breaks the information down by quarter and by whether the contributor was a repeat or a drive-by. You can see what people are doing when they come in and whether that tells you anything about their likelihood of sticking around.

[Image: Cali Dolfi, CC BY-SA 4.0]

Potential actions informed by this information:

  • Does the current documentation support contributors for the most common initial contribution? Could you support those contributors better, and would that help more of them stay?
  • Is there a contribution area that is not common overall but is a good sign for a repeat contributor? Perhaps PRs are a common area for repeat contributors, but most people don't work in that area.

Action items:

  • Label "good first issues" consistently and link these issues to the contribution docs.
  • Add a PR buddy to these.

Sub-question 2: "Is our code base really dependent on drive-by contributors?"

Data: Contribution data from GitHub.

[Image: Cali Dolfi, CC BY-SA 4.0]

Visualization: "Total contributions: Broken down by contributions by drive-by and repeat contributor."

Potential actions informed by this information:

  • Does this ratio achieve the program's goals? Is a lot of the work done by drive-by contributors? Is this an underutilized resource, and is the project not doing its part to bring them in?

Analysis: Lessons learned

Numbers and data analysis are not "facts." They can be made to say anything, and your internal skeptic should be very active when working with data. The iterative process is what brings value. You don't want your analysis to be a "yes man." Take time to step back and evaluate the assumptions you've made.

If a metric just points you in a direction to investigate, that is a huge win. You can't look at or think of everything. Rabbit holes can be a good thing, and conversation starters can bring you to a new place.

Sometimes exactly what you want to measure is not there, but you might be able to get valuable details. You can't assume that you have all the puzzle pieces to get an exact answer to your original question. If you start to force an answer or solution, you can take yourself down a dangerous path led by assumptions. Leaving room for the direction or goal of analysis to change can lead you to a better place or insight than your original idea.

Data is a tool. It is not the answer, but it can bring together insights and information that would not have been accessible otherwise. The methodology of breaking down what you want to know into manageable chunks and building on that is the most important part.

Open source data analysis is a great example of the care you must take with all data science:

  • The nuance of the topic area is the most important.
  • The process of working through "what to ask/answer" is often overlooked.
  • Knowing what to ask can be the hardest part, and when you come up with something insightful and innovative, it's much more than whatever tool you choose.

If you are a community member with no data science experience wondering where to start, I hope this information shows you how important and valuable you can be to this process. You bring the insights and perspectives of the community. If you are a data scientist or someone implementing the metrics or visualizations, you have to listen to the voices around you, even if you are also an active community member.

Wrap up

Use the above example as a framework for establishing data analysis of your own open source project. There are many questions to ask of your results, and knowing both the questions and their answers can lead your project in an exciting and fruitful direction.


A 10-minute guide to the Linux ABI

opensource.com - Tue, 12/06/2022 - 16:00
By Alison Chaiken

Many Linux enthusiasts are familiar with Linus Torvalds' famous admonition, "we don't break user space," but perhaps not everyone who recognizes the phrase is certain about what it means.

The "#1 rule" reminds developers about the stability of the applications' binary interface via which applications communicate with and configure the kernel. What follows is intended to familiarize readers with the concept of an ABI, describe why ABI stability matters, and discuss precisely what is included in Linux's stable ABI. The ongoing growth and evolution of Linux necessitate changes to the ABI, some of which have been controversial.

What is an ABI?

ABI stands for Application Binary Interface. One way to understand the concept of an ABI is to consider what it is not. Application Programming Interfaces (APIs) are more familiar to many developers. Generally, the headers and documentation of libraries are considered to be their API, as are standards documents like those for HTML5, for example. Programs that call into libraries or exchange string-formatted data must comply with the conventions described in the API or expect unwanted results.

ABIs are similar to APIs in that they govern the interpretation of commands and exchange of binary data. For C programs, the ABI generally comprises the return types and parameter lists of functions, the layout of structs, and the meaning, ordering, and range of enumerated types. The Linux kernel remains, as of 2022, almost entirely a C program, so it must adhere to these specifications.
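
To make the struct-layout point concrete, here is a small sketch using Python's ctypes: the _fields_ declaration must mirror the C definition of struct timespec field for field, because that binary layout is exactly what the ABI pins down. This assumes a 64-bit Linux system, where both fields are C longs.

    import ctypes

    class Timespec(ctypes.Structure):
        # Field order and widths must match the C struct exactly; the
        # layout agreement, not the field names, is what the ABI governs.
        _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    CLOCK_REALTIME = 0  # constant from <time.h>

    ts = Timespec()
    libc.clock_gettime(CLOCK_REALTIME, ctypes.byref(ts))
    print(f"seconds={ts.tv_sec} nanoseconds={ts.tv_nsec}")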

"The kernel syscall interface" is described by Section 2 of the Linux man pages and includes the C versions of familiar functions like "mount" and "sync" that are callable from middleware applications. The binary layout of these functions is the first major part of Linux's ABI. In answer to the question, "What is in Linux's stable ABI?" many users and developers will respond with "the contents of sysfs (/sys) and procfs (/proc)." In fact, the official Linux ABI documentation concentrates mostly on these virtual filesystems.

The preceding text focuses on how the Linux ABI is exercised by programs but fails to capture the equally important human aspect. As the figure below illustrates, the functionality of the ABI requires a joint, ongoing effort by the kernel community, C compilers (such as GCC or clang), the developers who create the userspace C library (most commonly glibc) that implements system calls, and binary applications, which must be laid out in accordance with the Executable and Linking Format (ELF).

[Image: Alison Chaiken, CC BY-SA 4.0]

Why do we care about the ABI?

The Linux ABI stability guarantee that comes from Torvalds himself enables Linux distros and individual users to update the kernel independently of the operating system.

If Linux did not have a stable ABI, then every time the kernel needed patching to address a security problem, a large part of the operating system, if not the entirety, would need to be reinstalled. Obviously, the stability of the binary interface is a major contributing factor to Linux's usability and wide adoption.

[Image: Alison Chaiken, CC BY-SA 4.0]


As the second figure illustrates, both the kernel (in linux-libc-dev) and Glibc (in libc6-dev) provide bitmasks that define file permissions. Obviously the two sets of definitions must agree! The apt package manager identifies which software project provided each file. The potentially unstable part of Glibc's ABI is found in the bits/ directory.

For the most part, the Linux ABI stability guarantee works just fine. In keeping with Conway's Law, vexing technical issues that arise in the course of development most frequently occur due to misunderstandings or disagreements between different software development communities that contribute to Linux. The interface between communities is easy to envision via Linux package-manager metadata, as shown in the image above.

Y2038: An example of an ABI break

The Linux ABI is best understood by considering the example of the ongoing, slow-motion "Y2038" ABI break. In January 2038, signed 32-bit time counters will overflow and wrap around, much like the odometer of an older vehicle. January 2038 sounds far away, but assuredly many IoT devices sold in 2022 will still be operational. Mundane products like smart electrical meters and smart parking systems installed this year may or may not have 32-bit processor architectures and may or may not support software updates.

The Linux kernel has already moved to a 64-bit time_t opaque data type internally to represent later timepoints. The implication is that system calls like time() have already changed their function signature on 64-bit systems. The arduousness of these efforts is on ready display in kernel headers like time_types.h, which includes new and "_old" versions of data structures.
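
The arithmetic of the rollover is easy to demonstrate; in this sketch, struct's signed 32-bit "i" format stands in for the old time_t.

    import datetime
    import struct

    limit = 2**31 - 1  # largest instant a signed 32-bit time_t can hold
    print(datetime.datetime.fromtimestamp(limit, tz=datetime.timezone.utc))
    # 2038-01-19 03:14:07+00:00

    struct.pack("<i", limit)          # fits in 32 bits
    try:
        struct.pack("<i", limit + 1)  # one second later: overflow
    except struct.error as err:
        print("32-bit time_t overflow:", err)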

[Image: marneejill, CC BY-SA 2.0]

The Glibc project also supports 64-bit time, so yay, we're done, right? Unfortunately, no, as a discussion on the Debian mailing list makes clear. Distros are faced with the unenviable choice of either providing two versions of all binary packages for 32-bit systems or two versions of installation media. In the latter case, users of 32-bit time will have to recompile their applications and reinstall. As always, proprietary applications will be a real headache.

What precisely is in the Linux stable ABI anyway?

Understanding the stable ABI is a bit subtle. Consider that, while most of sysfs is stable ABI, the debug interfaces are guaranteed to be unstable since they expose kernel internals to userspace. In general, Linus Torvalds has pronounced that by "don't break userspace," he means to protect ordinary users who "just want it to work" rather than system programmers and kernel engineers, who should be able to read the kernel documentation and source code to figure out what has changed between releases. The distinction is illustrated in the figure below.

[Image: Alison Chaiken, CC BY-SA 4.0]

Ordinary users are unlikely to interact with unstable parts of the Linux ABI, but system programmers may do so inadvertently. All of sysfs (/sys) and procfs (/proc) are guaranteed stable except for /sys/kernel/debug.
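
Because that stable surface is exposed as ordinary files, exercising it takes nothing more than open() and read(); the paths below are standard procfs entries.

    from pathlib import Path

    # Stable procfs entries: the kernel version banner and release string.
    print(Path("/proc/version").read_text().strip())
    print(Path("/proc/sys/kernel/osrelease").read_text().strip())

    # /sys/kernel/debug (debugfs) is the explicit exception: its contents
    # are kernel internals with no stability guarantee, and reading it
    # normally requires root.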

But what about other binary interfaces that are userspace-visible, including miscellaneous ABI bits like device files in /dev, the kernel log file (readable with the dmesg command), filesystem metadata, or "bootargs" provided on the kernel "command line" that are visible in a bootloader like GRUB or u-boot? Naturally, "it depends."

Mounting old filesystems

Next to observing a Linux system hang during the boot sequence, having a filesystem fail to mount is the greatest disappointment. If the filesystem resides on an SSD belonging to a paying customer, the matter is grave indeed. Surely a Linux filesystem that mounts with an old kernel version will still mount when the kernel is upgraded, right? Actually, "it depends."

In 2020 an aggrieved Linux developer complained on the kernel's mailing list:

The kernel already accepted this as a valid mountable filesystem format, without a single error or warning of any kind, and has done so stably for years. . . . I was generally under the impression that mounting existing root filesystems fell under the scope of the kernel<->userspace or kernel<->existing-system boundary, as defined by what the kernel accepts and existing userspace has used successfully, and that upgrading the kernel should work with existing userspace and systems.

But there was a catch: The filesystems that failed to mount were created with a proprietary tool that relied on a flag that was defined but not used by the kernel. The flag did not appear in Linux's API header files or procfs/sysfs but was instead an implementation detail. Therefore, interpreting the flag in userspace code meant relying on "undefined behavior," a phrase that will make software developers almost universally shudder. When the kernel community improved its internal testing and started making new consistency checks, the "man 2 mount" system call suddenly began rejecting filesystems with the proprietary format. Since the format creator was decidedly a software developer, he got little sympathy from kernel filesystem maintainers.

[Image: Kernel developers working in-tree are protected from ABI changes. Alison Chaiken, CC BY-SA 4.0]

Threading the kernel dmesg log

Is the format of files in /dev guaranteed stable or not? The command dmesg reads from the file /dev/kmsg. In 2018, a developer made output to dmesg threaded, enabling the kernel "to print a series of printk() messages to consoles without being disturbed by concurrent printk() from interrupts and/or other threads." Sounds excellent! Threading was made possible by adding a thread ID to each line of the /dev/kmsg output. Readers following closely will realize that the addition changed the ABI of /dev/kmsg, meaning that applications that parse that file needed to change too. Since many distros didn't compile their kernels with the new feature enabled, most users of /bin/dmesg won't have noticed, but the change broke the GDB debugger's ability to read the kernel log.
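
For the curious, here is a rough sketch of reading raw records from /dev/kmsg: each read() returns one record of the form "prefix;message", where the comma-separated prefix carries the priority/facility value, a sequence number, and a microsecond timestamp. Reading typically requires root, and the parsing here is deliberately loose.

    import os

    fd = os.open("/dev/kmsg", os.O_RDONLY | os.O_NONBLOCK)
    try:
        for _ in range(5):  # first five records in the kernel ring buffer
            try:
                record = os.read(fd, 8192).decode(errors="replace")
            except BlockingIOError:
                break  # no more records available right now
            prefix, _, message = record.partition(";")
            prio_fac, seq, usec = prefix.split(",")[:3]
            print(f"[{int(usec) / 1e6:12.6f}] seq={seq} {message.strip()}")
    finally:
        os.close(fd)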

Assuredly, astute readers will think users of GDB are out of luck because debuggers are developer tools. Actually, no, since the code that needed to be updated to support the new /dev/kmsg format was "in-tree," meaning part of the kernel's own Git source repository. The failure of programs within a single repo to work together is just an out-and-out bug for any sane project, and a patch that made GDB work with threaded /dev/kmsg was merged.

What about BPF programs?

BPF is a powerful tool to monitor and even configure the running kernel dynamically. BPF's original purpose was to support on-the-fly network configuration by allowing sysadmins to modify packet filters from the command line instantly. Alexei Starovoitov and others greatly extended BPF, giving it the power to trace arbitrary kernel functions. Tracing is clearly the domain of developers rather than ordinary users, so it is certainly not subject to any ABI guarantee (although the bpf() system call has the same stability promise as any other). On the other hand, BPF programs that create new functionality present the possibility of "replacing kernel modules as the de-facto means of extending the kernel." Kernel modules make devices, filesystems, crypto, networks, and the like work, and therefore clearly are a facility on which the "just want it to work" user relies. The problem arises that BPF programs have not traditionally been "in-tree" as most open-source kernel modules are. A proposal in spring 2022 to provide support to the vast array of human interface devices (HIDs) like mice and keyboards via tiny BPF programs rather than patches to device drivers brought the issue into sharp focus.

A rather heated discussion followed, but the issue was apparently settled by Torvalds' comments at Open Source Summit:

He specified if you break 'real user space tools, that normal (non-kernel developers) users use,' then you need to fix it, regardless of whether it is using eBPF or not.

A consensus appears to be forming that developers who expect their BPF programs to withstand kernel updates will need to submit them to an as-yet unspecified place in the kernel source repository. Stay tuned to find out what policy the kernel community adopts regarding BPF and ABI stability.
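
For readers who have not seen a BPF program, here is the bcc toolkit's classic hello-world, lightly adapted: it attaches a probe to the clone() syscall entry and prints a line on every call. It requires root and the bcc Python bindings, and kprobe attachment points are exactly the kind of kernel internals that carry no ABI promise.

    from bcc import BPF

    prog = r"""
    int hello(void *ctx) {
        bpf_trace_printk("clone() called\n");
        return 0;
    }
    """

    b = BPF(text=prog)  # compile and load the BPF program
    b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
    b.trace_print()     # stream trace output until interrupted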

Conclusion

The kernel ABI stability guarantee applies to procfs, sysfs, and the system call interface, with important exceptions. When "in-tree" code or userspace applications are "broken" by kernel changes, the offending patches are typically quickly reverted. When proprietary code relies on kernel implementation details that are incidentally accessible from userspace, it is not protected and garners little sympathy when it breaks. When, as with Y2038, there is no way to avoid an ABI break, the transition is made as thoughtfully and methodically as possible. Newer features like BPF programs present as-yet-unanswered questions about where exactly the ABI-stability border lies.

Acknowledgments

Thanks to Akkana Peck, Sarah R. Newman, and Luke S. Crawford for their helpful comments on early versions of this material.


How the Linux Worker file manager can work for you

opensource.com - Tue, 12/06/2022 - 16:00
By Seth Kenlon

Computers are like filing cabinets, full of virtual folders and files waiting to be referenced, cross-referenced, edited, updated, saved, copied, moved, renamed, and organized. In this article, I'm taking a look at the Worker file manager for your Linux system.

The Worker file manager dates back to 1999. That's the previous century, and a good seven years before I'd boot into Linux for the first time. Even so, it's still being updated today, but judiciously. Worker isn't an application where arbitrary changes get made, and using Worker today is a lot like using Worker 20 years ago, only better.

[Image: Seth Kenlon, CC BY-SA 4.0]

Install Worker on Linux

Worker is written in C++ and uses only the basic X11 libraries for its GUI, plus a few extra libraries for extra functionality. You may be able to find Worker in your software repository. For example, on Debian, Elementary, Linux Mint, and similar:

$ sudo apt install worker

Worker is open source, so you can alternatively just download the source code and compile it yourself.

Using Worker

Worker is a two-panel, tabbed file manager. Most of the Worker window is occupied by these panels, listing the files and directories on your system. The panes function independently of one another, so what's displayed in the left pane has no relationship to what's in the right. That's by design. This isn't a column view of a hierarchy; these are two separate locations within a filesystem, and for that reason you can easily copy or move a file from one pane to the other (which is, after all, probably half the reason you use a file manager at all).

To descend into a directory, double-click. To open a file in the default application defined for your system, double-click. All in all, there are a lot of intuitive and "obvious" interactions in Worker. It may look old-fashioned, but in the end, it's still just a file manager. What you might not be used to, though, is that Worker actions are largely based around the keyboard or on-screen buttons. To copy a file from one pane to the other, you can either press F5 on your keyboard, or click the F5 - Copy button at the bottom of the Worker window. There are two panels, so the destination panel is always the one that isn't active.

Actions

The most common actions are listed as buttons at the bottom of the Worker window. For example:

  • $HOME: Change the active pane to your home directory

  • F4 - Edit: Open a file in a text editor

  • F5 - Copy: Copy active selection to inactive pane

  • F6 - Move: Move active selection to inactive pane

  • F7 - New directory: Make new directory in active pane

  • Duplicate: Copy active selection to active pane

You don't have to use the buttons, though. As the buttons indicate, there are many keyboard shortcuts defined, and even more can be assigned in your Worker configuration. These are some of the actions I found myself using most:

  • Tab: Change active pane

  • Ctrl+Return: Edit path of active pane

  • Home and End: Jump to the first or last entry in a file list

  • Left Arrow: Go to the parent directory

  • Right Arrow: Go to the selected directory

  • Insert: Change the selection state of the currently active entry

  • NumLock +: Select all (like Ctrl+A)

  • NumLock -: Select none

  • Return: Double click

  • Alt+B: Show bookmarks

  • Ctrl+Space: Open contextual menu

  • Ctrl+S: Filter by filename

I was ambivalent about Worker until I started using the keyboard shortcuts. While it's nice that you can interact with Worker using a mouse, it's actually most effective as a viewport for the whims of your file management actions. Unlike controlling many graphical file managers with a keyboard, Worker's keyboard shortcuts are specific to very precise actions and fields. And because there are always two panes open, your actions always have a source and a target.

It doesn't take long to get into the rhythm of Worker. First, you set up the location of the panes by pressing Tab to make one active and Ctrl+Return to edit its path. Once you have each pane set, select the file you want to interact with, and press the keyboard shortcut for the action you want to invoke (Return to open, F5 to copy, and so on). It's a little like a visual version of a terminal. Admittedly, it's slower than a terminal, because it lacks tab completion, but if you're in the mood for visual confirmation of how you're moving about your system, Worker is a great option.

Buttons

If you're not a fan of keyboard navigation, then there are plenty of options for using buttons instead. The buttons at the bottom of Worker are "banks" of buttons. Instead of showing all possible buttons, Worker displays just the most common actions. You can set how many buttons you want displayed (by default it's 4 rows, but I set it to 2). To "scroll" to the next bank of buttons, click the clock bar at the very bottom of the window.

Configuration

Click the gear icon in the top left corner of the Worker window to view its configuration options. In the configuration window, you can adjust Worker's color scheme, fonts, mouse button actions, keyboard shortcuts, default applications, default actions for arbitrary file types, and much more.

Get to work

Worker is a powerful file manager, with few dependencies and a streamlined code base. The features I've listed in this article are a fraction of what it's capable of doing. I haven't even mentioned Worker's Git integration, archive options, and interface for executing your own custom commands. What I'm trying to say is that Worker lives up to its name, so try it out and put it to work for you.


The Best Tools for Creating Fillable PDF Forms on Linux

Tecmint - Tue, 12/06/2022 - 13:08
The post The Best Tools for Creating Fillable PDF Forms on Linux first appeared on Tecmint: Linux Howtos, Tutorials & Guides .

Brief: In this article, you will find the best applications that can be used to create PDF files with fillable fields, also known as interactive forms, on Linux. If you need a powerful tool


Fedora 38 Cleared To Produce "Mobility Phosh" Spins

Phoronix - Tue, 12/06/2022 - 07:25
The Fedora Engineering and Steering Committee (FESCo) has provided their blessing to begin creating new x86_64 and AArch64 ISO images for mobile devices that feature the Phosh Wayland compositor...

Armbian 22.11 Released With RISC-V 64-bit UEFI Build Support, New Arm Boards

Phoronix - Tue, 12/06/2022 - 04:00
Armbian 22.11 is now available as the newest version of the Debian/Ubuntu-based Linux distribution that is popular on ARM development boards and supports a wide range of hardware from different vendors...
