
3 delightful features of the Linux QtFM file manager

By sethkenlon | Thu, 12/22/2022

QtFM is a simple file manager that aims to provide the basic features of file management through a fast and intuitive interface. It's available for Linux, BSD, and macOS.

QtFM, as its name suggests, uses the Qt (canonically pronounced "cute") programming toolkit. I've worked with the Qt toolkit both in C++ and Python, and using it is always a pleasure. It's cross-platform, it's got multiple levels of useful abstraction so developers don't have to interact directly with vendor-specific SDKs, and it's highly configurable. From a user's perspective, it's a "natural" and fast experience, whether you're on the latest hardware or on an old computer.

Using QtFM

There's not much to QtFM. It focuses on being exactly what its name claims: a file manager (FM) for Qt. The layout is what you probably expect from a file manager: a list of common places and devices on the left and a list of files on the right.

(Image: Seth Kenlon, CC BY-SA 4.0)

It's got just four menus.

  • File: Create a new file or folder, open a new tab or window, or exit the application.

  • Edit: Copy, paste, move to trash, or create a new bookmark in the left panel.

  • View: Toggle between the list and icon views, adjust the layout.

  • Help: Licensing information, and links to online documentation.

Interacting with QtFM is largely the same experience you're probably used to with any standard-issue file manager. You can click around to navigate, open files in its default application, drag-and-drop files and folders, copy and paste files and folders, launch applications, and whatever else you do when you're interacting with the contents of your computer. It's familiar, so there's basically no learning curve and no unpleasant surprises.

There are, however, several pleasant surprises. Here are three of my favorites.

1. Put a command into a contextual menu

With QtFM, you can add any command you can run in a terminal to the right-click contextual menu. For instance, suppose you want a right-click option that converts an image to the webp format. There's no complex framework or scripting language to learn, and no plugin to develop. You can do it in just three steps:

  1. Go to the Edit menu and select Settings

  2. Click on the Custom actions tab

  3. Click the Add button and enter the command you want to run, using %f for the source file and %n for the new file
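The kind of command you might enter in step 3 can be sketched like this, assuming the cwebp encoder is installed (the file names here are hypothetical stand-ins). QtFM substitutes the %f and %n placeholders before running the command; to see the idea without QtFM, you can simulate the substitution in plain shell:

```shell
#!/bin/sh
# The line you would paste into the Custom actions dialog, assuming cwebp
# is installed (shown as a comment because QtFM, not the shell, runs it):
#
#   cwebp %f -o %n.webp
#
# Simulate the placeholder substitution QtFM performs:
f="photo.png"          # stands in for %f (the selected source file)
n="${f%.*}"            # stands in for %n (here assumed: name minus extension)
echo "cwebp $f -o $n.webp"
```

The echo prints the command QtFM would execute for a file named photo.png.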

(Image: Seth Kenlon, CC BY-SA 4.0)

The action now appears in your QtFM contextual menu.

2. Flexible layout

One of the built-in features of the Qt toolkit is that many of its components ("widgets") are detachable. QtFM takes advantage of this and allows you to unlock its layout from the View menu. Once unlocked, you can drag toolbars and side panels, anchoring them in new positions around the window. I was able to combine the menu bar, navigation toolbar, and the URI field into a unified panel, and I placed a file tree on the right side of the window for convenience.

(Image: Seth Kenlon, CC BY-SA 4.0)

This requires no special knowledge of application design or even configuration. You just unlock, drag and drop, and lock.

3. Tabbed view

Many Linux file managers offer tabs the same way as most web browsers do. It's a simple interface trick that lets you keep several locations handy. I don't know whether it actually saves time, but I always feel like it does. QtFM offers tabs, too, and there are two things I particularly enjoy about the way it implements them.

First of all, the tabs are at the bottom of the window by default (you can change that in Settings). Because I tend to read from left to right and top to bottom, I usually prefer to have "extra" information at the bottom and right edges of a window. Of course, what constitutes "extra" information varies from user to user, so I don't blame any developer for placing widgets and panels in places I wouldn't put them. It's nice, though, when a developer accidentally agrees with my preferences.

Secondly, the tabs are responsive. You can drag a file or folder from one tab into another just by hovering over your target tab. It feels as natural as dragging and dropping from one window to another.

Install QtFM

On Linux, your distribution may package QtFM in its software repository. If so, you can use your package manager to install it. For example, on Debian and Debian-based systems:

$ sudo apt install qtfm

If your distribution doesn't offer QtFM, you may find a package for it on its website, or you can download the source code from its Git repository.

This Linux file manager does everything you'd expect it to, leaving no unpleasant surprises. But it offers a few pleasant surprises that make it worth a try.

(Image: Opensource.com)

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

6 Kubernetes articles every open source enthusiast should read

By cherrybomb | Thu, 12/22/2022

We learned a lot about Kubernetes in 2022. It seems every year, Kubernetes gets better and better. For all those beginners out there, this year's coverage is wide-ranging and detailed, including a couple of new eBooks. This article covers what I found to be the best Kubernetes articles of 2022. From visual maps to personal journeys, these articles definitely shine a light on the power of Kubernetes. So let's get started with my favorite ones.

A beginner's guide to cloud-native open source communities

At some point, we were all beginners in cloud-native and open source communities. You may be a beginner now and wonder, "How do I get involved?" That's where Anita's article, A beginner's guide to cloud-native open source communities, helps. With clear explanations and definitions, from cloud-native concepts in general to architecture, each section offers a breadth of new knowledge for those new to the cloud-native environment. Anita also explains the cloud-native foundation and points to its communities in helpful detail. The best part about this article is the abundance of learning resources and a step-by-step guide on how to start your journey in the cloud-native ecosystem.

A visual guide to Kubernetes networking fundamentals

This article by Nived covers some Kubernetes networking fundamentals also used in real-world everyday networking. With networking being one of the more confusing Kubernetes topics for most people, these detailed graphs and explanations take you a long way toward understanding the day-to-day networking inside your cluster. The extensive visuals and detailed descriptions are amazing and helpful for the visual learners out there. If Kubernetes networking is something you need to brush up on or if you're just starting, A visual guide to Kubernetes networking fundamentals is a good place to begin.

Experiment with containers and pods on your own computer

Some people prefer learning by experimentation. Usually, that would be me. In Seth's article, you learn by exploring on your own equipment. While explaining the relevant tools and the difference in overhead between virtual machines and containers, Seth provides a great Apache build example. Experiment with containers and pods on your own computer also links to an eBook at the end so you can learn more about containers!

My journey with Kubernetes

Mike Dame documents his journey in his first-ever published work. While checking off a personal goal, he talks about how he started with Kubernetes at work with OpenShift, where he met many people who were confused by Kubernetes Operators. In My journey with Kubernetes, Mike walks through his book's storyline to give a high-level understanding of Operators, pointing out his goals and how he hopes to provoke ideas using the concepts he's learned along the way. This article is a good overview of what his journey and book offer those who want to learn more about Kubernetes Operators.

Open source DevOps tools in a platform future

Open source DevOps tools in a platform future by Will Kelly is a great read about DevOps and open source tools. Starting with the current state of tools and the fact that they won't go away, this article offers three cool examples of utilities. With a brief overview of Git, Jenkins, and Kubernetes, Will explains why these tools are widely used and will stay around. Will also covers DevOps platforms and toolchains with a good advantages and disadvantages section, along with explaining how the work will change over time with teams and software.

A guide to container orchestration with Kubernetes

This article by Seth is a nice introduction to the newest Kubernetes orchestration eBook. While explaining what containers are and how to run them, Seth gives another great example of launching a container using Podman. Seth slowly builds up this introductory article by covering the sustainability of containers, creating pods of containers, and finally, clusters of pods and containers. Check out A guide to container orchestration with Kubernetes and the eBook if you want to know how orchestration works with Kubernetes.

3 honorable mentions

While I did cover some great articles in this list, a few more from this year deserve honorable mention.

Final thoughts

This year's Kubernetes articles cover a great breadth of knowledge for beginners and those who just need a place to start. If you're interested in starting with Kubernetes, check these articles out for fundamentals on where to begin.



How to migrate your code from PHP 7.4 to 8.1

By gilzow | Wed, 12/21/2022

The end-of-life (EOL) for PHP 7.4 was Monday, November 28, 2022. If you’re like me, that date snuck up much faster than anticipated. While your PHP 7.4 code isn’t going to immediately stop working, you do need to begin making plans for the future of this codebase.

What are your options?

You could remain on PHP 7.4, but there are several benefits to updating. The biggest are security and support. As we move farther from the EOL date, attackers will turn their focus to PHP 7.4, knowing that any vulnerabilities they discover will go unpatched on the majority of systems. Staying on PHP 7.4 drastically increases the risk of your site being compromised in the future. In a similar vein, finding support for issues you encounter with PHP 7.4 will become increasingly difficult. You will also most likely begin to encounter compatibility issues with third-party code and packages as they update for later versions and drop support for 7.4. And you’ll be missing out on significant speed and performance improvements introduced in 8.0 and further improved in 8.1. But upgrading all that legacy code is daunting!

Where to start?

Luckily, PHP provides an official migration guide from PHP 7.4 to 8.0 to get you started (and an 8.0 to 8.1 migration guide as well). Be sure to read through the Backward Incompatible Changes and Deprecated Features sections. While these guides are incredibly handy, you may very well have tens of thousands of lines of code to check, some of which you may have inherited. Luckily there are some options to help pinpoint potential problem areas in the migration.

PHPCodeSniffer + PHPCompatibility sniffs

PHPCodeSniffer (PCS) is a package for syntax checking of PHP code. It checks your code against a collection of defined rules (aka “sniffs”) referred to as “standards”. PHPCodeSniffer ships with a collection of standards you can use, including PEAR, PSR1, PSR2, PSR12, Squiz, and Zend, and you can write your own collection of sniffs to define any set of rules you like.

PHPCompatibility has entered the chat

PHPCompatibility “is a set of sniffs for PHP CodeSniffer that checks for PHP cross-version compatibility” allowing you to test your codebase for compatibility with different versions of PHP, including PHP 8.0 and 8.1. This means you can use PHPCodeSniffer to scan your codebase, applying the rules from PHPCompability to sniff out any incompatibilities with PHP 8.1 that might be present.

Before I continue…

While PHP 8.2 was released on December 8, 2022, and I encourage you to look over the official 8.1 to 8.2 migration guide and begin making plans to upgrade, most of the checkers I mention in this article do not yet fully support 8.2. For that reason, I’ll be focusing on migrating code to PHP 8.1, not 8.2.

In the process of writing this article, I discovered PHPCompatibility has a known issue when checking for compatibility with PHP 8.0/8.1: it reports issues that should be Errors as Warnings. The only workaround for now is to use the develop branch of PHPCompatibility instead of master. While the maintainers state it is stable, please be aware that in this article, I’m using the non-stable branch. You may want to weigh the pros and cons of using the develop branch before implementing it anywhere other than a local development environment. I found PCS+PHPCompatibility to be the most straightforward and comprehensive solution for checking for incompatible code, but if you do not want to use a non-stable version of PCS, see the section at the end of the article about alternative options.

For the purposes of this article, I’ll be using version 1.4.6 of SimpleSAMLphp to test for incompatibilities. This is a six-year-old version of the codebase. I chose it not to pick on SimpleSAMLphp, but because I wanted something that would definitely have some errors. As it turns out, all of the Platform.sh code I tested, as well as my own, was already compatible with PHP 8.1 and required no changes.

Get started

To get started, first clone your codebase, and then create a new branch. You’ll now need to decide if you want to install the dependencies and run the scans on your local machine or in a local development environment using something like DDEV, Lando, or Docksal. In this demo, I’m using DDEV. I suggest using a local development environment vs running directly on your local machine because while it’s not required to use the version of PHP you want to test against, for the best results, it is recommended you do so. If you don’t have PHP installed, or don’t have the target version installed, a local development environment allows you to create an ephemeral environment with exactly what you need without changing your machine.
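If you go the DDEV route, pinning the PHP version is a one-line setting in the project configuration. This is a minimal sketch of the relevant fragment; the project name is hypothetical, and the rest of your config stays as DDEV generated it:

```yaml
# .ddev/config.yaml (fragment)
name: my-php-app      # hypothetical project name
type: php
docroot: ""
php_version: "8.1"    # run the containers on the version you're migrating to
```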

After setting up your environment for PHP 8.1, at a terminal prompt (in my case, I’ve run ddev start and, once the containers are available, shelled into the web app using ddev ssh), you need to add the new packages you’ll test with. I’ll be adding them with composer; however, there are multiple ways to install them if you would prefer to do so differently. If your codebase isn’t already using composer, you’ll need to run composer init before continuing.

Because you'll be using the develop branch of PHPCompatibility, there are a couple of extra steps that aren’t in the regular installation instructions. First, the develop branch requires an alpha version of phpcsstandards/phpcsutils. Because that package is marked as alpha, you'll need to let composer know it is OK to install even though it is below your minimum stability requirements.

$ composer require --dev phpcsstandards/phpcsutils:"^1.0@dev"

Next, install PHPCompatibility, targeting the develop branch:

$ composer require --dev phpcompatibility/php-compatibility:dev-develop

The develop branch also installs dealerdirect/phpcodesniffer-composer-installer so you don’t need to add it manually or direct PCS to this new standard.

To verify the new standards are installed, have PCS display the standards it is aware of:

$ phpcs -i
The installed coding standards are MySource, PEAR, PSR1, PSR2, PSR12, Squiz, Zend, PHPCompatibility, PHPCS23Utils and PHPCSUtils

Now that you know your standards are available, you can have PCS scan your code. To instruct PCS to use a specific standard, use the --standard option and tell it to use PHPCompatibility. You also need to tell PHPCompatibility which PHP version to test against: use PCS’ --runtime-set option and pass it the key testVersion with a value of 8.1.

Before you start the scan, one issue remains: the code you want to scan is in the root of the project (.), but the vendor directory is also in the project root. You don’t want the code in vendor scanned, as those aren’t packages you necessarily control. PCS lets you skip files and directories with the --ignore option. Finally, to see progress as PCS parses the files, pass in the -p option.

Putting it all together:

$ phpcs -p . --standard=PHPCompatibility --runtime-set testVersion 8.1 --ignore=*/vendor/*

This kicks off PCS, which outputs its progress as it scans through your project’s code. W indicates Warnings, and E indicates Errors. At the end of the scan, it outputs a full report with the file containing each issue, the line number where the issue occurs, whether the issue is a Warning or an Error, and the specific issue discovered.

In general, Errors are things that will cause a fatal error in PHP 8.1 and will need to be fixed before you can migrate. Warnings can be things that have been deprecated in 8.0/8.1 but not yet removed or issues that PCS ran into while trying to parse the file.

Given that the report might be long, and is output all at once into your terminal, there are numerous options for changing the information that is included in the report, as well as multiple reporting formats.
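Two reporting options that ship with PHP_CodeSniffer are worth knowing: --report=summary condenses the output to per-file totals, and --report-file writes the full report to a file. As a sketch, here is the same scan assembled with each option; the echo lines print the command strings so you can see them, and you would drop the echo to actually run the scans in your project root:

```shell
#!/bin/sh
# Assemble the scan command from the article once, then show it with two
# real phpcs reporting options. Remove the `echo`s to run the scans.
base='phpcs -p . --standard=PHPCompatibility --runtime-set testVersion 8.1 --ignore=*/vendor/*'
echo "$base --report=summary"               # condensed per-file totals
echo "$base --report-file=phpcs-report.txt" # save the full report for later review
```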

As you begin to fix your code, you can rerun the report as many times as needed. However, at some point, you’ll need to test the code in an actual PHP 8.1 environment with real data. If you’re using Platform.sh, that’s as easy as creating a branch, changing a single line in your configuration file, and pushing that branch. You can check out this video to see how easy it is!
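On Platform.sh, the single line in question is the application type. A minimal sketch of the fragment, with a hypothetical app name:

```yaml
# .platform.app.yaml (fragment) — bumping `type` is the one-line change
name: app            # hypothetical application name
type: 'php:8.1'      # was 'php:7.4'
```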

There’s too much to fix!

Now that you have a solid idea of what needs to be updated before you can migrate, you might be facing an incredible amount of work ahead of you. Luckily, you have some options to help you out. PCS ships with a code fixer called PHP Code Beautifier and Fixer (phpcbf). Running phpcbf is almost identical to running phpcs and most of the options are identical. The other option is Rector. Usage of these tools is beyond the scope of this article, but as with any automation, you’ll want to test and verify before promoting changes to production.

Alternative options

If for any reason you don’t feel comfortable using a non-stable version of PCS, you do have other options for checking your code.

Phan

Phan is a static code analyzer for PHP. It offers multiple levels of analysis and allows for incrementally strengthening that analysis.

“Static analysis needs to be introduced slowly if you want to avoid your team losing their minds.”

Phan doesn’t target just compatibility with newer versions, it can highlight areas of code that will error in later versions. However, there are some caveats when using Phan for checking compatibility:

  • It's slower than PCS+PHPCompatibility.
  • Phan requires the ast PHP extension, which is not available by default on Platform.sh (or in DDEV). You’ll need to install it in your local development environment and add it to your php.ini file. Alternatively, you can use the --allow-polyfill-parser option, but it is considerably slower.
  • Phan’s default reporting output isn’t as easy to read as other options.
  • I came across an issue where, if your codebase sets a different vendor directory via composer’s config:vendor-dir option (https://getcomposer.org/doc/06-config.md#vendor-dir), Phan errors out stating it can’t find certain files in the vendor directory.
  • As mentioned, Phan analyzes much more than just PHP 8.1 compatibility. While certainly a strength in other situations, if your goal is to migrate from 7.4 to 8.1 as quickly as possible, you will have to parse through errors that are unrelated to version compatibility.
  • It requires you to run it on the PHP version you want to target.
PHPStan

Similar to Phan, PHPStan is a static code analyzer for PHP that promises to “find bugs without writing tests.” And a similar set of caveats apply:

  • It's slower than either PCS or Phan.
  • It analyzes much more than just PHP 8.1 compatibility, so depending on your current codebase, you may have to parse through a bunch of errors that are unrelated to version compatibility.
  • It requires you to run it on the PHP version you want to target.
PHP Parallel Lint

A very fast PHP linter that can check your codebase for issues, including deprecations. While it is exceptionally fast, it is only a linter, and therefore can only surface deprecations thrown at compile time, not at runtime. In my example code, it found only 2 deprecations versus the 960 deprecations PCS uncovered.

Summary

Code migrations, while never fun, are crucial to minimizing organizational risk. Platform.sh gives you the flexibility to test your code using the same data and configurations as your production site, but in a siloed environment. Combine this with the tools above, and you have everything you need for a strong, efficient code migration.

This article was originally published on the Platform.sh community site and has been republished with permission.

With the recent end-of-life for PHP 7.4, it's time to migrate your code. Here are a few options to do that.

(Image: Opensource.com)


Open source solutions for EV charging

By jmpearce | Wed, 12/21/2022

Maybe you hate pumping gas in the cold (or heat), or you care about the environment. Maybe the latest gas prices and general inflation have you thinking more about stretching your money. Perhaps you simply think electric vehicles (EVs) look cool. No matter the reason, you're excited about your next vehicle being an EV, and you're not alone! The EV market share is set to expand to 30% by 2040. The US government provides a handy comparison tool showing that the cost of owning an EV easily beats owning and operating a fossil fuel vehicle. Despite this, EV charging costs can still hit your wallet hard.

One of the most elegant ways to solve cost problems in general is to apply open source principles to accelerate innovation. Fortunately for you, this has been done in the EV charging area to find a way to get low-cost electricity and low-cost chargers.

To control the costs of EV charging, first you need low-cost electricity. In the old days, that would mean going from oil to coal, which is not a step up. Today, as it turns out, solar photovoltaic (PV) devices that convert sunlight directly into electricity normally provide the lowest-cost electricity. Coal companies are going bankrupt because they can no longer compete with clean solar power. This is also why solar power is seeing explosive growth all over the world. Many homeowners are putting solar panels on their roofs or on ground mounts in the backyard to cover all of their home’s electric needs. But how can you charge your EV with solar energy if you have limited roof area or a small backyard?

Open source PV parking canopy

One approach that major corporations are taking is to make a PV canopy over their parking lots. If you want to do this yourself, a new study provides a full mechanical and economic analysis of three novel open source PV canopy systems:

  1. Use an exclusively wood, single-parking-spot spanning system
  2. Use a wood and aluminum double-parking-spot spanning system
  3. Use a wood and aluminum cantilevered system

The designs are presented as 5-by-6 stall builds, but all three systems scale to any number of parking spots, including a single-stall 6kW system to charge one car at home (as shown below). All of the racks are rated for a 25-year expected lifetime to match the standard PV warranty.

(Image: Vandewetering, Hayibo, Pearce, CC BY)


The open source PV canopies are all designed to withstand a brutal Canadian winter, and they follow Canada's strict building codes, so if you live anywhere else, the systems as designed should still work for you. The complete designs and bills of materials are provided, along with basic instructions. They are released under an open source license that enables anyone to fabricate them, following the spirit of To Catch the Sun, the free book about DIY solar power collectors.

The results of the previously mentioned study show that open source designs are much less expensive than proprietary products. Single-span systems provide cost savings of 82-85%, double-span systems save 43-50%, and cantilevered systems save 31-40%.

Most importantly, the designs give you more than enough energy (if you have a normal commute) to cover your charging needs. In the first year of operation, PV canopies can provide 157% of the energy needed to charge the least efficient EV currently on the market.

(Image: OpenEVSE, CC BY)

Open source EV chargers

Another way to cut the cost of EV ownership is to install an open source EV charger. OpenEVSE is an Arduino-based charging station composed of open source software and hardware which can be made DIY-style. They are small, lightweight, and portable, so you can use them at home or on the road.

OpenEVSE powers charging stations for many EV manufacturers all over the world. You can adapt it to fit your requirements. OpenEVSE is now quite mature and supports advanced features including adjustable current, temperature monitoring, and a real-time power display. You can buy the hardware pre-assembled and ready to go. If you want to save more money (and have more fun) buy a kit and build it yourself.

(Image: OpenEVSE, CC BY)

I hope to see more designs of EV power solutions in the future. Keep your eyes open, roll up your sleeves for some DIY, and enjoy assembling your open source, solar-powered EV charging solutions!

Harness solar power, hardware, and open source to build your own electric vehicle charging station.

(Image: Photo by Erik Witsoe on Unsplash)


5 must-read resources for using the Linux command line

By Jim Hall | Wed, 12/21/2022

In the beginning, there was the command line. While modern Linux distributions include graphical desktops like GNOME and KDE, the command line remains one of the power features of every Linux system. With the command line, you can leverage a rich set of instructions to edit and manipulate files, control your system, and automate processes.

This year, our contributors wrote a lot of great articles about the Linux command line. Here are five of my favorite topics.

12 essential Linux commands for beginners

Don Watkins presents this list of twelve essential commands for navigating the Linux command line. If you're new to Linux and want to explore the command line, this is a great list to help you get started.

3 steps to create an awesome UX in a CLI application

Creating a command line program with a great user experience (UX) is a tall order, but Noaa Barki shares three actionable steps to make it work. If you're building your own command line program, Noaa's article will help you to design the commands, design the interface, and provide for backward compatibility.

How I configure Vim as my default editor on Linux

Vim is the venerable visual editor for Linux systems. Expanded from the original vi editor, Vim (vi improved) is a powerful and flexible editor. David Both writes about why Vim is a great editor, and how to set other programs to use Vim for editing.

Tips for using the Linux test command

Add extra flexibility to your shell scripts using the test command. Seth Kenlon wrote about easy and common ways to control the flow of your shell scripts with conditional execution. You can test for files, file types, attributes, and numbers, and make other comparisons to make your scripts more flexible.
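A few of those checks can be sketched in a short script; this example uses a throwaway file from mktemp rather than anything on your system:

```shell
#!/bin/sh
# Exercise a few test/[ checks: file type, attribute, and numeric comparison.
tmp=$(mktemp)
echo "hello" > "$tmp"

[ -f "$tmp" ] && echo "regular file"        # file-type test
[ -s "$tmp" ] && echo "not empty"           # attribute test (size > 0)

count=3
[ "$count" -gt 1 ] && echo "more than one"  # numeric comparison

rm -f "$tmp"
```

Each bracketed expression exits zero on success, which is what makes these checks compose so naturally with && and if.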

6 Linux metacharacters I love to use on the command line

Don Watkins shared this list of special command line characters, including * to select a group of files or > to redirect the output of a command. If you're experimenting with the Linux command line, you may want to learn these important metacharacters to expand your command line usage.
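The two metacharacters mentioned above can be demonstrated in a couple of lines; this sketch works in a throwaway directory so it touches nothing of yours:

```shell
#!/bin/sh
# * (filename glob) and > (output redirection) in action.
dir=$(mktemp -d)
cd "$dir"
touch a.txt b.txt notes.log
ls *.txt > txt-files   # * matches only the .txt files; > captures the listing
cat txt-files
```

The resulting txt-files contains a.txt and b.txt but not notes.log, since the glob never matched it.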

Take a look at some or all of these articles; you are sure to learn something new. And if you are rusty with command-line concepts, these articles will show you the way of the command line.

The Linux command line remains one of the system's most powerful and beloved features.

(Image: Opensource.com)


My 4 favorite features of the 4pane file manager on Linux

By Seth Kenlon | Wed, 12/21/2022

4Pane is a multi-pane file manager for Linux that allows for a customized layout and provides quick access to traditional desktop conveniences as well as common Linux tools. 4Pane aims for speed over visual effects, and places the way you want to work above all else. In honor of its name, here's a list of my four favorite features of this fine file manager.

1. Flexible interface

Image by:

(Seth Kenlon, CC BY-SA 4.0)

The most prominent feature of the 4Pane window is the same as its name: there are four panes in the window by default. In a way, though, there are really only two; each of the two panes is divided into two columns. The column on the left is a directory tree of your current location (your home directory, by default). Files are never displayed in the left column. It's only a directory tree.

The adjacent column displays the contents of the selected directory. When you double-click on a file, it opens in its default application. When you double-click on a directory, that directory is revealed in the left column and the right column displays its contents.

This same model is duplicated in the other window pane.

4Pane has four panes by default, but it doesn't enforce that view. If you're overwhelmed by the four-pane view, click on the View menu and select Unsplit panes. This displays just one pane of two columns. It's a simplified view compared to what's possible, but it's a nice place to start while you're getting used to the column style of browsing files.

Splitting panes

The advantage of a split view is that you don't have to open another window to drag and drop a file or folder from one location to another. This isn't the predominant model for file managers, but it's a popular subset. 4Pane is one of the few, in my experience, that recognizes that it's not always convenient to work laterally. If you prefer to have your second pane at the bottom of the window, go to the View menu and select Split panes horizontally (meaning that the split is horizontal, so the panes are situated vertically to one another).

Image by:

(Seth Kenlon, CC BY-SA 4.0)

2. Tooltip preview

One of my favorite features of 4Pane is the tooltip preview. To activate this, click the photo icon in the top toolbar. With this active, all you have to do is roll your mouse over a file to see a preview of its contents in a tooltip. It may not be a feature you want active all the time. The tooltips can be distracting when you're just browsing files. However, if you're looking for something specific or if you're just not sure exactly what's in a directory, a quick wave of your mouse to get an overview of the contents of several files is satisfyingly efficient.

More Linux resources Linux commands cheat sheet Advanced Linux commands cheat sheet Free online course: RHEL technical overview Linux networking cheat sheet SELinux cheat sheet Linux common commands cheat sheet What are Linux containers? Our latest Linux articles 3. Menu

The menu bar of 4Pane isn't quite like most file manager menu bars you may be accustomed to. There's a menu dedicated to archiving actions, another for mounting devices, and entries for popular Linux commands such as grep and find.

For instance, in the Archive menu, you can choose to extract an archive or compressed file, create a new archive, add a file to an existing archive, compress a file, and more. I love Ark and similar utilities, but I also recognize how useful it is for a file manager to make those utilities unnecessary. Especially when you're on an old computer, the fewer applications you have to launch, the better.

Also impressive are the built-in front ends for grep and find. I'll admit that I probably won't use it often myself, but I never complain when a developer brings the power of Linux commands to users who aren't [yet] familiar with the terminal.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

The locate front end is probably the most useful of the bunch. It's fast and effective. There's just one field in the dialog box, which keeps file system searches quick.

For example, say you're searching for the file Zombie-Apocalypse-Plan-B.txt because Plan A fell through, but in the heat of the moment (what with zombies knocking down your door, and all) you can't remember where you saved it. Go to the Tools menu and select locate. Type zombie in the search field, click the -i box so that your system ignores capitalization, and click OK. This returns both Zombie-Apocalypse-Plan-A.txt and Zombie-Apocalypse-Plan-B.txt.

Maybe that's good enough for you, or maybe you need a little more precision. In addition to -i for case insensitivity, you can click the -r option to leverage the power of regex. Type zombie.*B to narrow your search to files starting with zombie and containing the letter B later in the filename.

Effective and fast.
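If you're curious what those two checkboxes do under the hood, here's a rough Python sketch of the behavior. The file names are hypothetical, and this emulates locate's -i and -r options rather than reproducing anything from 4Pane itself:

```python
import re

# Hypothetical file names standing in for a locate database.
files = [
    "/home/seth/Zombie-Apocalypse-Plan-A.txt",
    "/home/seth/Zombie-Apocalypse-Plan-B.txt",
    "/home/seth/shopping-list.txt",
]

def locate(pattern, names, ignore_case=False, regex=False):
    """Emulate locate: substring match by default, regex with -r."""
    flags = re.IGNORECASE if ignore_case else 0
    if not regex:
        pattern = re.escape(pattern)  # treat the pattern literally
    return [n for n in names if re.search(pattern, n, flags)]

print(locate("zombie", files, ignore_case=True))                 # both plans
print(locate("zombie.*B", files, ignore_case=True, regex=True))  # Plan B only
```

The -i checkbox corresponds to the ignore_case flag, and -r switches the pattern from a literal substring to a regular expression.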

4. Undo

Finally, my (other) very favorite feature of 4Pane is the Undo button. When you right-click on a file or folder and select Delete, the item is sent to a secret location (it's not actually secret, but it's out of sight and out of mind). The item isn't scrubbed from the hard drive until you close the 4Pane window. Up until then, you can always click the Undo button in the top toolbar to reverse decisions you've come to regret.

This is a separate action from sending a file to your system trash; it's meant to masquerade as an actual delete action. The difference is that the delete is delayed. That may not suit you. Some users are disciplined enough to send files to the system trash, but others skip the trash. This feature is designed to protect you from yourself by delaying deletion until you close the window. I find it a reasonable and invaluable feature, and it's the one feature I've already benefited from several times.

Install 4Pane on Linux

If you're sold on 4Pane, or at least curious about it, then you should install it and try it out! On Linux, your distribution may package 4Pane in its software repository. If so, you can use your package manager to install it. For example, on Fedora, Mageia, OpenMandriva, and similar:

$ sudo dnf install 4pane

On Debian and Debian-based systems:

$ sudo apt install 4pane

If your distribution doesn't carry 4Pane, you can download it from 4pane.co.uk.

Once installed, launch 4Pane from your application menu.



How I use Artipie, a PyPI repo

olena, Tue, 12/20/2022 - 03:00

While developing with Python as a student, I found that I needed some private centralized storage. This was so I could store binary and text data files, as well as Python packages. I found the answer in Artipie, an open source self-hosted software repository manager.

At university, my colleagues and I conducted research and worked with a lot of data from experimental measurements. I used Python to process and visualize them. My university colleagues at the time were mathematicians and didn't have experience with software development techniques. They usually just passed data and code on a flash drive or over email. My efforts to introduce them to a versioning system like Git were unsuccessful.

Python repository

Artipie supports the PyPI repository, making it compatible with both twine and pip. This means you can work with the Artipie Python repository exactly as you would when installing or publishing packages on the PyPI and TestPyPI repositories.

To create your own Python repository, you can use the hosted instance of Artipie called Artipie Central. Once you sign in, you see a page with your repositories listed (which is empty to begin with) and a form to add a new repository. Choose a name for your new repository (for example, mypython), select "Python" as the repository type, and then click the Add button.

Next, you see a page with repository settings in the YAML format:

---
repo:
  type: pypi
  storage: default
  permissions:
    olenagerasimova:
      - upload
    "*":
      - download

The type mapping in the configuration sets the repository type. In this example, the Python repository is configured with the default Artipie Central storage.

The storage mapping defines where all of the repository packages are stored. This can be any file system location or any S3-compatible storage. Artipie Central has a preconfigured default storage that anyone can use for tests.

The permissions mapping allows uploads for the user olenagerasimova, and allows anyone to download any package.

To make sure this repository exists and works, open the index page in your browser. The packages list is displayed. If you've just created a new repository but have yet to upload a package, then the repository index page is blank.

Binary repository

You can store any kind of file in Artipie. The repository type is called file or binary, and I use it to store experimental data, which serves as input for my Python visualizations. A file repository can be created in Artipie Central the same way as a Python repository: give it a name, choose the type binary, and then click the Add button.

---
repo:
  type: file
  storage: default
  permissions:
    olenagerasimova:
      - upload
      - download
    "*":
      - download

The settings are basically the same as for Python. Only the repository type differs. The binary repository, in this example, is called data. It contains three text files with some numbers:

6
3.5
5
4
4.5
3
2.7
5
6
3
1.2
3.2
6

The other two files take the same form (only the numbers are different). To see the files yourself, open the links one, two, and three in your browser and download the files, or perform a GET request using httpie:

httpie https://central.artipie.com/olenagerasimova/data/y1.dat > ./data/y1.dat

These files were uploaded to the Artipie Central data repository with PUT requests:

httpie -a olenagerasimova:*** PUT https://central.artipie.com/olenagerasimova/data/y1.dat @data/y1.dat

httpie -a olenagerasimova:*** PUT https://central.artipie.com/olenagerasimova/data/y2.dat @data/y2.dat

httpie -a olenagerasimova:*** PUT https://central.artipie.com/olenagerasimova/data/y3.dat @data/y3.dat

As this binary repository API is very simple (HTTP PUT and GET requests), it's easy to write a piece of code in any language to upload and download the required files.
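For example, here's a minimal Python sketch using only the standard library. The helper names and the placeholder password are my own invention; the URL and user come from the commands above. The requests are built but intentionally not sent:

```python
import base64
import urllib.request

def build_put(url, data, user, password):
    """Build an authenticated HTTP PUT request, like httpie -a user:pass PUT."""
    req = urllib.request.Request(url, data=data, method="PUT")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

def build_get(url):
    """Build a plain GET request; the data repository allows anonymous downloads."""
    return urllib.request.Request(url, method="GET")

put = build_put("https://central.artipie.com/olenagerasimova/data/y1.dat",
                b"6\n3.5\n5\n", "olenagerasimova", "not-the-real-password")
print(put.get_method())  # PUT
```

Actually sending the request is a single call to urllib.request.urlopen(req), wrapped in whatever error handling you need.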

Python project

The source code of an example Python project is available from my GitHub repository. The main idea of the example is to download three data files from Artipie Central, read the numbers into arrays, and use these arrays to draw a plot. Use pip to install the example package and run it:

$ python3 -m pip install --index-url \
https://central.artipie.com/olenagerasimova/pypi/ \
pypiexample
$ python3 -m pypiexample

By setting --index-url to the Artipie Central Python repository, pip downloads the packages from it rather than from the default PyPI repository. After running the commands, a polar plot with three curves (a visualization of the data files) is displayed.

To publish the package to the Artipie Central repository, build it and use twine to upload it:

$ python setup.py sdist bdist_wheel

$ twine upload --repository-url \
https://central.artipie.com/olenagerasimova/pypi \
-u olenagerasimova -p *** dist/*

That's how easy it is to set up file repositories in Artipie Central, create a sample Python project, and publish and install it. You don't have to use Artipie Central, though. Artipie can be self-hosted, so you can run a repository on your own local network.

Run Artipie as a container

Running Artipie as a container makes setup as easy as installing either Podman or Docker. Assuming you have one of these installed, open a terminal:

$ podman run -it -p 8080:8080 -p 8086:8086 artipie/artipie:latest

This starts a new container running the latest Artipie version. It also maps two ports. Your repositories are served on port 8080. The Artipie Rest API and Swagger documentation are provided on port 8086. A new image generates a default configuration, printing a list of running repositories, test credentials, and a link to the Swagger documentation to your console.

You can also use the Artipie Rest API to see existing repositories:

  1. Go to the Swagger documentation page at http://localhost:8086/api/index-org.html.

  2. In the Select a definition list, choose Auth token.

  3. Generate and copy the authentication token for the user artipie with the password artipie.

  4. Switch to the Repositories definition, click the Authorize button, and then paste in the token.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Perform a GET request for /api/v1/repository/list. In response, you receive a JSON list with three default repositories:

[ "artipie/my-bin", "artipie/my-docker", "artipie/my-maven" ]

The Python repository isn't included in the default configuration. You can correct that by performing a PUT request to /api/v1/repository/{user}/{repo} from the Swagger interface. In this case, user is the name of the default user (artipie) and repo is the name of the new repository. You can call your new Python repository my-pypi. Here's an example request body, containing a JSON object with the repository settings:

{
  "repo": {
    "type": "pypi",
    "storage": "default",
    "permissions": {
      "*": [ "download" ],
      "artipie": [ "upload" ]
    }
  }
}

All the JSON fields are the same as when you create a repository in the dashboard in YAML format. The repository type is pypi, the default storage is used, and anyone can download but only the user artipie can upload.
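If you script this step instead of clicking through Swagger, the same body can be built with Python's json module. This is just a sketch; the field values come from the settings described above:

```python
import json

# Repository settings for PUT /api/v1/repository/artipie/my-pypi
settings = {
    "repo": {
        "type": "pypi",
        "storage": "default",
        "permissions": {
            "*": ["download"],      # anyone can download
            "artipie": ["upload"],  # only the artipie user can upload
        },
    }
}

body = json.dumps(settings)
print(body)
```

Building the body programmatically avoids hand-editing JSON, and json.dumps guarantees the quoting and escaping are valid.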

Make a GET request to /api/v1/repository/list again to make sure your repository was created. Now, you have four repositories:

[ "artipie/my-bin", "artipie/my-docker", "artipie/my-maven", "artipie/my-pypi" ]

You've created your own Artipie installation, containing several repositories! The Artipie image can run on a personal computer or on a remote server inside a private network. You can use it to exchange packages within a company, group, or university. It's an easy way to set up your own software services, and it's not just for Python. Take some time to explore Artipie and see what it can make possible for you.

Artipie is an open source self-hosted software repository manager that can be used for much more than just Python.

Image by:

WOCinTech Chat. Modified by Opensource.com. CC BY-SA 4.0


Explore the features of the Linux Double Commander file manager

sethkenlon, Tue, 12/20/2022 - 03:00

Double Commander is a graphical dual-pane file manager for Linux, in the tradition of Midnight Commander (mc). While Midnight Commander (like the DOS application Norton Commander before it) has its fans, its audience is limited by the fact that it only runs in a terminal window. Not everyone wants to use a "flat" interface embedded in a terminal to browse their file system, and so Double Commander provides a similar interface in a way that feels familiar to many desktop users.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Install Double Commander

To install Double Commander, visit its website and download a package. It's not packaged for a specific Linux distribution, so just download an archive for your CPU architecture.

If you only want to try it out, you can unarchive it and then launch it from your Downloads folder.

To install it permanently, unarchive the package, move it into a location in your path, and then symlink doublecmd to the executable in the source directory:

$ tar xvf doublecmd*tar.xz
$ mv doublecmd ~/.local/bin/doublecmd-X.Y.Z
$ ln -s ~/.local/bin/doublecmd-X.Y.Z/doublecmd ~/.local/bin/doublecmd

How to start Double Commander

To start Double Commander, use the command doublecmd.

Alternatively, you can add an entry for Double Commander in your application menu. First, create the file ~/.local/share/applications/doublecmd.desktop and enter this text into it:

[Desktop Entry]
Encoding=UTF-8
Name=doublecmd
GenericName=Double Commander
Comment=doublecmd
Exec=doublecmd
Icon=/usr/share/icons/Adwaita/scalable/apps/system-file-manager-symbolic.svg
Terminal=false
Type=Application
Categories=System;FileTools;Utility;Core;GTK;FileManager;

Now Double Commander appears in your desktop application menu. Note that this does not make Double Commander your default file manager. It only adds it as an application you can launch when you want to.

Two panels

Dual-panel file management is a tradition within a subset of file managers, and to some users it's a little unsettling. If you think about it, though, most file management tasks involve a source location and a destination location. You might be used to a workflow that goes something like this:

  1. Open a file manager and find a file you want to move.

  2. Open another file manager window and navigate to the folder you want to move the file into.

  3. Drag and drop the file from one window to the other.

You might use a variation of this involving, for instance, a right-click to copy combined with some navigation and another right-click to paste. Either way, the ingredients are the same. You locate the source, you locate the destination, and then you make the transfer.

Given that common factor, it makes sense for a file manager like Double Commander to provide a persistent view of the source location and the destination location. At the very least, it saves you from having to open another window.

Double Commander interface

Once you get used to the idea of two concurrent views in your file system, there are a lot more features to discover in Double Commander.

  • Menu bar: At the top of the window is a menu bar. That's pretty standard conceptually, but the menu entries are probably unlike any menu bar you've seen before: File, Mark, Commands, Network, Tabs, and more. These are task-specific menus, which is great because you can ignore an entire submenu you don't use.

  • Toolbar: Under the menu bar, there are buttons for common tasks such as opening a terminal, copying a file, synchronizing two directories, and more.

  • Locations: The location bar is situated just under the toolbar. It lists devices and file system locations, including your boot partition, optical media drive, virtual shared locations, the root directory, your home directory (listed as ~), and more.

  • File list: Most of the Double Commander window is occupied by the dual panel view of your file system.

  • Command: My favorite feature of Double Commander is the single command field below the file list pane. This allows you to enter an arbitrary command to run within the active pane. This is great for the odd command you need to run in a directory, the kind no file manager anticipates, and so no file manager has a function for. It's the brute force method of the plugin model: Provide a command line and let users run what they need to run whenever they need to run it.

  • Functions: Along the very bottom of the Double Commander window, as with Midnight Commander, there's a list of common functions, each assigned to a Function key on your keyboard.

Using Double Commander

Using Double Commander is a lot like using any file manager, except that Double Commander is focused on groups of actions. For instance, the File menu isn't an obligatory entry with just New Window and New Tab; it's full of useful functions, like creating a symlink or hard link, changing attributes, comparing contents, bulk renaming, splitting and combining files, and more. Double Commander is direct. It gets straight to the point, serving as a stand-in for all the commands you'd normally run in a terminal.

Graphical command interface

More than any other file manager I've seen, Double Commander feels like it's meant to be a graphical interface for commands. You can map almost everything in its interface to a command or series of commands you're used to running in a terminal.

Of course, the question then is whether you need a graphical command line. Why not just run the commands in a terminal? Interestingly, I had the opportunity to witness the value of this recently. There are times, as a support person for other computer users, when trying to get a user to navigate the terminal can be overwhelming. This is particularly true when your user is texting on an app on their mobile phone, and you're giving them commands to type into a terminal on their desktop. This introduces several opportunities for mistakes, and what was meant to be "the fast way" of doing something ends up taking an hour.

It's counter-intuitive to a terminal user, and it's not even always true, but there are times when a graphical interface really is easier to give instructions for. Picture it: A zombie apocalypse rages outside your compound, and the file permissions of a vital file need to be changed in order to activate the firewall. "Open a terminal and type chmod a+x /usr/local/bin/foo…​no, that's ch as in change, mod as in mode but without the e…​no, and then a space. Not between the ch and the mod, just after the mod. And then a space. It's chmod and then a space. Not the word space, just press the spacebar. It's the really long key under your thumb…​"

Or you could just say this: "Click on the file, now with that selected, go to the File menu up at the top and click on Change Attributes…​"

Double Commander's appeal is its powerful feature set disguised as a non-threatening graphical file manager. Download it and try it out for yourself.


Image by:

opensource.com


Use Rexx for scripting in 2023

howtech, Mon, 12/19/2022 - 03:00

In a previous article, I showed how the Rexx scripting language is both powerful and easy to use. It uses specific techniques to reconcile these two goals that are often considered in conflict.

This article walks you through two example Rexx scripts so you can get a feel for the language. Rexx purports to be highly capable yet easy to work with.

An example of a Rexx script

The LISP programming language is famous for its overuse of parentheses. It can be a real challenge for programmers to ensure they're all matched up correctly.

This short script reads a line of LISP code from the user and determines whether the parentheses in the input are properly matched. If the parentheses aren't properly balanced, the program displays a syntax error.

Below are three sample interactions with the program. In the first, the LISP code I entered was correctly typed. But the next two contain mismatched parentheses that the Rexx script identifies:

Enter a line to analyze:
(SECOND (LAMBDA (LIS) (FIRST (CDR LIS)) ))
Parentheses are balanced

Enter a line to analyze:
((EQSTR (CAR LIS1) (CAR LIS2))
Syntax error: too many left parens, not balanced

Enter a line to analyze:
(EQSTR (CAR LIS1) CAR LIS2))
Syntax error: right paren before or without left paren

Here's the Rexx program:

counter = 0                              /* counts parentheses           */
say 'Enter a line to analyze:'           /* prompts user for input       */
pull input_string                        /* reads line of user input     */
length_of_string = length(input_string)

/* process each character of the input line, one at a time */
do j = 1 to length_of_string while counter >= 0
   character = substr(input_string,j,1)
   if character = '(' then counter = counter + 1
   if character = ')' then counter = counter - 1
end

/* display the appropriate message to the user */
if counter = 0 then
   say 'Parentheses are balanced'
else
   if counter < 0 then
      say 'Syntax error: right paren before or without left paren'
   else
      say 'Syntax error: too many left parens, not balanced'


First, the program prompts the user to enter a line of input with the say instruction. Then it reads it with a pull instruction.

The say and pull instructions are used for conversational input/output, or direct interaction with users. Rexx also supports character-oriented and line- or record-oriented I/O.

Next, the script uses the length function to place the length of the input line into the variable length_of_string.

The do loop processes each character from the input line, one at a time. It increments the counter each time it encounters a left parenthesis, and decrements it each time it recognizes a right parenthesis.

If the counter ends up as zero after processing the entire input line, the program knows that any parentheses in the input line match up correctly. If the counter is not 0 after processing, the input line has mismatched parentheses.

The final if statements display the proper message to the user. One could code these if statements in any number of styles, as per individual preference. (The main requirement is that whenever multiple statements are coded within a branch, they must be enclosed in a do...end group.)

This program shows that Rexx is free-form and case-insensitive. It does not rely on reserved words, so you're free to use common words like counter or character to represent variables.

The one key requirement Rexx does impose is that any function must immediately be followed by a left parenthesis. Examples in the program are the length and substr functions. Put a space between a function name and its following parenthesis, and Rexx won't recognize the function.

Outside of a few minimal requirements like these, Rexx requires very little from the programmer in terms of syntax, special characters, or restrictive coding rules.

Rexx programs look and read like pseudo-code. This makes them relatively easy to read and work with.
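To see how directly the logic translates, here's the same algorithm as a Python sketch (my own rendering for comparison, not part of the original script):

```python
def check_parens(line):
    """Mirror the Rexx script: count parentheses left to right."""
    counter = 0
    for character in line:
        if character == "(":
            counter += 1
        elif character == ")":
            counter -= 1
        if counter < 0:  # like the Rexx loop's "while counter >= 0" condition
            return "Syntax error: right paren before or without left paren"
    if counter == 0:
        return "Parentheses are balanced"
    return "Syntax error: too many left parens, not balanced"

print(check_parens("(SECOND (LAMBDA (LIS) (FIRST (CDR LIS)) ))"))
```

The structure is nearly line-for-line identical, which is the point: Rexx reads like the pseudo-code you'd sketch before writing either version.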

A real-world example of a Rexx script

Here's a program from the real world:

Les Koehler, a Rexx user, had a legacy accounting program that matched accounting records on hand against those that a vendor sent to him daily. The legacy program ran for several hours every day to process 25,000 records. It employed a sequential "walk the list" technique to match one set of records against the other.

Les replaced the legacy program with a Rexx script. The Rexx script performs matching by using associative arrays:

/* Create an associative array reflecting  */
/* the values in the first list of names   */
/* by Les Koehler                          */

flag. = 0                  /* Create array, set all items to 0    */
do a = 1 to list_a.0       /* Process all existing records        */
   aa = strip(list_a.a)    /* Strip preceding/trailing blanks     */
   flag.aa = 1             /* Mark each record with a 1           */
end

/* Try to match names in the second list   */
/* against those in the associative array  */

m = 0                      /* M counts number of missing names    */
do b = 1 to list_b.0       /* Look for matching name from LIST_B  */
   bb = strip(list_b.b)    /* Put LIST_B name into variable BB    */
   if \ flag.bb then do    /* If name isn't in FLAG array         */
      m = m+1              /* add 1 to count of missing names     */
      missing.m = bb       /* add missing name to MISSING array   */
   end
end
missing.0 = m              /* Save the count of unmatched names   */

Les was able to reduce processing time from several hours down to a matter of seconds.

The first line of code (flag. = 0) creates a new array called flag and initializes every element in that array to 0.

The array list_a contains all the existing accounting records. Its first element (list_a.0) by convention contains the number of elements in the array.

So the first do loop processes all elements in the array of existing records (list_a) and marks each of them as existing in the flag array. The statement flag.aa = 1 marks the content-addressable item in the flag array as present.

The second do loop steps through each item in the set of new records, contained in the array called list_b.

The if statement checks whether an item from the second array of records is marked present in the flag array. If not, the program increments the number of items present in the new list of accounting records that do not exist in the old list of records. And it puts the missing item into the missing array: missing.m = bb.

The final statement (missing.0 = m) simply updates the number of items in the missing array, by convention stored in array position 0.

Rexx improvements

Why is this Rexx program so fast compared to the legacy code it replaces? First, the associative arrays allow direct lookup of a new record against the old records. Direct access is much faster than the sequential "walk-the-list" technique it replaced.

Secondly, all the array elements reside in memory. Once the files of the old and new accounting records have been initialized into the Rexx arrays, no further disk I/O is needed. Disk I/O is always orders of magnitude slower than memory access.

A Rexx array expands as much as memory allows. This script takes advantage of modern computers with seemingly endless amounts of RAM, and frees the programmer from managing memory.
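The associative-array technique maps directly onto a hash-based membership test in other languages. Here's a Python sketch of the same idea, using illustrative sample data rather than Les's records:

```python
# Old records, stored for direct lookup (the analogue of the flag. array)
existing = {name.strip() for name in ["alice ", " bob", "carol"]}

# New records to check against the old ones (the analogue of list_b)
new_records = [" bob", "dave ", "erin"]

# One direct lookup per name, instead of walking the whole old list each time
missing = [name.strip() for name in new_records
           if name.strip() not in existing]

print(missing)  # names in the new list with no match in the old list
```

Each membership test against the set is a constant-time hash lookup, which is why this approach beats a sequential scan as the record counts grow.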

Conclusion

I hope these two simple programs have shown how easy Rexx is to read, write, and maintain. Rexx is designed to put the burden of programming on the machine instead of the programmer. Yet the language still has plenty of power, due to the design techniques I've described in this series of articles.

For free Rexx downloads, tools, tutorials, and more, visit RexxInfo.org. You can join the Rexx Language Association for free.

This article is dedicated to the memory of Les Koehler, who was active in Rexx and the Rexx community from their very earliest days.

Here are two simple programs to show how easy Rexx is to read, write, and maintain.

Image by:

WOCinTech Chat. Modified by Opensource.com. CC BY-SA 4.0


How I use my old camera as a webcam with Linux

tomoliver, Mon, 12/19/2022 - 03:00

This year after largely abandoning my MacBook in favor of a NixOS machine, I started getting requests to "turn my camera on" when video calling people. This was a problem because I didn't have a webcam. I thought about buying one, but then I realized I had a perfectly good Canon EOS Rebel XS DSLR from 2008 lying around on my shelf. This camera has a mini-USB port, so naturally, I pondered: Did a DSLR, mini-USB port, and a desktop PC mean I could have a webcam?

There's just one problem. My Canon EOS Rebel XS isn't capable of recording video. It can take some nice pictures, but that's about it. So that's the end of that.

Or is it?

There happens to be some amazing open source software called gphoto2. Once installed, it allows you to control various supported cameras from your computer, including taking photos and videos.

Supported cameras

First, find out whether yours is supported:

$ gphoto2 --list-cameras

Capture an image

You can take a picture with it:

$ gphoto2 --capture-image-and-download

The shutter activates, and the image is saved to your current working directory.

Capture video

I sensed the potential here, so despite the aforementioned lack of video functionality on my camera, I decided to try gphoto2 --capture-movie. Somehow, although my camera does not support video natively, gphoto2 still manages to spit out an MJPEG file!

On my camera, I need to put it in "live-view" mode before gphoto2 records video. This consists of setting the camera to portrait mode and then pressing the Set button so that the viewfinder is off and the camera screen displays an image. Unfortunately, though, this isn't enough to be able to use it as a webcam. It still needs to get assigned a video device, such as /dev/video0.

Install ffmpeg and v4l2loopback

Not surprisingly, there's an open source solution to this problem. First, use your package manager to install gphoto2, ffmpeg, and mpv. For example, on Fedora, CentOS, Mageia, and similar:

$ sudo dnf install gphoto2 ffmpeg mpv

On Debian, Linux Mint, and similar:

$ sudo apt install gphoto2 ffmpeg mpv

I use NixOS, so here's my configuration:

# configuration.nix
...
environment.systemPackages = with pkgs; [
  ffmpeg
  gphoto2
  mpv
];
...

Creating a virtual video device requires the v4l2loopback Linux kernel module. At the time of this writing, that capability is not included in the mainline kernel, so you must download and compile it yourself:

$ git clone https://github.com/umlaeute/v4l2loopback

$ cd v4l2loopback

$ make

$ sudo make install

$ sudo depmod -a

If you're using NixOS like me, you can just add the extra module package in configuration.nix:

[...]
boot.extraModulePackages = with config.boot.kernelPackages;
[ v4l2loopback.out ];
boot.kernelModules = [
  "v4l2loopback"
];
boot.extraModprobeConfig = ''
  options v4l2loopback exclusive_caps=1 card_label="Virtual Camera"
'';
[...]

On NixOS, run sudo nixos-rebuild switch and then reboot.
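If you're on a distribution other than NixOS, you can persist the same module options with a modprobe configuration fragment. This is only a sketch; the filename is a convention of mine, not a requirement:

```
# /etc/modprobe.d/v4l2loopback.conf (hypothetical filename)
options v4l2loopback exclusive_caps=1 card_label="Virtual Camera"
```

After creating that file, sudo modprobe v4l2loopback loads the module with those options.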

Create a video device

Assuming your computer currently has no /dev/video device, you can create one on demand thanks to the v4l2loopback module.

Run this command to send data from gphoto2 to ffmpeg, which streams it to a device such as /dev/video0:

$ gphoto2 --stdout --capture-movie |
 ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0

You get output like this:

ffmpeg version 4.4.1 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 11.3.0 (GCC)
  configuration: --disable-static ...
  libavutil      56. 70.100 / 56. 70.100
  libavcodec     58.134.100 / 58.134.100
  libavformat    58. 76.100 / 58. 76.100
  libavdevice    58. 13.100 / 58. 13.100
  libavfilter     7.110.100 /  7.110.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  9.100 /  5.  9.100
  libswresample   3.  9.100 /  3.  9.100
  libpostproc    55.  9.100 / 55.  9.100
Capturing preview frames as movie to 'stdout'. Press Ctrl-C to abort.
[mjpeg @ 0x1dd0380] Format mjpeg detected only with low score of 25, misdetection possible!
Input #0, mjpeg, from 'pipe:':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: mjpeg (Baseline), yuvj422p(pc, bt470bg/unknown/unknown), 768x512 ...
Stream mapping:
  Stream #0:0 -> #0:0 (mjpeg (native) -> rawvideo (native))
[swscaler @ 0x1e27340] deprecated pixel format used, make sure you did set range correctly
Output #0, video4linux2,v4l2, to '/dev/video0':
  Metadata:
    encoder         : Lavf58.76.100
  Stream #0:0: Video: rawvideo (I420 / 0x30323449) ...
    Metadata:
      encoder         : Lavc58.134.100 rawvideo
frame=  289 fps= 23 q=-0.0 size=N/A time=00:00:11.56 bitrate=N/A speed=0.907x

To see the video feed from your webcam, use mpv:

$ mpv av://v4l2:/dev/video0 --profile=low-latency --untimed

Image by:

(Tom Oliver, CC BY-SA 4.0)

Start your webcam automatically

It's a bit annoying to execute a command every time you want to use your webcam. Luckily, you can run this command automatically at startup. I implement it as a systemd service:

# configuration.nix
...
  systemd.services.webcam = {
    enable = true;
    script = ''
      ${pkgs.gphoto2}/bin/gphoto2 --stdout --capture-movie |
        ${pkgs.ffmpeg}/bin/ffmpeg -i - \
            -vcodec rawvideo -pix_fmt yuv420p -f v4l2  /dev/video0
    '';
    wantedBy = [ "multi-user.target" ];
  };

...

On NixOS, run sudo nixos-rebuild switch and then reboot your computer. Your webcam is on and active.

To check for any problems, you can use systemctl status webcam. This tells you the last time the service was run and provides a log of its previous output. It's useful for debugging.
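For reference, the NixOS service above corresponds to a plain systemd unit on other distributions. This is only a sketch; the file path, and the assumption that gphoto2 and ffmpeg are on the system PATH, are mine, not the article's:

```
# /etc/systemd/system/webcam.service (hypothetical path)
[Unit]
Description=DSLR webcam bridge

[Service]
ExecStart=/bin/sh -c 'gphoto2 --stdout --capture-movie | ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0'

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now webcam.service.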

Iterating to make it better

It's tempting to stop here. However, considering the current global crises, it may be pertinent to wonder whether it's necessary to have a webcam on all the time. It strikes me as sub-optimal for two reasons:

  1. It's a waste of electricity.
  2. There are privacy concerns associated with this kind of thing.

My camera has a lens cap, so to be honest, the second point doesn't really bother me. I can always put the lens cap on when I'm not using the webcam. However, leaving a big power-hungry DSLR camera on all day (not to mention the CPU overhead required for decoding the video) isn't doing anything for my electricity bill.

The ideal scenario:

  • I leave my camera plugged in to my computer all the time but switched off.
  • When I want to use the webcam, I switch on the camera with its power button.
  • My computer detects the camera and starts the systemd service.
  • After finishing with the webcam, I switch it off again.

To achieve this, you need to use a custom udev rule.

A udev rule tells your computer to perform a certain task when it discovers that a device has become available. This could be an external hard drive or even a non-USB device. In this case, you need it to recognize the camera through its USB connection.

First, specify what command to run when the udev rule is triggered. You can do that as a shell script (systemctl restart webcam should work). I run NixOS, so I just create a derivation (a Nix package) that restarts the systemd service:

# start-webcam.nix
with import <nixpkgs> { };

writeShellScriptBin "start-webcam" ''
  systemctl restart webcam
  # debugging example
  # echo "hello" &> /home/tom/myfile.txt
  # If myfile.txt gets created then we know the udev rule has triggered properly
''

Next, actually define the udev rule. Find the device and vendor ID of the camera. Do this by using the lsusb command. That command is likely already installed on your distribution, but I don't use it often, so I just install it as needed using nix-shell:

$ nix-shell -p usbutils

Whether you already have it on your computer or you've just installed it, run lsusb:

$ lsusb
Bus 002 Device 008: ID 04a9:317b Canon, Inc. Canon Digital Camera
[...]

In this output, the vendor ID is 04a9 and the device ID is 317b. That's enough to create the udev rule:

ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="04a9", ATTR{idProduct}=="317b", RUN+="/usr/local/bin/start-webcam.sh"
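If you'd rather generate that rule from a script, the ID pair is easy to pull out of the lsusb line. Here's a sketch; the sed pattern, keyed to the word "Canon", is an assumption you'd adapt for your own device:

```shell
# Pull the vendor:product ID pair out of an lsusb line.
# A captured sample line stands in for live lsusb output here.
sample='Bus 002 Device 008: ID 04a9:317b Canon, Inc. Canon Digital Camera'
id=$(printf '%s\n' "$sample" | sed -n 's/.* ID \([0-9a-f:]*\) Canon.*/\1/p')
vendor=${id%:*}
product=${id#*:}
echo "vendor=$vendor product=$product"
# prints: vendor=04a9 product=317b
```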

Alternatively, if you're using NixOS:

# configuration.nix
[...]
let
  startWebcam = import ./start-webcam.nix;
[...]
services.udev.extraRules = ''
  ACTION=="add",  \
  SUBSYSTEM=="usb", \
  ATTR{idVendor}=="04a9", \
  ATTR{idProduct}=="317b",  \
  RUN+="${startWebcam}/bin/start-webcam"
'';
[...]

Finally, remove the wantedBy = ["multi-user.target"]; line in your start-webcam systemd service. (If you leave it, then the service starts automatically when you next reboot, whether the camera is switched on or not.)

Reuse old technology

I hope this article has made you think twice before chucking some of your old tech. Linux can breathe life back into technology, whether it's your computer or something simple like a digital camera or some other peripheral.

I gave my old DSLR camera new life with gphoto2 by turning it into a webcam for my Linux computer.

Image by:

Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0


Discover the power of the Linux SpaceFM file manager

sethkenlon, Mon, 12/19/2022

SpaceFM is a tabbed file manager for Linux using the GTK toolkit, so it fits right in on desktops like GNOME, Mate, Cinnamon, and others. SpaceFM also features a built-in device manager system, so it's particularly good for window managers, like Fluxbox or fvwm, which typically don't include a graphical device manager. If you're happy with the file managers on Linux, but you want to try one that's a little bit different in design, SpaceFM is worth a look.

Install SpaceFM

On Linux, you're likely to find SpaceFM in your distribution's software repository. On Fedora, Mageia, OpenMandriva, and similar:

$ sudo dnf install spacefm

On Debian and Debian-based systems:

$ sudo apt install spacefm

Panels

I don't know why SpaceFM is called SpaceFM, but it could be because it makes a concerted effort to let you use every bit of space in its window for something useful. By default, SpaceFM is actually a pretty simple, standard-issue file manager. It has a single panel listing your files, a toolbar, and a menu bar.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

All the "usual" rules apply.

  • Double-click to open a directory or to open a file in its default application.

  • Right-click for a contextual menu providing lots of standard options (copy, paste, rename, view properties, create a new folder, and so on).

The way SpaceFM sets itself apart, though, is its panel system. SpaceFM displays one panel by default. That's the big file window listing your files. But it can have up to four panel views, plus a few bonus panels for some specific tasks.

Opening a new panel

Instead of seeing one directory in your file manager, you can see two. To bring up another directory in its own pane, press Ctrl+2 or go to the View menu and select Panel 2. Alternatively, click the second green dot icon from the left in the menu panel.

With two panels, you can move files from one directory to another without opening a new file manager window, or you can browse two directories to compare their contents.

But why settle for two panels? Maybe you'd rather see three directories at once. To bring up a third directory in a dedicated pane, press Ctrl+3 or go to the View menu and select Panel 3. Alternatively, click the third green dot icon from the left in the menu panel. This panel appears at the bottom of the SpaceFM window.

With three panels open, you can move files between several directories, or sort files from a common "dumping ground" (like your Desktop or Downloads folder) into specific directories.

Of course, once you've tried three panels you'll probably find yourself itching for a fourth. To open a fourth directory in its own pane, press Ctrl+4 or go to the View menu and select Panel 4. Alternatively, click the fourth green dot icon from the left in the menu panel. This one opens next to Panel 3, splitting your SpaceFM window into even quarters.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

What about a fifth panel? Well, actually SpaceFM stops at four panels. If you really do want a fifth panel, you have to open a new SpaceFM window. However, there are still more panels, used for information other than file listings, to explore.

Special panels

The View menu reveals that in addition to file panels, there are also task-specific panels you can choose to display. These include:

  • Task manager: Lists ongoing file manager processes. This isn't a general-purpose task manager, so to set nice values or detect a zombie apocalypse of undead PIDs, htop or top is still your utility of choice.

  • Bookmarks: Links to common folders, such as Desktop, Documents, Downloads, and any location you want to keep handy.

  • Devices: USB thumb drives and remote file systems.

  • File tree: A view of your file system in order of directory inheritance.

These panels open on the left side of SpaceFM, but they do stack. You can have bookmarks, devices, tasks, and a file tree open at once, although it helps to have a very tall SpaceFM window.

Make space for SpaceFM

SpaceFM is a configurable multi-tasking file manager. It maximizes the information you can build into a single window, and it lets you decide what's important, and when. This article has focused on the panels of SpaceFM because those are, at least in my view, the most unique aspect of the application. However, there's a lot more to SpaceFM, including plugins, preferences, a design mode, keyboard shortcuts, and customization. This isn't a small application, even though it is a lightweight one. Spend some time with SpaceFM, because you never know what you'll discover.

If you're happy with the file managers on Linux, but you want to try one that's a little bit different in design, SpaceFM is worth a look.

Image by:

Opensource.com


Use my Groovy color wheel calculator

clhermansen, Sun, 12/18/2022

Every so often, I find myself needing to calculate complementary colors. For example, I might be making a line graph in a web app or bar graphs for a report. When this happens, I want to use complementary colors to have the maximum "visual difference" between the lines or bars.

Online calculators can be useful for calculating two or maybe three complementary colors, but sometimes I need a lot more, perhaps 10 or 15.

Many online resources explain how to do this and offer formulas, but I think it's high time for a Groovy color calculator. So please follow along. First, you might need to install Java and Groovy.

Install Java and Groovy

Groovy is based on Java and requires a Java installation as well. Recent versions of both Java and Groovy may be in your Linux distribution's repositories, or you can install Groovy by following the instructions at the link above.

A nice alternative for Linux users is SDKMan, which can install multiple versions of Java, Groovy, and many other related tools. For this article, I'm using SDKMan's releases of:

  • Java: version 11.0.12-open of OpenJDK 11
  • Groovy: version 3.0.8

Using a color wheel

Before you start coding, look at a real color wheel. If you open GIMP (the GNU Image Manipulation Program) and look on the upper left-hand part of the screen, you'll see the controls to set the foreground and background colors, circled in red on the image below:

Image by:

(Chris Hermansen, CC BY-SA 4.0)

If you click on the upper left square (the foreground color), a window will open that looks like this:

Image by:

(Chris Hermansen, CC BY-SA 4.0)


If it doesn't quite look like that, click on the fourth from the left button on the top left row, which looks like a circle with a triangle inscribed in it.

The ring around the triangle represents a nearly continuous range of colors. In the image above, starting from the triangle pointer (the black line that interrupts the circle on the left), the colors shade from blue into cyan into green, yellow, orange, red, magenta, violet, and back to blue. This is the color wheel. If you pick two colors opposite each other on that wheel, you will have two complementary colors. If you choose 17 colors evenly spaced around that wheel, you'll have 17 colors that are as distinct as possible.

Make sure you have selected the HSV button in the top right of the window, then look at the sliders marked H, S, and V, respectively. These are hue, saturation, and value. When choosing contrasting colors, the hue is the interesting parameter.

Its value runs from zero to 360 degrees; in the image above, it's 192.9 degrees.

You can use this color wheel to calculate the complement of a color manually: just add 180 to your color's hue, giving you 372.9. Next, subtract 360, leaving 12.9 degrees. Type 12.9 into the H box, replacing the 192.9, and poof, you have its complementary color:
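The wrap-around arithmetic is easy to sanity-check in a one-liner; here's a quick sketch using the hue from the GIMP example:

```shell
# Complementary hue: add 180 degrees and wrap at 360.
awk 'BEGIN { h = 192.9; printf "%.1f\n", (h + 180) % 360 }'
# prints 12.9
```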

Image by:

(Chris Hermansen, CC BY-SA 4.0)

If you inspect the text box labeled HTML notation, you'll see that the color you started with was #0080a3 and its complement is #a32300. Look at the fields marked Current and Old to see the two colors complementing each other.

There is a most excellent and detailed article on Wikipedia explaining HSL (hue, saturation, and lightness) and HSV (hue, saturation, and value) color models and how to convert between them and the RGB standard most of us know.

I'll automate this in Groovy. Because you might want to use this in various ways, create a Color class that provides constructors to create an instance of Color and then several methods to query the color of the instance in HSV and RGB.

Here's the Color class, with an explanation following:

     1  /**
     2   *  This class based on the color transformation calculations
     3   *  in https://en.wikipedia.org/wiki/HSL_and_HSV
     4   *
     5   *  Once an instance of Color is created, it can be transformed
     6   *  between RGB triplets and HSV triplets and converted to and
     7   *  from hex codes.
     8   */
       
     9  public class Color {
       
    10      /**
    11       * May as well keep the color as both RGB and HSL triplets
    12       * Keep each component as double to avoid as many rounding
    13       * errors as possible.
    14       */
       
    15      private final Map rgb // keys 'r','g','b'; values 0-1,0-1,0-1 double
    16      private final Map hsv // keys 'h','s','v'; values 0-360,0-1,0-1 double
       
    17      /**
    18       * If constructor provided a single int, treat it as a 24-bit RGB representation
    19       * Throw exception if not a reasonable unsigned 24 bit value
    20       */
       
    21      public Color(int color) {
    22          if (color < 0 || color > 0xffffff) {
    23              throw new IllegalArgumentException('color value must be between 0x000000 and 0xffffff')
    24          } else {
    25              this.rgb = [r: ((color & 0xff0000) >> 16) / 255d, g: ((color & 0x00ff00) >> 8) / 255d, b: (color & 0x0000ff) / 255d]
    26              this.hsv = rgb2hsv(this.rgb)
    27          }
    28      }
       
    29      /**
    30       * If constructor provided a Map, treat it as:
    31       * - RGB if map keys are 'r','g','b'
    32       *   - Integer and in range 0-255 ⇒ scale
    33       *   - Double and in range 0-1 ⇒ use as is
    34       * - HSV if map keys are 'h','s','v'
    35       *   - Integer and in range 0-360,0-100,0-100 ⇒ scale
    36       *   - Double and in range 0-360,0-1,0-1 ⇒ use as is
    37       * Throw exception if not according to above
    38       */
       
    39      public Color(Map triplet) {
    40          def keySet = triplet.keySet()
    41          def types = triplet.values().collect { it.class }
    42          if (keySet == ['r','g','b'] as Set) {
    43              def minV = triplet.min { it.value }.value
    44              def maxV = triplet.max { it.value }.value
    45              if (types == [Integer,Integer,Integer] && 0 <= minV && maxV <= 255) {
    46                  this.rgb = [r: triplet.r / 255d, g: triplet.g / 255d, b: triplet.b / 255d]
    47                  this.hsv = rgb2hsv(this.rgb)
    48              } else if (types == [Double,Double,Double] && 0d <= minV && maxV <= 1d) {
    49                  this.rgb = triplet
    50                  this.hsv = rgb2hsv(this.rgb)
    51              } else {
    52                  throw new IllegalArgumentException('rgb triplet must have integer values between (0,0,0) and (255,255,255) or double values between (0,0,0) and (1,1,1)')
    53              }
    54          } else if (keySet == ['h','s','v'] as Set) {
    55              if (types == [Integer,Integer,Integer] && 0 <= triplet.h && triplet.h <= 360
    56              && 0 <= triplet.s && triplet.s <= 100 && 0 <= triplet.v && triplet.v <= 100) {
    57                  this.hsv = [h: triplet.h as Double, s: triplet.s / 100d, v: triplet.v / 100d]
    58                  this.rgb = hsv2rgb(this.hsv)
    59              } else if (types == [Double,Double,Double] && 0d <= triplet.h && triplet.h <= 360d
    60              && 0d <= triplet.s && triplet.s <= 1d && 0d <= triplet.v && triplet.v <= 1d) {
    61                  this.hsv = triplet
    62                  this.rgb = hsv2rgb(this.hsv)
    63              } else {
    64                  throw new IllegalArgumentException('hsv triplet must have integer values between (0,0,0) and (360,100,100) or double values between (0,0,0) and (360,1,1)')
    65              }
    66          } else {
    67              throw new IllegalArgumentException('triplet must be a map with keys r,g,b or h,s,v')
    68          }
    69      }
       
    70      /**
    71       * Get the color representation as a 24 bit integer which can be
    72       * rendered in hex in the familiar HTML form.
    73       */
       
    74      public int getHex() {
    75          (Math.round(this.rgb.r * 255d) << 16) +
    76          (Math.round(this.rgb.g * 255d) << 8) +
    77          Math.round(this.rgb.b * 255d)
    78      }
       
    79      /**
    80       * Get the color representation as a map with keys r,g,b
    81       * and the corresponding double values in the range 0-1
    82       */
       
    83      public Map getRgb() {
    84          this.rgb
    85      }
       
    86      /**
    87       * Get the color representation as a map with keys r,g,b
    88       * and the corresponding int values in the range 0-255
    89       */
       
    90      public Map getRgbI() {
    91          this.rgb.collectEntries {k, v -> [(k): Math.round(v*255d)]}
    92      }
       
    93      /**
    94       * Get the color representation as a map with keys h,s,v
    95       * and the corresponding double values in the ranges 0-360,0-1,0-1
    96       */
       
    97      public Map getHsv() {
    98          this.hsv
    99      }
       
   100      /**
   101       * Get the color representation as a map with keys h,s,v
   102       * and the corresponding int values in the ranges 0-360,0-100,0-100
   103       */
       
   104      public Map getHsvI() {
   105          [h: Math.round(this.hsv.h), s: Math.round(this.hsv.s*100d), v: Math.round(this.hsv.v*100d)]
   106      }
       
   107      /**
   108       * Internal routine to convert an RGB triple to an HSV triple
   109       * Follows the Wikipedia section https://en.wikipedia.org/wiki/HSL_and_HSV#Hue_and_chroma
   110       * (almost) - note that the algorithm given there does not adjust H for G < B
   111       */
       
   112      private static def rgb2hsv(Map rgbTriplet) {
   113          def max = rgbTriplet.max { it.value }
   114          def min = rgbTriplet.min { it.value }
   115          double c = max.value - min.value
   116          if (c) {
   117              double h
   118              switch (max.key) {
   119              case 'r': h = ((60d * (rgbTriplet.g - rgbTriplet.b) / c) + 360d) % 360d; break
   120              case 'g': h = ((60d * (rgbTriplet.b - rgbTriplet.r) / c) + 120d) % 360d; break
   121              case 'b': h = ((60d * (rgbTriplet.r - rgbTriplet.g) / c) + 240d) % 360d; break
   122              }
   123              double v = max.value // hexcone model
   124              double s = max.value ? c / max.value : 0d
   125              [h: h, s: s, v: v]
   126          } else {
   127              [h: 0d, s: 0d, v: 0d]
   128          }
   129      }
       
   130      /**
   131       * Internal routine to convert an HSV triple to an RGB triple
   132       * Follows the Wikipedia section https://en.wikipedia.org/wiki/HSL_and_HSV#HSV_to_RGB
   133       */
       
   134      private static def hsv2rgb(Map hsvTriplet) {
   135          double c = hsvTriplet.v * hsvTriplet.s
   136          double hp = hsvTriplet.h / 60d
   137          double x = c * (1d - Math.abs(hp % 2d - 1d))
   138          double m = hsvTriplet.v - c
   139          if (hp < 1d)      [r: c  + m, g: x  + m, b: 0d + m]
   140          else if (hp < 2d) [r: x  + m, g: c  + m, b: 0d + m]
   141          else if (hp < 3d) [r: 0d + m, g: c  + m, b: x  + m]
   142          else if (hp < 4d) [r: 0d + m, g: x  + m, b: c  + m]
   143          else if (hp < 5d) [r: x  + m, g: 0d + m, b: c  + m]
   144          else if (hp < 6d) [r: c  + m, g: 0d + m, b: x  + m]
   145      }
       
   146  }

The Color class definition, which begins on line 9 and ends on line 146, looks a lot like a Java class definition (at first glance, anyway) that would do the same thing. But this is Groovy, so you have no imports up at the beginning, just comments. Plus, the details illustrate some more Groovyness.

Line 15 creates the private final variable rgb that contains the color value supplied to the class constructor. You'll keep this value as Map with keys r, g, and b to access the RGB values. Keep the values as double values between 0 and 1 so that 0 would indicate a hexadecimal value of #00 or an integer value of 0 and 1 would mean a hexadecimal value of #ff or an integer value of 255. Use double to avoid accumulating rounding errors when converting inside the class.

Similarly, line 16 creates the private final variable hsv that contains the same color value but in HSV format: also a Map, but with keys h, s, and v to access the HSV values, which will be kept as double values between 0 and 360 (hue) and 0 and 1 (saturation and value).

Lines 21-28 define a Color constructor to be called when passing in an int argument. For example, you might use this as:

def blue = new Color(0x0000ff)

  • On lines 22-23, check to make sure the argument passed to the constructor is in the allowable range for a 24-bit integer RGB constructor, and throw an exception if not.
  • On line 25, initialize the rgb private variable as the desired RGB Map, using bit shifts and dividing each by a double value 255 to scale the numbers between 0 and 1.
  • On line 26, convert the RGB triplet to HSV and assign it to the hsv private variable.

Lines 39-69 define another Color constructor to be called when passing in either an RGB or HSV triple as a Map. You might use this as:

def green = new Color([r: 0, g: 255, b: 0])

or

def cyan = new Color([h: 180, s: 100, v: 100])

Or similarly with double values scaled between 0 and 1 instead of integers between 0 and 255 in the RGB case and between 0 and 360, 0 and 1, and 0 and 1 for hue, saturation, and value, respectively.

This constructor looks complicated, and in a way, it is. It checks the keySet() of the map argument to decide whether it denotes an RGB or HSV tuple. It checks the class of the values passed in to determine whether the values are to be interpreted as integers or double values and, therefore, whether they are scaled into 0-1 (or 0-360 for hue).

Arguments that can't be sorted out using this checking are deemed incorrect, and an exception is thrown.

Worth noting is the handy streamlining provided by Groovy:

def types = triplet.values().collect { it.class }

This uses the values() method on the map to get the values as a List and then the collect() method on that List to get the class of each value so that they can later be checked against [Integer,Integer,Integer] or [Double,Double,Double] to ensure that arguments meet expectations.

Here is another useful streamlining provided by Groovy:

def minV = triplet.min { it.value }.value

The min() method is defined on Map; it iterates over the Map and returns the MapEntry (a key-value pair) having the minimum value encountered. The .value on the end selects the value field from that MapEntry, which gives something to check against later to determine whether the values need to be normalized.

Both rely on the Groovy Closure, similar to a Java lambda: a kind of anonymous procedure defined where it is called. For example, collect() takes a single Closure argument and calls it on each element encountered, which is available as the implicit parameter it within the closure body. Also, the various implementations of the Groovy Collection interface, including Map here, define the collect() and min() methods that iterate over the elements of the Collection and call the Closure argument. Finally, the syntax of Groovy supports compact, low-ceremony invocations of these various features.

Lines 70-106 define five "getters" that return the color used to create the instance in one of five formats:

  1. getHex() returns an int corresponding to a 24-bit HTML RGB color.
  2. getRgb() returns a Map with keys r, g, b and corresponding double values in the range 0-1.
  3. getRgbI() returns a Map with keys r, g, b and corresponding int values in the range 0-255.
  4. getHsv() returns a Map with keys h, s, v and corresponding double values in the range 0-360, 0-1 and 0-1, respectively.
  5. getHsvI() returns a Map with keys h, s, v and corresponding int values in the range 0-360, 0-100 and 0-100, respectively.

Lines 112-129 define a static private (internal) method rgb2hsv() that converts an RGB triplet to an HSV triplet. This follows the algorithm described in the Wikipedia article section on hue and chroma, except that the algorithm there yields negative hue values when the green value is less than the blue value, so this version is modified slightly. This code isn't particularly Groovy, other than using the max() and min() Map methods and returning a Map instance declaratively without a return statement.
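You can spot-check the hue branch of that algorithm outside Groovy. For #0080a3 (r=0, g=128, b=163), the blue channel is the max, so the 'b' case applies; a quick sketch:

```shell
# h = 60*(r-g)/c + 240, with c = max - min = 163 - 0
awk 'BEGIN { r = 0; g = 128; b = 163; c = b - r; printf "%.2f\n", (60 * (r - g) / c + 240) % 360 }'
# prints 192.88, matching the hue the Groovy code later reports for #0080a3
```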

The constructors use this method to convert RGB triplet arguments into HSV triplets. Since it doesn't refer to any instance fields, it is static.

Similarly, lines 134-145 define another private (internal) method, hsv2rgb(), which converts an HSV triplet to an RGB triplet, following the algorithm described in the Wikipedia article section on HSV-to-RGB conversion. The constructor uses this method to convert HSV triplet arguments into RGB triplets. Since it doesn't refer to any instance fields, it is static.

That's it. Here's an example of how to use this class:

     1  def favBlue = new Color(0x0080a3)
       
     2  def favBlueRgb = favBlue.rgb
     3  def favBlueHsv = favBlue.hsv
       
     4  println "favBlue hex = ${sprintf('0x%06x',favBlue.hex)}"
     5  println "favBlue rgbt = ${favBlue.rgb}"
     6  println "favBlue hsvt = ${favBlue.hsv}"
       
     7  int spokeCount = 8
     8  double dd = 360d / spokeCount
     9  double d = favBlue.hsv.h
    10  for (int spoke = 0; spoke < spokeCount; spoke++) {
    11      def color = new Color(h: d, s: favBlue.hsv.s, v: favBlue.hsv.v)
    12      println "spoke $spoke $d° hsv ${color.hsv}"
    13      println "    hex ${sprintf('0x%06x',color.hex)} hsvI ${color.hsvI} rgbI ${color.rgbI}"
    14      d = (d + dd) % 360d
    15  }

As my starting value, I've chosen the lighter blue from the opensource.com header #0080a3, and I'm printing a set of seven more colors that give maximum separation from the original blue. I call each position going around the color wheel a spoke and compute its position in degrees in the variable d, which is incremented each time through the loop by the number of degrees dd between each spoke.
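The spoke arithmetic itself is simple enough to check independently of the Color class: start from the hue of #0080a3 and step by 360/8 = 45 degrees, wrapping at 360. A sketch of just the hue sequence:

```shell
# Eight evenly spaced hues around the wheel, starting from ~192.88 degrees.
awk 'BEGIN {
  h = 192.88343558282207; dd = 360 / 8
  for (i = 0; i < 8; i++) { printf "spoke %d %.2f\n", i, h; h = (h + dd) % 360 }
}'
```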

As long as Color.groovy and this test script are in the same directory, you can compile and run them as follows:

$ groovy test1Color.groovy
favBlue hex = 0x0080a3
favBlue rgbt = [r:0.0, g:0.5019607843137255, b:0.6392156862745098]
favBlue hsvt = [h:192.88343558282207, s:1.0, v:0.6392156862745098]
spoke 0 192.88343558282207° hsv [h:192.88343558282207, s:1.0, v:0.6392156862745098]
    hex 0x0080a3 hsvI [h:193, s:100, v:64] rgbI [r:0, g:128, b:163]
spoke 1 237.88343558282207° hsv [h:237.88343558282207, s:1.0, v:0.6392156862745098]
    hex 0x0006a3 hsvI [h:238, s:100, v:64] rgbI [r:0, g:6, b:163]
spoke 2 282.8834355828221° hsv [h:282.8834355828221, s:1.0, v:0.6392156862745098]
    hex 0x7500a3 hsvI [h:283, s:100, v:64] rgbI [r:117, g:0, b:163]
spoke 3 327.8834355828221° hsv [h:327.8834355828221, s:1.0, v:0.6392156862745098]
    hex 0xa30057 hsvI [h:328, s:100, v:64] rgbI [r:163, g:0, b:87]
spoke 4 12.883435582822074° hsv [h:12.883435582822074, s:1.0, v:0.6392156862745098]
    hex 0xa32300 hsvI [h:13, s:100, v:64] rgbI [r:163, g:35, b:0]
spoke 5 57.883435582822074° hsv [h:57.883435582822074, s:1.0, v:0.6392156862745098]
    hex 0xa39d00 hsvI [h:58, s:100, v:64] rgbI [r:163, g:157, b:0]
spoke 6 102.88343558282207° hsv [h:102.88343558282207, s:1.0, v:0.6392156862745098]
    hex 0x2fa300 hsvI [h:103, s:100, v:64] rgbI [r:47, g:163, b:0]
spoke 7 147.88343558282207° hsv [h:147.88343558282207, s:1.0, v:0.6392156862745098]
    hex 0x00a34c hsvI [h:148, s:100, v:64] rgbI [r:0, g:163, b:76]

You can see the degree position of the spokes reflected in the HSV triple. I've also printed the hex RGB value and the int version of the RGB and HSV triples.
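The int forms are simple scalings of the float triples. Here's how I'd reconstruct them in Python; note that the exact rounding and the hsvI/rgbI conversions are my inference from the printed output, not the article's code:

```python
# Float triples for #0080a3, taken from the program's output
h, s, v = 192.88343558282207, 1.0, 0.6392156862745098
r, g, b = 0.0, 0.5019607843137255, 0.6392156862745098

# hsvI scales hue to whole degrees and s, v to percentages
hsv_i = {"h": round(h), "s": round(s * 100), "v": round(v * 100)}
# rgbI scales each channel to 0..255
rgb_i = {"r": round(r * 255), "g": round(g * 255), "b": round(b * 255)}
# The hex value packs the three 8-bit channels into one integer
hex_value = (rgb_i["r"] << 16) | (rgb_i["g"] << 8) | rgb_i["b"]
```

Under those assumptions, the triple above reduces to hsvI [h:193, s:100, v:64], rgbI [r:0, g:128, b:163], and hex 0x0080a3, matching the first line of output.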

I could have built this in Java. Had I done so, I probably would have created separate RgbTriple and HsvTriple helper classes, because Java doesn't provide Groovy's declarative syntax for Map. That would have made finding the min and max values more verbose. So, as usual, the Java version would have been lengthier without improving readability. There would have been three distinct constructors, though, which might be a more straightforward proposition.

I could have used 0-1 for the hue as I did for saturation and value, but somehow I like 0-360 better.

Finally, I could have added, and may still do so one day, other conversions, such as HSL.

Wrap up

Color wheels are useful in many situations and building one in Groovy is a great exercise to learn both how the wheel works and the, well, grooviness of Groovy. Take your time; the code above is long. However, you can build your own practical color calculator and learn a lot along the way.

Groovy resources

The Apache Groovy language site provides a good tutorial-level overview of working with the Collection classes, particularly Map. The documentation is quite concise and easy to follow, at least partly because the facility it documents was designed to be concise and easy to use!


Image by:

Lisa Padilla. Modified by Opensource.com. CC BY-SA 4.0

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Try this Python-based file manager on Linux

Sun, 12/18/2022 - 16:00
sethkenlon

Dragonfly Navigator is a general-purpose file manager written in Python and Qt. It's easy to install, easy to use, and a great example of what Python can do.

Python is a popular language for several reasons, but I think one of its primary strengths is that it's equally useful to beginner-level programmers and to experienced coders. There's something exciting about a language you can take from drawing basic geometric shapes to scraping the web to programming a zombie apocalypse video game, or writing desktop applications you can use every day. And that's what Dragonfly Navigator is: a desktop utility that everyone can use.

Installing Dragonfly Navigator

To install Dragonfly Navigator, first download the source code from its Git repository. If you're on Debian Linux or similar, download the .deb file. If you're using Fedora, CentOS, Mageia, OpenMandriva, or similar, then download the .tar.gz file.

Dragonfly Navigator has a few dependencies. Because you aren't installing it through your package manager, it's up to you to resolve those. There are just two, so use your package manager (dnf or apt) to find and install them:

  • PyQt5, also called python-qt5

  • Python PIL, also called pillow

Launching Dragonfly Navigator

To launch Dragonfly Navigator, either install the .deb file (on Debian-based systems) or unarchive the .tar.gz file:

$ tar xvf dragonfly*gz

On Debian-based systems, Dragonfly Navigator appears in your application menu. On other systems, you must launch it manually unless you install it yourself.

For now, I'm not installing it, so I launch it manually:

$ cd dragonfly
$ ./dragonfly

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Dual pane

Dragonfly Navigator is a two-panel file manager, meaning that it's always showing you two directories. At launch, both directories happen to be your home directory. You can browse through files and folders in either panel. They function exactly the same, and it only matters which panel you're "in" when you start copying or moving files.

Open a directory

To open a directory, double-click it. By default, the directory opens in that same pane. If you want to utilize the two-panel layout, though, hold down the Ctrl key as you double-click to display its contents in the other panel.

Open a file

To open a file, double-click or right-click on it.

Yes, you can right-click a file to open it. That takes some getting used to if you're accustomed to a right-click bringing up a contextual menu. There is no contextual menu in Dragonfly Navigator, though, and you may be surprised at how much time you save by reducing the very common action of opening a file to a single click. It may seem silly now, but trust me, you'll grow to cherish it.

Quick preview

Some files are available for a quick preview so you don't have to open them in any particular application. To preview a file, hover your mouse over it and press the Alt key on your keyboard. A preview appears in the opposite panel.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Copying and moving files

To copy or move a file from one directory to another (or a directory into another directory), there are a few steps.

  1. In one panel, navigate to the destination directory. This is the location you want to copy a file to.

  2. In the other panel, select the file you want to copy.

  3. Click the Copy button in the middle strip of Dragonfly Navigator.

For moving a file, follow the same steps but click the Move button instead.

If you're not used to a dual-panel file manager, this feels unfamiliar at first. But if you think about it, there are several steps required to copy a file in your usual file manager (find the file, open another window, drag-and-drop, and so on). After you do it a few times, it becomes second nature.

Selecting files

Normally, you click a file or folder to make it your active selection. That's probably no different from your current file manager, or at least from some file manager you've used in the past.

To select multiple items in a range, click one file, and then hold the Shift key and click another file. All items between the two files you clicked are also selected.

To select multiple arbitrary files, hold the Ctrl key and click on the files you want selected.

The power of Qt and Python

The Qt toolkit is a powerful programming utility, and Python is capable of creating great applications with it. I've only covered the basics of Dragonfly Navigator in this article, so download it, read the docs, click around, explore it, and maybe you'll have found a fun new file manager.

Dragonfly Navigator is a general-purpose file manager written in Python and Qt.

Image by:

Yuko Honda on Flickr. CC BY-SA 2.0


How I wrote an open source video game for Open Jam in a weekend

Sat, 12/17/2022 - 16:00
Jim Hall

Every year, Itch.io hosts Open Jam, a game jam where developers build an open source video game over a weekend. This year's Open Jam ran from October 28th to October 31st.

Open Jam is a friendly competition with no prizes, which makes it a great opportunity to try new things, experiment with a new game idea, or learn a new programming language. While projects don't necessarily need to be built with open source tools, the game submission needs to have an open source license. Entries in Open Jam get "karma" or bonus points for how open source the game is, such as how many open source tools were used to create it or whether it runs on an open source operating system.

Each Open Jam has a specific theme, and this year's theme was "Light in the Darkness." It's up to each developer to interpret how to apply that theme to their own game. I entered the Open Jam with a game called the Toy CPU, a simulation of a simple computer that you program using "switches and lights," similar to an old-style Altair 8800 or IMSAI 8080.

Image by:

(Jim Hall, CC BY-SA 4.0)

The Toy CPU did well in the competition, ranking second out of the six entries submitted to Open Jam. While voting was light this year, it was still pretty cool to see the game do so well.

Writing the game over a weekend for Open Jam was a lot of fun. Looking back on the experience, I wanted to share three lessons about how to write a game in such a short time. These lessons apply to developing any kind of program, not just games.

Manage the scope

A few days isn't a lot of time to write a new game. To be successful, you need to be realistic about how much you can really accomplish in that limited time. What features you can realistically implement will affect the design and goals of the game.

Keep things simple to manage the scope. A narrow focus will help you to complete the game by the deadline. Avoid the temptation to add new features until you've completed the original goals.

It takes planning

Open Jam doesn't announce the theme ahead of time, so you need to wait until the jam starts before you can figure out what you want to do. Once the theme is clear, take some time to consider what you want to build and what your goals should be.

This is the same for any project. Before you can map out a plan, you need to know where you're going. The goals of the project help define what you need to do to get there.

Experiment with prototypes

The Open Jam was not the first time I'd written the Toy CPU, although it was the first time I'd written a complete version of it. I have an interest in retrocomputing, and several months ago I wrote a prototype of a similar computer, although it lacked the ability to enter a program using the "lights and switches" model.

This rough prototype was enough to inform me how I might write a more complete Toy CPU — and in fact, I later updated the prototype to run on Linux using ncurses. This "version 2" prototype helped me to figure out how a user might interact with the Toy CPU to enter a program using the "lights and switches."

I planned to rewrite the Toy CPU as a graphics-mode FreeDOS program, but never found the time. When Open Jam announced the "Light in the Darkness" theme, I realized this was a perfect opportunity to rewrite a completely new version of the Toy CPU, building on what I'd learned during prototyping.

You can find the complete source code to the Toy CPU on my GitHub repository. The Toy CPU is open source software under the MIT license.

I made a video game inspired by old-style Altair 8800 for this year's Open Jam.

Image by:

Cicada Strange on Flickr, CC BY-SA 2.0


Travel back in time with the mc file manager on Linux

Sat, 12/17/2022 - 16:00
sethkenlon

In the late 1980s and throughout the 1990s, there was a popular file manager for DOS called Norton Commander. It was beloved by many computer users of the day, but it fell out of favor as graphical file managers became the default. Fortunately for fans of the original commander, and those who missed out on the original, an open source file manager with a similar design was released, called Midnight Commander or, more commonly, just mc.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

The mc file manager exists in a terminal, but it feels like a modern keyboard-driven application with intuitive actions and easy navigation. It starts with an efficient design. Most file management tasks involve a source location and a destination, so it makes sense that your file manager has a persistent view of one location where your files are now and another location where you want your files to be. If you try it for a while, you do start to wonder why that's not the default configuration of every file manager, especially when you consider how much wasted horizontal space there often is in the typical file listing.

3 essential commands for the mc file manager

There are only three things you need to know to get started with mc:

  • Tab switches between panels.

  • Arrows do what you think they do. Up and Down select, Left goes back, and Right descends into the selected folder.

  • Ctrl+O (that's the letter "o", not the number zero) toggles between the mc interface and a full terminal.

Like GNU Nano, all the most common actions for mc are listed at the bottom of the terminal window. Each action is assigned to a function key (F1 to F10), and any action you perform applies to whatever you have currently selected in your active pane.

Using mc

Launch mc from a terminal:

$ mc

Your terminal is now the mc interface, and by default, it lists the contents of your current directory.

Open a file

One of the reasons you use a file manager is to find a file and then open it. Your desktop already has default applications set, and mc inherits these preferences (or most of them), so press Return to open a file in its default application. There are exceptions to mc's behavior when opening a file. For instance, a text file doesn't open by default in a graphical text editor, because mc expects you to use its internal editor (F4) instead. Images and videos and other binary files, however, default to your desktop settings.

Should you need to open a file in something other than its default application, press F2 and select Do something on the current file (or just press @) and type in the name of the application you prefer to launch.

For instance, say you have a file called zombie-apocalypse.txt and you want to edit it specifically in Emacs:

  1. Use the arrow keys to select zombie-apocalypse.txt
  2. Press F2 and then @
  3. Type emacs

You don't have to specify which file you want to open in Emacs, because mc runs the command you type on the file you have selected.

Copy or move a file

To copy or move a file, select it from the file list and press the F5 key. By default, mc prompts you to copy (or move) your active selection to the location shown in the non-active panel. A dialogue box is provided, though, so you can manually enter either the source or the destination if you change your mind after starting the operation.

Selecting files

Your current position in a file list is also your current and active selection. To select more than one file at a time, hold the Shift key and move your selection up or down through the files you want to include. Items in your selection are indicated with a color different from the other files listed. What color mc uses depends on your color scheme.

You can deselect just one file from the middle of a selected block by moving to that item and pressing Shift and Up or Down.

Menu

There are just ten actions listed at the bottom of the mc interface, but it can do a lot more than that. Press F9 to activate the top menu, using the arrow keys to navigate each menu. From the File menu, for instance, you can create symlinks, change file modes and permissions, create new directories, and more.

Additionally, you can press F2 on any selection for a contextual menu, allowing you to create compressed archives, append a file to another one, view man pages, copy files to a remote host, and more.

Cancel an action

When you find yourself backed into a corner and in need of a panic button, use the Esc key.

Install mc

On Linux, you're likely to find mc in your Linux distribution's software repository. On Fedora, CentOS, Mageia, OpenMandriva, and similar:

$ sudo dnf install mc

On Debian and Debian-based systems:

$ sudo apt install mc

On macOS, use Homebrew or MacPorts.

Take mc for a spin. You might discover a new favorite way to use your Linux terminal!

The Midnight Commander file manager exists in a Linux terminal, but it feels like a modern keyboard-driven application with intuitive actions and easy navigation.


A new generation of tools for open source vulnerability management

Fri, 12/16/2022 - 16:00
vdanen

Product security incident response teams (PSIRTs) are teams of security professionals who work diligently behind the scenes to protect a company's software products and services. A PSIRT is a different breed than a computer security incident response team (CSIRT), the kind of team usually known as Information Security. The difference is simple but stark: a CSIRT focuses on responding to incidents that affect a company's infrastructure, data, or users. A PSIRT focuses on responding to incidents that affect the products a company builds, most commonly the discovery of a vulnerability or security defect, and the subsequent actions to manage or remediate it.

Tools for PSIRT

I've been a part of a PSIRT for over 20 years, first as the leader of Mandriva's PSIRT (although we didn't call it that then) and currently at Red Hat. While it's changing somewhat today, there were never many tools for a PSIRT to use, compared to the plethora of tools available to CSIRTs. Sure, we have static code analysis (SCA), static application security testing (SAST), and dynamic application security testing (DAST) tools to identify known and unknown vulnerabilities in our products. But there was never a great way to manage the data around those vulnerabilities, so most PSIRTs rely on homegrown tooling or piggyback on existing tools that weren't meant for that use.

For example, when I started at Red Hat nearly 14 years ago, I used the Bugzilla instance directly to track and file bugs and vulnerability information. Back then it was quite simple: there was a CVE bug containing the details of a vulnerability, and we created child bugs for product teams to track to remediation. This worked when we only had to worry about OpenShift and the JBoss Enterprise Application Platform. As we began to develop and support more products, we found that doing this manually didn't scale with such a small team. We wrote a series of Python scripts, fondly referred to as the security flaw manager (SFM), that manipulated Bugzilla through API calls to create bugs, add comments, and record other metadata: when a flaw was reported or made public, its impact and scoring metrics, which products were affected, and other useful data. None of this was properly supported by the bug tracking system; it was instead stuffed into other fields in custom formats, prone to human meddling. While rudimentary, these scripts did what we needed them to do, for a time. But as we wanted to collect more metadata, and had ever more products to support, SFM felt a little long in the tooth. After all, who wants to do all of this work on the command line?

A number of years ago, we endeavored to create a new tool. We developed SFM2, a single web-based application that did what SFM did and more. It had better search, which helped with the ever-growing number of CVEs we had to track and deal with. It provided better quality checks, ensuring we didn't miss anything as dealing with more vulnerabilities and more products became ever more complicated. We knew this was something that other PSIRTs might be interested in, and for some time we held out hope of modularizing it and making it open and available. But it was still bound to specific Bugzilla customizations, which made it difficult or impossible for anyone other than us to use.

The evolution of SFM2

This was quite frustrating, as we had effectively developed innersource, a term coined by Tim O'Reilly over two decades ago. Everything was written in open languages and built in an open source way, but we couldn't share it: no one could benefit from our work, nor could we benefit from others' experience and input. We knew there were other companies out there dealing with more complexity in their products and, now, managed or hosted services. As a leader in managing open source vulnerabilities, our team had some excellent tooling we couldn't share with anyone because we had inadvertently allowed feature creep and ties to custom tooling to get in the way.

So last year we took a look again at the problem. SFM2 was not designed in a way that allowed us to maintain it well, and there were some other deficiencies that we needed to correct — but we had hit a wall. We needed different capabilities, and the tooling was designed for a very specific way of working that needed to change for efficiency and scale. And using Bugzilla as a backend database, which worked well enough a decade ago, was no longer ideal. In fact it was the single biggest hindrance we had.

What we needed was not a monolithic application but a set of smaller services that worked well together through APIs. The way I explained it when we were conceptualizing this a year ago was the difference between the sendmail and qmail email servers. Sendmail was a single monolithic application that did everything, whereas qmail was composed of distinct services, where the output of one was passed as input to another, each unique enough to be easier to maintain. This was, after all, a key part of the original UNIX philosophy, something that many of us who've been doing this for quite a while still hold in high esteem.

As a result, we set out to build four primary applications: a flaw database to store all of the vulnerability information (replacing Bugzilla as our backend), a frontend to that database to make it easy to add and update information, a registry of components to serve as a manifest of all our products and services so we can easily find where any given component lives, and finally a license scanner to ensure we meet our open source license compliance requirements. One of the core design principles was that the primary method of interaction be via APIs, such that we could write a frontend that no one was obligated to use (an authorized end user could recreate the SFM scripts of yore to interact with the flaw information from the command line). More importantly, the services could be integrated directly with other existing tooling, using standardized and open data interchange formats, rather than manually duplicating metadata from one platform to another.
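As a sketch of what that API-first principle buys you, here's a tiny command-line-style helper that builds a flaw query URL. The host, path, and parameter names are invented for illustration; they are not OSIDB's real interface:

```python
from urllib.parse import urlencode

def flaw_query_url(base, **filters):
    """Build a flaw-lookup URL from keyword filters (hypothetical API)."""
    query = urlencode(sorted(filters.items()))  # sort for a stable URL
    return f"{base}/flaws?{query}"

# An authorized user could script lookups like this instead of using a web UI
url = flaw_query_url("https://osidb.example.com/api/v1",
                     cve_id="CVE-2022-12345", affects="openshift")
```

Because the services speak plain HTTP with open data formats, the same queries slot into any existing tooling without copying metadata by hand.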

Further, another core principle was that these tools had to be developed in the open. We did this for a few reasons. First, we wanted others to be able to use and contribute to these tools. Second, it enforced a certain amount of rigor: we couldn't design these tools exclusively for our own use, so no more innersource.

With the experience and lessons learned from moving through not just one but two generations of tooling to support open source vulnerability management, we're pretty sure we chose the right path forward. Yet we're humble enough to know that others may have different needs, hence the invitation to join us in developing these tools. Nearly everyone, from large enterprise open source producers to the pizza shop down the street with its web and mobile applications, is a software developer. So there's a need for tools to manage vulnerabilities beyond homegrown ones, spreadsheets, and hacked-up add-ons to software or services not designed to handle a PSIRT process. There are a lot of tools for CSIRTs and developers, but not that many for incident response and coordination.

If you're interested in looking at or using any of these tools, we invite you to collaborate with us through GitHub. While we have been working on these for a while, we have only worked on three of the four tools to date. The fourth, the frontend to the flaw database, or the service layer that operates between these services, is yet to be started.

  • Component Registry: stores component information across any number of products and services

  • OSIDB: the Open Security Issue Database, which stores all vulnerability data

  • OpenLCS: the Open License and Crypto Scanner, which obtains license and cryptography information from shipped components

Product security incident response teams require a unique set of tools for the discovery and remediation of a vulnerability or security defect. Open source is the solution.

Image by:

Opensource.com


5 reasons to love Linux GNOME Files

Fri, 12/16/2022 - 16:00
sethkenlon

The GNOME desktop is a common default desktop for most Linux distributions and, as with most operating systems, you manage your data on GNOME with software called a file manager. GNOME promotes a simple and clear naming scheme for its applications, and so its file manager is called, simply, Files. Its intuitive interface is simple enough that you forget what operating system you're using altogether. You're just using a computer, managing files in the most obvious way. GNOME Files is a shining example of thoughtful, human-centric design, and it's an integral part of modern computing. These are my top five favorite things about GNOME Files, and why I love using it.

1. Intuitive design

Image by:

(Seth Kenlon, CC BY-SA 4.0)


As long as you've managed files on a computer before, you basically already know how to use GNOME Files. Sure, everybody loves innovation, and everybody loves seeing new ideas that make the computer world a little more exciting. However, there's a time and a place for everything, and frankly sometimes the familiar just feels better. Good file management is like breathing. It's something you do without thinking about what you're doing. When it becomes difficult for any reason, it's disruptive and uncomfortable.

GNOME Files doesn't have any surprises in store for you, at least not the kind that make you stop what you thought you were doing in order to recalculate and start again. And my favorite aspect of the "do it the way you think you should do it" design of GNOME Files is that there isn't only one way to accomplish a task. One thing I've learned from teaching people how to do things on computers is that everyone seems to have a slightly different workflow for even the simplest of tasks, so it's a relief that GNOME Files accounts for that.

When you need to move a file, do you open a second window so you can drag and drop between the two? Or do you right-click, Cut the file, navigate to the destination, and Paste it? Or do you drag the file onto a button or folder icon, blazing a trail through directories as they open for you? In GNOME Files, the "standard" assumptions usually apply (insofar as there are standard assumptions).

2. Space saver

If you manage a lot of files for a lot of the time you're at your computer, you're probably familiar with just how much screen real estate a file manager can take up. Many file managers have lots of buttons across several toolbars, a menu bar, and a status bar, such that just one file manager window takes up a good portion of your screen. To make matters worse, many users prefer to open several folders, each in its own window, which takes even more space.

GNOME Files tends to optimize space. What takes up three separate toolbars in other file managers is in a single toolbar in GNOME Files, and that toolbar is what would traditionally be the window title bar. In the top bar, there's a forward and back button, file path information, a view settings button, and a drop-down menu with access to common functions.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

3. Other locations

Not all operating systems or file managers make it so you can interact with your network as naturally as you can interact with your own computer. Linux has a long tradition of viewing the network as just another computer, and in fact, the name "GNOME" was an acronym for "GNU Network Object Model Environment."

In GNOME Files, it's trivial to open a folder on a computer you're not sitting in front of. Whether it's a server in a data center or just your office desktop while you're relaxing in your lounge with a laptop, the Other Locations bookmark in the GNOME Files side panel allows you to access files as if they were on your hard drive.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

To use it, you enter the file sharing protocol you want to use, along with the username and IP address of the computer you want to access. The ssh:// protocol is most common between Linux or Unix machines, while smb:// is useful in an environment with Windows machines, and dav:// is useful for applications running on the Internet. Assuming the target computer is accessible over the protocol you're using, and its firewall is set to permit you through, you can interact with a remote system as naturally as though its files were on your local machine.

4. Preferences

Most file managers have configuration options, and to be fair, GNOME Files doesn't give you very many choices compared to others. However, the options it does offer are, like the modes of working it offers its users, the "standard" ones. I'm misusing the word "standard" intentionally: There is no standard, and what feels standard to one person is niche to someone else. But if you like what you're experiencing with GNOME Files under normal circumstances, and you feel that you're its intended audience, then the configuration options it offers are in line with the experience it promotes. For example:

  • Sort folders before files

  • Expand folders in list view

  • Show the Create link option in the contextual menu

  • Show the Delete Permanently option in the contextual menu

  • Adjust visible information beneath a filename in icon view

That's nearly all the options you're given, and in a way they're surface-level choices. But that's GNOME Files. If you want something with more options, there are several very good alternatives that may better fit your style of work. If you're looking for a file manager that just covers the most common use cases, then try GNOME Files.

5. It's full of stars

I love the concept of metadata, and I generally hate the way it's not implemented. Metadata has the potential to be hugely useful in a pragmatic way, but it's usually relegated to specialized metadata editing applications, hidden from view and out of reach. GNOME Files humbly contributes to improving this situation with one simple feature: The gold star.

In GNOME Files, you can star any file or folder. It's a bit of metadata so simple that it's almost silly to call it metadata, but in my experience, it makes a world of difference. Instead of desperately running the find command to filter files by recent changes, or re-sorting a folder by modification time, or using grep to find that one string I just know is in an important file, I can star the files that are important to me.
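For context, these are the kinds of terminal searches that a star saves me from. The folder and search string here are hypothetical, set up just so the commands have something to find:

```shell
# Demo setup: a folder with one "important" file (made up for illustration)
mkdir -p /tmp/demo-docs
echo "zombie apocalypse plan" > /tmp/demo-docs/plan.txt

# List files changed in the last day
find /tmp/demo-docs -type f -mtime -1

# Find which file contains a string you remember
grep -rl "zombie apocalypse" /tmp/demo-docs
```

Starring replaces both lookups with a single click in the side panel.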

Making plans for the zombie apocalypse all day? Star it so you can find it tomorrow when you resume your important work. After it's over and the brain-eaters have been dealt with, un-star the folder and resume normal operation. It's simple. Maybe too simple for some. But I'm a heavy star-user, and it saves me several methods of searching and instead reduces "what was I working on?" to the click of a single button.

Install GNOME Files

If you've downloaded a mainstream Linux distribution, then chances are good that you already have GNOME and GNOME Files installed. However, not all distributions default to GNOME, and even those that do often have different desktops available for download. The development name of GNOME Files is nautilus, so to find out whether you have GNOME Files installed, open a terminal and type nautilus & and then press Return. If you see this error, you don't have GNOME Files available:

bash: nautilus: command not found...
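If you'd rather script that check than read the error message, here's a minimal sketch using the POSIX command -v builtin (the messages it prints are my own, not part of GNOME):

```shell
# Report whether the nautilus binary is on the PATH
if command -v nautilus >/dev/null 2>&1; then
    echo "GNOME Files is installed"
else
    echo "GNOME Files is not available"
fi
```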

To install GNOME Files, you must install the GNOME desktop. If you're happy with your current desktop, though, that's probably not what you want to do. Instead, consider trying PCManFM or Thunar.

If you're interested in GNOME, though, this is a great reason to try it. You can probably install GNOME from your distribution's repository or software center.

GNOME Files is the name of the file manager for the GNOME desktop. Its intuitive design is just one of the many reasons I love to use it.

Image by:

Gunnar Wortmann via Pixabay. Modified by Opensource.com. CC BY-SA 4.0.

Linux. This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Why Drupal is the future of content strategy

Thu, 12/15/2022 - 16:00
Why Drupal is the future of content strategy Suzanne Dergacheva Thu, 12/15/2022 - 03:00

As a long-time advocate for open source and a contributor to Drupal, I spend a lot of time thinking about how organizations can leverage the platform. I've been thinking about Drupal's position in the larger digital ecosystem and how it compares to the other options on the market. And how Drupal can lean more on its strengths when planning out the strategy for a new project.

How Drupal fits into the digital landscape

In 2022, putting a site online can be as fast as the time it takes to make a coffee. This is made possible because websites have many similar features and there's usually no need to build one from scratch. When I started my career, frameworks like Ruby on Rails were appealing because of their flexibility. But I quickly learned that the lack of standard solutions for common things like multilingual content, media management, and workflows, meant that each project required a huge investment in custom development.

On the other hand, web builders that have emerged over the last ten years, like Wix and Squarespace offer the dream of "drag-and-drop" website construction and customizable templates. But in reality, their flexibility is very surface-level. They don't offer enough flexibility to build a solid content model, create an experience, or provide the level of content compliance that large organizations need.

This is where Drupal stands out, providing both powerful functionality out-of-the-box, and the tools to build out custom functionality on top of that content.

Drupal, the content management system

When I started using Drupal 15 years ago, it was described as a content management system. And it is, as it gives content editors the power to log in and manage content, rather than relying on a webmaster or a web developer to do it.

But there was also the promise that site builders could update not just the content, but the content model. Site builders could extend Drupal using configuration instead of writing code. This set it apart from the frameworks that were out at the time. From years of teaching people Drupal, I can tell you that there's a certain amount of joy and empowerment that people get when they realize how much they can do through the Drupal admin UI.

At its core, this is still Drupal's strength. You can control not just the content, but how content is organized. The fact that taxonomy and localization are baked into Drupal's content model, gives a huge advantage over other systems that have a more limited concept of content.

Drupal, the platform

Shortly after adopting Drupal as our agency's technology of choice, I started calling it a platform. As an ambitious 20-something, I was keen to build more than nice-looking content-rich websites. The ambition was to create more powerful tools to organize the flow of information. This includes integrating Drupal with other systems to build functionality and workflows around your content. You can create content synchronizations between a CRM and Drupal. You can also build search interfaces that let users search diverse content sources and filter content in new ways.

The fact that Drupal is so adaptable to these architectures distinguishes it immediately from other CMSs. When talking to large organizations, teams of developers or IT leaders see the benefit of using a technology that is so flexible and adaptable to functional needs.

Drupal, the digital experience platform

While these attributes are still very compelling, Drupal is now referred to as a digital experience platform (DXP). Its main difference from the proprietary DXPs of the world is that it's open. It doesn't ship with a stack of integrated technologies but rather lets you decide what your stack will be. Whether it's for marketing integrations or multi-channel experiences, you can decide how content feeds into and out of Drupal. This flexibility is one of Drupal's strengths. But it can be a challenge when pitching Drupal against other DXPs that come with a complete marketing toolset.

Marketing folks often look for a packaged solution. And while an agency can package Drupal with a stack of tools, it's hard for Drupal to market this type of ready-to-go solution.

Drupal's strength as a content strategy platform

So how does Drupal position itself when talking to marketers? Drupal's core strength is still its flexible content architecture. This means that it's an ideal platform for implementing a content strategy and content governance plan. These are two things that plenty of organizations are missing. They are also the two reasons for marketers to adopt a platform like Drupal.

Better content strategy with Drupal

While Drupal can already be adapted to the content strategy of any organization, it doesn't mean that every Drupal website has a strong content strategy. Drupal implementers have to proactively make choices that prioritize the needs of content and content editors. This means doing things like:

  • Organizing content around user needs, not organizational structure

  • Structuring content to be reusable, adaptable, personalized, translatable

  • Integrating content with digital services by making content available via API

  • Setting up tools so that content compliance is checked systematically

Meanwhile, beyond the website, organizations need to use best practices to prioritize their content strategy practice. This means:

  • Empowering communicators and treating content editors as first-class users

  • Sharing best practices for web publishing across the organization

  • Creating a clear, actionable content governance plan

  • Using tools like the digital asset management (DAM) tool that fosters content governance

  • Creating a smooth flow of content and feedback between content experts and users

With new expectations of platforms to handle personalization and faster cycles for re-branding or implementing a completely new marketing strategy, it's more important than ever for your website to be a tool to help your content strategy. If you're looking for ways to orient your practice around a strong content strategy, here are some places to start:

  • Get content editors involved in the process when launching a new web project

  • Build documentation that's driven by content needs, not just technology. Use real content examples in your documentation and talk about the "why" of the content.

  • Prioritize ongoing content governance rather than just relying on big projects to revamp your content every 3-5 years

  • Invest in cleaning up legacy content instead of migrating content as-is when you do invest in a website redesign

  • Invest in the content editor experience, something that Drupal facilitates and continues to invest in, but still takes active effort to do for each project

To sum up, Drupal is already a CMS and a DXP. But this is beside the point. There is a need to leverage Drupal's capabilities towards creating a strong content strategy to really get the most out of the platform.

This article is based on the author's talk at DrupalCon Portland: Future of content management: using Drupal as a content strategy platform.

Drupal is already a robust content management system and digital experience platform. It's also playing a critical role in content strategy.

Image by:

Image by Mapbox Uncharted ERG, CC-BY 3.0 US

Drupal Business. This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Improve your documentation with JavaScript

Thu, 12/15/2022 - 16:00
Improve your documentation with JavaScript Jim Hall Thu, 12/15/2022 - 03:00

Open source software projects often have a very diverse user group. Some users might be very adept at using the system and need very little documentation. For these power users, documentation might only need to be reminders and hints, and can include more technical information such as commands to run at the shell. But other users may be beginners. These users need more help in setting up the system and learning how to use it.

Writing documentation that suits both user groups can be daunting. The website's documentation needs to somehow balance detailed technical information with broader overview and guidance. This is a difficult balance to find. If your documentation can't serve both user groups, consider a third option: dynamic documentation.

Explore how to add a little JavaScript to a web page so the user can choose to display just the information they want to see.

Structure your content

Consider an example where documentation needs to suit both expert and novice users. In the simplest case, the documentation is for a made-up music player called AwesomeProject.

You can write a short installation document in HTML that provides instructions for both experts and novices by using the class feature in HTML. For example, you can define a paragraph intended for experts by using:

<p class="expert reader">

This assigns both the expert class and the reader class. You can create a parallel set of instructions for novices using:

<p class="novice reader">

The complete HTML file includes both paragraphs for novice readers and experts:



<!DOCTYPE html>

<html lang="en">

<head>

<title>How to install the software</title>
</head>

<body>

<h1>How to install the software</h1>

<p>Thanks for installing AwesomeProject! With AwesomeProject,
you can manage your music collection like a wizard.</p>

<p>But first, we need to install it:</p>

<p class="expert reader">You can install AwesomeProject from
source. Download the tar file, extract it, then run:
<code>./configure ; make ; make install</code></p>

<p class="novice reader">AwesomeProject is available in
most Linux distributions. Check your graphical package manager and search for AwesomeProject to install it.</p>

</body>

</html>

This sample HTML document doesn't have a stylesheet associated with it, so viewing this in a web browser shows both paragraphs:

Image by:

(Jim Hall, CC BY-SA 4.0)

We can apply some basic styling to the document to highlight any element with the reader, expert, or novice classes. To make the different text classes easier to differentiate, let's set the reader class to an off-white background color, expert to a dark red text color, and novice to a dark blue text color:



<!DOCTYPE html>

<html lang="en">

<head>

<title>How to install the software</title>

<style>

.reader {
background-color: ghostwhite;
}

.expert {
color: darkred;
}

.novice {
color: darkblue;
}

</style>

</head>

<body>

<h1>How to install the software</h1>

<!-- ...the rest of the body is the same as in the previous example... -->

</body>

</html>

These styles help the two sections stand out when you view the page in a web browser. Both paragraphs with the installation instructions have an off-white background color because they both have the reader class. The first paragraph uses dark red text, as defined by the expert class. The second installation paragraph is in dark blue text, from the novice class:

Image by:

(Jim Hall, CC BY-SA 4.0)

Add JavaScript controls

With these classes applied, you can add a short JavaScript function that shows just one of the content blocks. One way to write this function is to first set display:none to all of the elements with the reader class. This hides the content so it won't display on the page. Then the function should set display:block to each of the elements with the class you want to display:

<script>
function readerview(audience) {
  var list, item;
  // hide all class="reader"
  list = document.getElementsByClassName("reader");
  for (item = 0; item < list.length; item++) {
    list[item].style.display = "none";
  }
  // show all class=audience
  list = document.getElementsByClassName(audience);
  for (item = 0; item < list.length; item++) {
    list[item].style.display = "block";
  }
}
</script>

To use this JavaScript in the HTML document, you can attach the function to a button. Since the readerview function takes an audience as its parameter, you can call the function with the audience class that you want to view, either novice or expert:



<!DOCTYPE html>

<html lang="en">

<head>

<title>How to install the software</title>

<style>

.reader {
background-color: ghostwhite;
}

.expert {
color: darkred;
}

.novice {
color: darkblue;
}

</style>

</head>

<body>

<script>

function readerview(audience) {
  var list, item;

  // hide all class="reader"

  list = document.getElementsByClassName("reader");

  for (item = 0; item < list.length; item++) {

   list[item].style.display = "none";
 }

 // show all class=audience

 list = document.getElementsByClassName(audience);

 for (item = 0; item < list.length; item++) {

   list[item].style.display = "block";
 }

}

</script>

<h1>How to install the software</h1>

<nav>

<button onclick="readerview('novice')">view novice text</button>

<button onclick="readerview('expert')">view expert text</button>

</nav>

<p>Thanks for installing AwesomeProject! With AwesomeProject,
you can manage your music collection like a wizard.</p>

<p>But first, we need to install it:</p>
<p class="expert reader">You can install AwesomeProject from
source. Download the tar file, extract it, then run
<code>./configure ; make ; make install</code></p>

<p class="novice reader">AwesomeProject is available in
most Linux distributions. Check your graphical package
manager and search for AwesomeProject to install it.</p>

</body>

</html>

With these controls in place, the web page now allows the user to select the text they want to see:

Image by:

(Jim Hall, CC BY-SA 4.0)

Clicking either button will show just the text the user wants to read. For example, if you click the “view novice text” button, then you'll see just the blue paragraph:

Image by:

(Jim Hall, CC BY-SA 4.0)

Clicking the “view expert text” button hides the novice text and shows only the expert text in red:

Image by:

(Jim Hall, CC BY-SA 4.0)

Extend this to your documentation

If your project requires you to write multiple how-to documents for different audiences, consider using this method to publish once and read twice. Writing a single document for all your users makes it easy for everyone to find and share the documentation for your project. And you won't have to maintain parallel documentation that varies in just the details.

Make your open source project documentation dynamic so it appeals to users of all experience levels.

Image by:

Original photo by Marco Tedaldi. Modified by Rikki Endsley. CC BY-SA 2.0.

Documentation JavaScript. This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Enjoy two-panel file management on Linux with far2l

Thu, 12/15/2022 - 16:00
Enjoy two-panel file management on Linux with far2l Seth Kenlon Thu, 12/15/2022 - 03:00

Far2l is a port of the Windows text-based file manager Far. And to be clear, that's a lower-case L (as in "Linux") not a number 1. It runs in the terminal and is designed around a plug-in structure, enabling compatibility with SSH, WebDAV, NFS, and more. You can compile and run far2l on Linux, Mac, and BSD, or Far on Windows.

Install far2l

Far2l is currently in beta, so you're unlikely to find it in your Linux distribution's software repository. However, you can compile it from source after cloning its Git repository:

$ git clone --depth 1 https://github.com/elfmz/far2l.git

You can browse through the source code to see all of its different components. The main source files are in utils/src:

SharedResource.cpp
StackSerializer.cpp
StringConfig.cpp
StrPrintf.cpp
TestPath.cpp
Threaded.cpp
ThreadedWorkQueue.cpp
TimeUtils.cpp
TTYRawMode.cpp
utils.cpp
WideMB.cpp
ZombieControl.cpp

The file ZombieControl.cpp works to mitigate a zombie apocalypse (at least, in terms of processes), while ThreadedWorkQueue.cpp helps speed processes along by using threading. Far2l isn't just built for extensibility, it's built responsibly!

Assuming you've already prepared your system for compiling code, as described in the compiling from source article, you must also install some development libraries required by far2l. On Fedora, CentOS, OpenMandriva, and Mageia, the minimal list is:

  • wxGTK3-devel

  • spdlog-devel

  • xerces-c-devel

  • uchardet-devel (your repository may not have this one, but there's a workaround)

On Debian, the minimal list is:

  • libwxgtk3.0-gtk3-dev

  • libuchardet-dev

  • libspdlog-dev

  • libxerces-c-dev

Use CMake to prepare the makefiles:

$ mkdir build
$ cd !$
$ cmake .. -DUSEUCD=no

The -DUSEUCD=no option is required only if you don't have the development libraries for uchardet installed. If you do, then you can omit that option.

Finally, compile the code and install far2l to a temporary location:

$ make -j$(nproc --all)
$ mkdir ~/far2l
$ make install DESTDIR=~/far2l

If you prefer to install it to your system instead of to a temporary directory, then omit the DESTDIR=~/far2l option.

To launch far2l, invoke the binary stored in the bin subdirectory of your install path. For instance:

$ ~/far2l/local/bin/far2l

Using far2l

When you first launch far2l, it creates a configuration directory in ~/.config and prompts you to choose what font you'd like to use. On my system, 16 pt font size was the default, and anything less than that was impossible to read. I used the open source Fantasque Mono Regular as my font, but any monospace font ought to work.

Far2l is a two-panel file manager, meaning that the default view has a place to display two separate directories. At launch, both directories happen to be your home directory. To maximize the amount of screen space used for listing files, far2l uses two columns in each panel, and you can use the Left and Right arrows to change from one column to the other.

In the right column, you can also use the Right arrow to move "down" the list of files by one screen. In the left column, use the Left arrow to move "up" the list of files by one screen.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

This navigation takes some getting used to, especially if you're used to terminal file managers that only use the Right arrow to descend into a directory. However, once you get used to far2l's navigation, you're likely to appreciate the added speed you gain from this simple pagination.

Open a file or folder

To open a folder, select a folder in your file list and press the Return key. This causes the active panel to change to a view of that directory. The inactive panel doesn't change, so it's not uncommon for far2l to always be showing two different directories at the same time. That's a feature of the two-panel file manager design, although it can take some getting used to if you're not in the habit of splitting windows.

After you've moved into a directory, you can move back into its parent folder by selecting the double dots (..) at the top of the file listing and pressing Return.

To open a file, select a file in your file list and press the Return key. The file opens according to your desktop's mimetype preferences.

Panel and window navigation

To move from one panel to another, press the Tab key.

The fun thing about far2l is that its file listing is actually a layer over the top of your terminal. To hide the file listing temporarily, and to reveal it once it's gone, press Ctrl+O (that's the letter O not the digit zero).

You can also adjust how much of your terminal the file panels take up. Press Ctrl+Up and Ctrl+Down to adjust the vertical size of the file panels.

Make no mistake, though, you're not just suspending far2l when you access the terminal underneath. This isn't your usual terminal, it's a far2l terminal that interacts with the file manager and adds a few features to your standard terminal experience. For example, the find command gains graphical auto-completion.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Copying and moving files

All the usual file management functions in far2l are available through function keys. These are listed along the bottom of the far2l window. Some actions have lots of options, which is either over-complex or really powerful, depending on your preference.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Exiting far2l

To close far2l, type exit far into the command prompt at the bottom of the far2l window.

Far out

Far2l is a dynamic and responsive text-based file manager. If you're a fan of classic two-panel file managers, then you'll feel at home with far2l. Far2l provides an interesting and novel interpretation of a terminal, and if you don't try far2l for its two-panel file management, you should at least try it for its terminal.

Far2l runs in the Linux terminal and is designed around a plug-in structure, enabling compatibility with SSH, WebDAV, NFS, and more.

Image by:

Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0

Linux. This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
