
How I manage files from the Linux command line

Jim Hall | Wed, 07/27/2022

Managing files in a graphical desktop like GNOME or KDE is an exercise in point-and-click. To move a file into a folder, you click and drag the icon to its new home. To remove a file, you drag it into the “Trash” icon. The graphical interface makes desktop computing easy to use.

But we don't always interact with Linux systems with a graphical interface. If you work on a server, you likely need to use the command line to get around. Even desktop users like me might prefer to interact with their system through a terminal and command line. I tend to rely on a few commands to manage my files from the command line:

List files with Linux ls

For anyone who uses the command line, you can't get far without seeing what's there. The ls command lists the contents of a directory. For example, to look at what's in a web server's document root in /var/www/html, you can type:

ls /var/www/html

Most of the time, I use ls to look at the directory I'm in. To do that, just type ls to list everything. For example, when I'm in the root directory of my web project, I might see this:

$ ls
about  fontawesome      fonts   index.php  styles
docs   fontawesome.zip  images  prism

The ls command has about 60 command line options that can list files and directories in all kinds of ways. One useful option is -l to provide a long or detailed listing, including permissions, file size, and owner:

$ ls -l
total 6252
drwxrwxr-x. 2 jhall jhall    4096 Jun 22 16:18 about
drwxr-xr-x. 2 jhall jhall    4096 Jun 25 16:35 docs
drwxr-xr-x. 2 jhall jhall    4096 Jun  7 00:00 fontawesome
-rw-r--r--. 1 jhall jhall 6365962 Jun  2 16:26 fontawesome.zip
drwxrwxr-x. 2 jhall jhall    4096 Jun 22 16:17 fonts
drwxr-xr-x. 2 jhall jhall    4096 Jun 25 13:03 images
-rw-rw-r--. 1 jhall jhall     327 Jun 22 16:38 index.php
drwxrwxr-x. 2 jhall jhall    4096 Jun 22 16:18 prism
drwxrwxr-x. 2 jhall jhall    4096 Jun 22 16:17 styles

File sizes are shown in bytes, which may not be useful if you are looking at very large files. To see file sizes in a format that is helpful to humans, add the -h or --human-readable option to print sizes with G for Gigabyte, M for Megabyte, and K for Kilobyte:

$ ls -l --human-readable
total 6.2M
drwxrwxr-x. 2 jhall jhall 4.0K Jun 22 16:18 about
drwxr-xr-x. 2 jhall jhall 4.0K Jun 25 16:35 docs
drwxr-xr-x. 2 jhall jhall 4.0K Jun  7 00:00 fontawesome
-rw-r--r--. 1 jhall jhall 6.1M Jun  2 16:26 fontawesome.zip
drwxrwxr-x. 2 jhall jhall 4.0K Jun 22 16:17 fonts
drwxr-xr-x. 2 jhall jhall 4.0K Jun 25 13:03 images
-rw-rw-r--. 1 jhall jhall  327 Jun 22 16:38 index.php
drwxrwxr-x. 2 jhall jhall 4.0K Jun 22 16:18 prism
drwxrwxr-x. 2 jhall jhall 4.0K Jun 22 16:17 styles

Rather than 6365962 for the file size, ls now displays the zip file as 6.1M or just over 6 MB in size.
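
As with most Linux commands, the short options can be combined, so the same long, human-readable listing is often typed in its shorter form:

$ ls -lh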

View files with Linux cat, head, and tail

The next step after listing files is examining what each file contains. For that, I use a few commands. Starting with the docs directory on my web server:

$ ls docs
chapter1.tex  chapter4.tex  chapter7.tex  lorem.txt
chapter2.tex  chapter5.tex  chapter8.tex  readme.txt
chapter3.tex  chapter6.tex  chapter9.tex  workbook.tex

What are these files? Fortunately, this directory has a readme.txt file, which I might assume contains a description of the files in this project directory. If the file is not too long, I can view it using the cat command:

$ cat docs/readme.txt
This is the workbook for the C programming self-paced
video series. The main file is the workbook.tex file,
which includes the other chapters.

If a file is very long, I can look at just the first few lines using the head command. This displays a certain number of lines from the file, usually the first 10 lines unless you tell head otherwise with the -n or --lines option. For example, these two versions of the head command examine the first three lines of the lorem.txt file:

$ head -n 3 docs/lorem.txt
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam at ligula
eget nunc feugiat pharetra. Nullam nec vulputate augue. Suspendisse
tincidunt aliquet
$ head --lines=3 docs/lorem.txt
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam at ligula
eget nunc feugiat pharetra. Nullam nec vulputate augue. Suspendisse
tincidunt aliquet

If I instead want to see the last few lines of a file, I can use the tail command in the same way. Again, these two tail commands each show the last three lines of the lorem.txt file:

$ tail -n 3 docs/lorem.txt
egestas sodales. Vivamus tincidunt ex sed tellus tincidunt varius. Nunc
commodo volutpat risus, vitae luctus lacus malesuada tempor. Nulla
facilisi.
$ tail --lines=3 docs/lorem.txt
egestas sodales. Vivamus tincidunt ex sed tellus tincidunt varius. Nunc
commodo volutpat risus, vitae luctus lacus malesuada tempor. Nulla
facilisi.

The head and tail commands are also useful when examining log files on a server. I run a small web server on my home network to test websites before I make them live. I recently discovered that the web server's log was quite long, and I wondered how old it was. Using head, I printed just the first line to see that the log file was created in December 2020:

$ ls -l --human-readable /var/log/httpd
total 13M
-rw-r--r--. 1 root root 13M Jun 25 16:23 access_log
-rw-r--r--. 1 root root 45K Jun  2 00:00 error_log
$ sudo head -n 1 /var/log/httpd/access_log
10.0.0.177 - - [05/Dec/2020:14:58:35 -0600] "GET / HTTP/1.1" 403 5564 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"


[ Related read: Getting started with the Linux cat command ]

Delete files with Linux rm

In my directory with the sample text files, the lorem.txt file contains Lorem Ipsum text. This is just dummy text used in the printing industry, so the lorem.txt file doesn't really belong in this project. Let's delete it. The rm command removes a file like this:

$ ls docs
chapter1.tex  chapter4.tex  chapter7.tex  lorem.txt
chapter2.tex  chapter5.tex  chapter8.tex  readme.txt
chapter3.tex  chapter6.tex  chapter9.tex  workbook.tex
$ rm docs/lorem.txt
$ ls docs
chapter1.tex  chapter4.tex  chapter7.tex  readme.txt
chapter2.tex  chapter5.tex  chapter8.tex  workbook.tex
chapter3.tex  chapter6.tex  chapter9.tex

The rm command is dangerous because it removes a file immediately, with no trash or recycle bin to intervene. It's much safer to install a trash command, such as trashy or trash-cli. Then you can send files to a staging area before deleting them forever:

$ trash docs/lorem.txt
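
If you go with trash-cli specifically, it provides a small set of commands for working with the trash; a minimal session might look like this:

$ trash-put docs/lorem.txt     # move the file to the trash instead of deleting it
$ trash-list                   # review what is in the trash, with deletion dates
$ trash-restore                # interactively restore a trashed file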

Managing files on the command line requires only a few commands. The ls command lists the contents of a directory, and cat, head, and tail show the contents of files. Use rm or a safe "trash" command to remove files you don't need. These five commands will help you manage your files on any Linux system. To learn more, including the available options, add --help to any of them, such as ls --help for a summary of how to use the ls command.


My honest review of the HP Dev One

Anderson Silva | Wed, 07/27/2022

A few weeks ago, HP joined the bandwagon of major laptop manufacturers releasing a Linux-based laptop, the HP Dev One. The brand joins others, such as Lenovo and Dell, in offering a laptop with a pre-installed Linux distribution in the US market. HP joined forces with the smaller Linux-focused laptop maker System76 to pre-install Pop!_OS as the distribution of choice on the device. Pop!_OS is an Ubuntu-based distribution that System76 started, and currently maintains, to get the most out of the laptops it sells on its own website.

This article is a quick look at the HP Dev One, including first impressions of the hardware itself and running the pre-installed Pop!_OS and then Fedora on it after a few days. It is not about comparing them, just a few notes on how well they did on the HP Dev One.

HP Dev One hardware

I haven’t owned an HP laptop in over a decade. I don’t even remember why I distanced myself from the brand, but somehow it just happened. So, when I read about the HP Dev One, several things sparked my interest. Here’s a list of them. Some may be silly or nit-picking, but they still carried some weight in my decision:

  • The most obvious reason was it came with Linux and not Windows.
  • I have never used Pop!_OS, so the fact that HP chose Pop!_OS made me curious to use it.
  • I have never owned an AMD-based laptop. The HP Dev One comes with an AMD RYZEN™ 7 PRO 5850U Processor with eight cores and 16 threads.
  • The specs versus price seemed good. The price is $1099 USD, which is very reasonable compared to other brands with similar specs.
  • No Windows key on the keyboard. Instead, it says “super,” which I think is cool.
  • Upgradeable RAM. The laptop comes with 16 GB of RAM, but unlike so many laptops nowadays, it is not soldered on the board, so you can upgrade it (more on upgrading below).
  • The laptop was in stock with a commitment for fast shipping.
  • Reviews were favorable.

For all of the reasons above, I ordered it, and two days later, I had the HP Dev One on my doorstep.

Image by: Anderson Silva, CC BY-SA 4.0

By the time the laptop arrived, the extra 64 GB of RAM I had ordered had also arrived, so the first thing I wanted to do was upgrade the RAM. It turned out that the bottom plate of the HP Dev One has very small, special (yet not proprietary) screws, so I had to run to the hardware store to get the proper screwdriver.

Image by: Anderson Silva, CC BY-SA 4.0

I agree with other online reviews regarding the quality of the laptop. It does feel sturdy. The trackpad feel is good enough, and I had no issue with it. I didn't find the keyboard to be as good as some other reviewers claim. To me, the keys are a little heavy, and they feel almost like silicone or rubber. I didn't find it terribly comfortable. In fact, I am typing this article on the HP Dev One, and I almost feel like I need to take a break here and there to let my fingertips rest.

The 1080p screen is bright but also very reflective. If you are a ThinkPad TrackPoint fan, you will definitely enjoy the equivalent pointing stick on the HP Dev One. The backlit keyboard is nice, and the built-in camera cover is something more laptops should adopt.

Image by: Anderson Silva, CC BY-SA 4.0

One open question I have about the HP Dev One: the website mentions one year of customer service and warranty coverage, but I haven't been able to find a way to extend that warranty, or to upgrade to something more premium like onsite or next-day part replacement, in case I ever need it.

Pop!_OS on HP Dev One

As previously mentioned, I’ve never used Pop!_OS. I’ve used Ubuntu and many other distributions, but to me, Pop!_OS was a relatively familiar yet new experience. Here are a few notes:

The good
  • Coming from a Fedora and mostly vanilla GNOME background, I needed a little time to get used to the four-finger gestures in Pop!_OS, but once I got into them, they made navigating its implementation of GNOME borderline fun.
  • Pop!_OS's software shop is called the Pop!_Shop. It contains a great variety of software without any need to enable special repositories. It is easy to use, and installation is fast.
  • The integration between Pop!_OS and the hardware really is well done. Congratulations to both System76 and HP engineers for putting this machine together.
  • Pop!_OS has a nice built-in feature for backing up and restoring your installation without destroying your home directory.
  • I installed Steam and a few games, and it worked pretty well.
  • I ran a couple of test containers with podman, and it worked very nicely as well.
  • I installed a virtual machine, and it ran very smoothly, too.
The not so good
  • The default wallpaper that comes with the HP Dev One looks a bit fuzzy to me. For such a nice machine, it feels like the wallpaper is running at the wrong resolution. To be fair, there are other wallpapers to choose from in the Settings.
  • The out-of-the-box fonts for Pop!_OS could be better. I find them hard to read, and in my opinion, they make the UI feel too cramped.
  • Adding the USB-C powered 4K monitor worked OK, but my eyes noticed slight flickering in some parts of the screen. Could this be an X11 issue, given that Pop!_OS defaults to X11?
Fedora on HP Dev One

I played around with Pop!_OS for about two days before deciding to boot into Fedora using a live USB media. I did that first to see what type of hardware detection Fedora could do out of the box. To my surprise, everything worked right away. That’s when I decided to wipe the entire 1 TB SSD and install Fedora on the HP Dev One. As promised, this is not a Fedora vs. Pop!_OS comparison article; it is merely a few notes on both distributions running on this Linux-focused hardware.

In case you haven't read my bio, and for the sake of transparency: I am a Fedora contributor, so it is fair to say that I am biased towards the Fedora distribution. But don't let that make you think I recommend Fedora over Pop!_OS on the HP Dev One. They are both great distributions, and they both run very nicely on it. Take your pick!

I can tell you that Fedora runs smoothly on the HP Dev One, and although it may take some performance tuning to match some of Pop!_OS's benchmark numbers, I have been very pleased with its performance. Using the three-finger gestures to move between virtual desktops is a lot more natural to me than the four-finger ones in Pop!_OS, and I've been able to run Steam and Proton-based games on Fedora just as I did on Pop!_OS.

The only comparison I will make is that when using the secondary USB-C 4K monitor with Fedora, I did not experience any flickering. Was it because of Wayland?

Final thoughts

I’ve had the HP Dev One for a little over four days now, and I’ve run Pop!_OS and Fedora on it so far. I even restored Pop!_OS after a full Fedora installation, which was a very easy process. Somehow, Pop!_OS detected it was an HP Dev One and did all the needed installation, including the HP-based wallpapers, without me having to do any extra steps.

As I finished this article, I went back to Fedora yet again (force of habit), but I wouldn't have any issue staying on Pop!_OS on the HP Dev One. Who knows, maybe I'll even try different distributions in the future.

At the end of the day, the HP Dev One is a solid Linux laptop with no Windows key and no AMD, Intel, or Windows stickers on it. It is fast, feels well built, and is reasonably priced, especially given how quickly it ships (US only). I would love to see HP provide more documentation on their website about extending the warranty, and I hope they will be able to make this laptop available in other parts of the world.


How I use Bash to automate tasks on Linux

Jim Hall | Tue, 07/26/2022

The Bash command line is a great way to automate tasks. Whether you are running Linux on a server and need to manipulate log files or other data, or you're a desktop user who just wants to keep files tidy, you can use a few automation features in Bash to make your work easier.

Linux for command: Automate tasks on files

If you have a bunch of files to work on at once, and you need to do the same thing with every file, use the for command. This command iterates across a list of files, and executes one or more commands. The for command looks like this:

for variable in list
do
    commands
done

I've added some extra spacing in there to help separate the different parts of the for command. That multi-line command might look difficult to run on the command line, but you can use ; to put everything on one line, like this:

for variable in list ; do commands ; done

Let's see it in action. One way I use the for command is to rename a bunch of files. Most recently, I had a bunch of screenshots that I wanted to rename. The screenshots had names like filemgr.png or terminal.png, and I wanted to put screenshot- at the front of each name. I ran a single for command to rename thirty files at once. Here's an example with just two files:

$ ls
filemgr.png  terminal.png
$ for f in *.png ; do mv $f screenshot-$f ; done
$ ls
screenshot-filemgr.png  screenshot-terminal.png

The for command makes it easy to perform one or more actions on a set of files. You can use a variable name that is meaningful to you, such as image or screenshot, or you can use a "shorthand" variable like f, as I did in my example. When I write scripts that use a for loop, I try to use meaningful variable names. But when I'm using for on the command line, I'll usually use a short variable name like f for files or d for directories.

Whatever name you choose for your variable, be sure to reference the variable using $ in the command. This expands the variable to the name of the file you are acting on. Type help for at your Bash prompt to learn more about the for command.
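
One habit worth adding to that advice: if your file names might contain spaces, wrap the variable in double quotes when you expand it, so each name stays a single argument. A minimal variation on the renaming example above:

$ for f in *.png ; do mv "$f" "screenshot-$f" ; done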

Linux conditional execution (if)

Looping across a set of files with for is helpful when you need to do the same thing with every file. But what if you need to do something different for certain files? For that, you need conditional execution with the if statement. The if statement looks like this:

if test
then
    commands
fi

You can also do if/else tests by using the else keyword:

if test
then
    commands
else
    commands
fi
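
As a small, concrete sketch, an if/else test might report whether a particular file is present. The file name here is just an illustration:

if [ -f index.php ]
then
    echo "index.php is here"
else
    echo "no index.php in this directory"
fi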

For more complicated processing, you can use if/else-if/else evaluations. I might use this in a script, when I need to automate a job to process a collection of files at once:

if test
then
    commands
elif test2
then
    commands
elif test3
then
    commands
else
    commands
fi
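
For example, a script might classify whatever entry it's examining. The entry variable below is a placeholder, but the -d, -f, and -e tests are standard:

entry="docs"    # placeholder; set this to whatever your script is examining
if [ -d "$entry" ]
then
    echo "$entry is a directory"
elif [ -f "$entry" ]
then
    echo "$entry is a regular file"
elif [ -e "$entry" ]
then
    echo "$entry exists but is neither a regular file nor a directory"
else
    echo "$entry does not exist"
fi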

The if command allows you to perform many different tests, such as if a file is really a file, or if a file is empty (zero size). Type help test at your Bash prompt to see the different kinds of tests you can use in an if statement.

For example, let's say I wanted to clean up a log directory that had several dozen files in it. A common task in log management is to delete any empty logs, and compress the other logs. The easiest way to tackle this is to just delete the empty files. There isn't an if test that exactly matches that, but we have -s file to test if something is a file, and if the file is not empty (it has a size). That's the opposite of what we want, but we can negate the test with ! to see if something is not a file or is empty.

Let's look at an example to see this at work. I've created two test files: one is empty, and the other contains some data. We can use if to print the message "empty" if the file is empty:

$ ls
datafile  emptyfile
$ if [ ! -s datafile ] ; then echo "empty" ; fi
$ if [ ! -s emptyfile ] ; then echo "empty" ; fi
empty

We can combine this with for to examine a list of log files to delete the empty files for us:

$ ls -l
total 20
-rw-rw-r--. 1 jhall jhall 2 Jul  1 01:02 log.1
-rw-rw-r--. 1 jhall jhall 2 Jul  2 01:02 log.2
-rw-rw-r--. 1 jhall jhall 2 Jul  3 01:02 log.3
-rw-rw-r--. 1 jhall jhall 0 Jul  4 01:02 log.4
-rw-rw-r--. 1 jhall jhall 2 Jul  5 01:02 log.5
-rw-rw-r--. 1 jhall jhall 0 Jul  6 01:02 log.6
-rw-rw-r--. 1 jhall jhall 2 Jul  7 01:02 log.7
$ for f in log.* ; do if [ ! -s $f ] ; then rm -v $f ; fi ; done
removed 'log.4'
removed 'log.6'
$ ls -l
total 20
-rw-rw-r--. 1 jhall jhall 2 Jul  1 01:02 log.1
-rw-rw-r--. 1 jhall jhall 2 Jul  2 01:02 log.2
-rw-rw-r--. 1 jhall jhall 2 Jul  3 01:02 log.3
-rw-rw-r--. 1 jhall jhall 2 Jul  5 01:02 log.5
-rw-rw-r--. 1 jhall jhall 2 Jul  7 01:02 log.7

Using the if command can add some intelligence to scripts, to perform actions only when needed. I often use if in scripts when I need to test if a file does or does not exist on my system, or if the entry the script is examining is a file or directory. Using if allows my script to take different actions as needed.
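
The existence check mentioned above is the -e test; negate it with ! to catch a missing file. The file name here is only a placeholder:

$ if [ ! -e config.txt ] ; then echo "config.txt is missing" ; fi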

Bash has a few handy automation features that make my life easier when working with files on Linux.


Shrink PDFs with this Linux tool

Howard Fosdick | Tue, 07/26/2022

Excluding HTML, PDF files are probably the most popular document format on the web. Unfortunately, they're not compact. For example, I like to download free eBooks. A quick glance at my eBook directory shows that its 75 PDF files consume about 500 megabytes. On average, that's over 6.6 MB for each PDF file.

Couldn't I save some storage space by compressing those files? What if I want to send a bundle of them through email? Or host them for download on a website? The transmission would be faster if these files were made smaller. This article shows a simple way to reduce PDF file size. The benefit is that it shrinks your PDFs transparently without altering the data content in any way. Plus, you can also compact many PDF files with a single command.

Compare this to the alternatives. You could upload your PDF files to one of the many online file compression websites. Several are free, but you risk the privacy of your documents by uploading them to an unknown website. More importantly, most websites shrink PDFs by tampering with the images they contain. They either change their resolution or their sizes. So you trade lower image quality to get smaller PDF files. That's the same trade-off you face using interactive apps like LibreOffice, or Ghostscript command-line tools like gs and ps2pdf. The technique we'll illustrate in this article compacts PDFs without altering either the images they contain or their data content. And you can reduce many PDFs with a single command. Let's get started.

Identify and delete big unused PDFs on Linux

Before you spend time and effort compacting PDF files, identify your largest ones and delete those you don't need. This command lists the 50 biggest PDFs in its directory tree, ordered by descending size:

$ find . -type f -exec du -Sh {} + | grep '\.pdf' | sort -rh | head -n 50

From the output, you can easily identify and eliminate duplicates. You can also delete obsolete files. Getting rid of these space hogs yields big benefits. Now you know which PDFs are the high payback candidates for the reduction technique we'll now cover.

Transparently compact PDFs

We'll use the open source Minuimus program to compact PDFs. Minuimus is a generalized command-line utility that performs all sorts of useful file conversions and compressions. To shrink PDFs, Minuimus unloads and then rebuilds them, gaining numerous efficiencies along the way. It does this transparently, without altering your data in any way.

To use Minuimus, download its zip file. Then install it as its documentation explains, with these commands:

$ make deps      # Installs all required supporting packages
$ make all       # Compiles helper binaries
$ make install   # Copies all needed files to /usr/bin
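
Once the install step finishes, a quick check confirms the script is on your PATH. This is ordinary shell, not a Minuimus feature, and the path shown simply reflects the /usr/bin destination mentioned above:

$ command -v minuimus.pl
/usr/bin/minuimus.pl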

Minuimus is a Perl script, so you run it like this:

$ minuimus.pl  input_file.pdf    # replaces the input file with compressed output

When it runs, Minuimus immediately makes a backup of your original input file. It only replaces the input file with its compacted version after it fully verifies data accuracy by comparing before and after bitmaps representing the data.

A big benefit to Minuimus is that it validates any PDF file it works on. I've found that it gives intelligent, helpful error messages if it encounters any problems. For example, on one of my computers, Minuimus said that it couldn't properly invoke a utility it uses called leanify. Yet it still shrunk the PDFs and ran to successful completion.

Here's how to compact many files in one command. This compresses all the PDF files in a directory:

$ minuimus.pl *.pdf

If you have lots of PDFs to convert, Minuimus can run for quite a while. So if you're converting hundreds of PDFs, for example, you might want to run it as a background job, scheduled for off-hours through your GUI scheduler or as a cron job.
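
As a sketch of what that might look like in a crontab, assuming the PDFs live in /home/user/ebooks (adjust the schedule and paths for your own setup):

# minute hour day-of-month month day-of-week  command
# This example runs at 2:30 AM every Saturday and logs its output.
30 2 * * 6  cd /home/user/ebooks && /usr/bin/minuimus.pl *.pdf >> minuimus.log 2>&1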

Be sure to redirect its output from the terminal to files so that you can easily review it later:

$ minuimus.pl *.pdf 1>output_messages.txt 2>error_messages.txt

How much space will you reclaim?

Unfortunately, there's no way to predict how much space Minuimus can save. That's because PDFs contain anything from text to images of all different kinds. They vary enormously. I ran Minuimus on my download directory of PDF books. The directory contained 75 PDFs consuming about 500 MB. Minuimus reduced it by about 11%, to about 445 MB. That's impressive for an algorithm that doesn't change the data.
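
If you want to measure the savings on your own collection, a quick before-and-after check with du works well. The directory path is just an example, and the sizes shown simply mirror the numbers from my eBook directory:

$ du -sh ~/Downloads/ebooks    # before running Minuimus
500M    /home/user/Downloads/ebooks
$ du -sh ~/Downloads/ebooks    # after minuimus.pl *.pdf
445M    /home/user/Downloads/ebooks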

Across a large group of PDFs, size reduction of 10% to 20% appears common. The biggest files often shrink the most. Processing a collection of big PDFs often reclaims much more space than processing many small PDFs. Some PDF files show really dramatic space savings. That's because some applications create absolutely hideous PDFs. I call those files "PDF monsters." You can slay them with a single Minuimus command.

For example, while writing this article, Minuimus knocked an 85 megabyte PDF down to 32 meg. That's just 38% of its original size. The program slimmed several other monsters by 50%, recovering tens of megabytes. This is why I began this article by introducing a command to list your biggest PDF files. If Minuimus identifies a few monsters you can slay, you can reclaim major disk space for free.

Shrink PDFs with Minuimus

PDF files are useful and ubiquitous. But they often consume a good deal of storage space. Minuimus makes it easy to reduce PDF storage space by 10% to 20% without altering the data. Perhaps its biggest benefit is identifying and transforming malformed "PDF monsters" into smaller, more manageable files.


Tour Collabora Online, an open source alternative to Google Workspace

Heike Jurzik | Mon, 07/25/2022

Setting up your own cloud office is an important step towards digital sovereignty. Collabora Online is an open source office suite for the cloud or on-premises that protects users' privacy and allows them to keep full control of their data. The software is developed by Collabora Ltd, based in Cambridge, UK, with a team working around the world. Collabora Online is based on LibreOffice Technology and is primarily licensed under the Mozilla Public License 2.0.

There are two editions available from the supplier:

Collabora Online is an open source alternative to Microsoft 365 or Google Workspace. The LibreOffice-based online office suite supports all major document, spreadsheet, and presentation file formats and collaborative editing.

Get started with Collabora Online

Before I look at some of Collabora Online's features, I'll just clarify: Collabora Online is not stand-alone software. Instead, the online office suite integrates into an existing infrastructure and requires a cloud solution as a foundation, such as EGroupware, Nextcloud, ownCloud, or Pydio. The Collabora website also lists Alfresco, Microsoft SharePoint 2016 Server, Nuxeo, or Moodle as integration options.

Alternatively, there is a supported appliance for Collabora Online built on Univention Corporate Server (UCS), a Debian-based enterprise Linux distribution with an integrated identity and infrastructure management system. The Univention App Center offers two appliances, one with Nextcloud and one with ownCloud as the base.

If you don't want to install Collabora Online yourself, there is a list of partners—including hosting partners—who offer Collabora Online as a SaaS solution.

Standard file formats

Collabora Online works in any modern web browser; no plug-ins or add-ons are necessary. The cloud office includes a word processor (Writer), a spreadsheet (Calc), and presentation software (Impress). The icing on the cake is an application for creating vector graphics (Draw)—neither Google nor Microsoft offers a dedicated drawing program.

Image by: Heike Jurzik, CC BY-SA 4.0

Apart from the Open Document Format (ODT, ODS, ODP, etc.), Collabora Online can open and process various Microsoft Office formats (DOC/DOCX, XLS/XLSX, PPT/PPTX). Basically, the cloud office supports the same file formats as LibreOffice, including older standards and exotics. Exchanging documents with other users works well, provided everyone knows how to work with style sheets and the difference between tabs and spaces. This knowledge applies to importing and exporting files between different office suites—online or offline—as well.

Look and feel of Collabora Online

Starting with version 6.4, Collabora Online offers the Tabbed view as an alternative user interface to the classic Compact view. It's not only designed to provide easy access to all features but also helps Microsoft Office users get used to the interface.

In current Collabora versions, users can choose their preferred look and feel via the View menu. In previous versions, the administrator had to adjust the server-side settings by editing the user_interface section of the file loolwsd.xml (called coolwsd.xml after version 21.11).

Please note that changing the XML file affects the look and feel for all users. EGroupware, by contrast, lets users modify the toolbar individually. Users choose their preferred interface in the personal settings, under the File manager section, on the Collabora Online tab. There they can select Standard Toolbar from the drop-down menu and click Apply.

Image by: Heike Jurzik, CC BY-SA 4.0

Collabora Online vs Microsoft 365 vs Google Workspace

Here are some key collaboration features to see how Collabora Online performs compared to Microsoft 365 and Google Workspace:

Images by: Heike Jurzik, CC BY-SA 4.0

  • Version history: This depends on the integration, i.e., the underlying host system. Most platforms list existing versions with timestamps. For example, in EGroupware, you must open a document and select File/See history. Nextcloud and ownCloud offer a version history via the integrated file manager (click on the dots and select Details).
  • Manage changes: The menu Review not only contains a spellchecker and a thesaurus, but it also offers functions for commenting, recording, and managing changes. Users are given different colors when editing documents collaboratively, and the changes appear in near real-time. Added text appears highlighted and underlined, and deleted text is highlighted and crossed through.
  • Comments: Marking words, lines, and entire sections, leaving and replying to comments, marking threads as resolved and unresolved—it all works like a charm.
  • Shares: Sharing documents with other users (internally and externally) is straightforward. In integrations like Nextcloud and ownCloud, this all happens via the file manager and the sharing icon. The Sharing tab shows existing shares, offers to create public links, and sets up passwords and expiration dates. In EGroupware, you must right-click a document in the file manager and select the Share/Writable Collabora Online link. Alternatively, you can use the shared folders feature on all platforms to grant internal access to your documents.
Mobile access to your office suite

Mobile apps available for Android and iOS allow you to edit documents on your smartphone or tablet and even work offline. While all basic functions work fine, there is still room for improvement in formatting documents.

In any case, it's more convenient to work in the web browser on a mobile device. Thanks to the responsive layouts of the underlying platforms, this works well. Even collaborative and simultaneous editing of documents in Collabora Online runs smoothly. Admittedly, reading long documents is not much fun on a small display, and typing longer texts on the keyboard of a smartphone or tablet is not ideal either—but it works.

Control your office suite the open source way

There are no major differences between the cloud offices of the big players—at least for end users. All basic functions work fine, and there are no complaints about importing and exporting documents and dealing with different office formats. The biggest challenge might be installing and setting up the cloud office on a dedicated server with another cloud solution as a foundation. Regardless of your integration, you must take care of the installation and maintenance yourself.

Admittedly, this requires a little more effort than setting up an account with Google or Microsoft, but the data is stored on your own server or in a private cloud if you opt for a SaaS solution. Anyone who values digital sovereignty and wants to retain complete control of their data should consider it. Check out the online demo if you want to give Collabora Online a try.


How to use LibreOffice Writer templates

Jim Hall | Mon, 07/25/2022

A staple in any office software suite is the word processor. Whether your needs are small or large, from jotting down a note to writing a book, the word processor gets the job done. Most Linux distributions include the LibreOffice suite, and I use LibreOffice Writer as my word processor.

LibreOffice Writer provides lots of flexibility through its toolbar, keyboard shortcuts, and menus. But if you just want to start a document without too much hassle, you can use one of the pre-loaded templates. Here's how to use LibreOffice Writer templates to make your work easier.

Start a new document

LibreOffice Writer starts with a blank document. Most folks just begin writing, but this is also the place to create a new document from a template.

First, open the File menu, then select New and Templates. This option opens the Templates selection:

Image by: Jim Hall, CC BY-SA 4.0

The Templates selection dialog shows the different templates available on your system. The default LibreOffice Writer installation includes templates for different kinds of business letters, resumes, and other documents. You can browse the list or narrow the results with the filter options at the top of the dialog.

Image by: Jim Hall, CC BY-SA 4.0

Click on the template you want and click Open to start a new Writer document using this template. Some templates include boilerplate text or other sample material you can use to get started in your new document. For example, the Modern business letter template includes this "lorem ipsum" sample text:

Image by: Jim Hall, CC BY-SA 4.0

Other document templates just give you a starting point in an empty document with some nice-looking defaults. For example, the Modern document template uses a sans-serif font (such as Carlito on Linux systems) for the text body:

Image by: Jim Hall, CC BY-SA 4.0

Download a template

You can download a suitable document template from LibreOffice's website if you don't find the template you're looking for in the built-in choices. Navigate to LibreOffice Extensions to start working with the LibreOffice extensions and templates library.

Image by: Jim Hall, CC BY-SA 4.0

Enter a search term in the box to find the document template you need. For example, students might search for "APA" to find document templates already set up for APA style, a common style for academic papers.

Image by: Jim Hall, CC BY-SA 4.0

Wrap up

If you need to write a document, explore the LibreOffice templates to find one that works for you. Using templates means you spend less time setting up a document to look a certain way and instead get to work faster. Look for other document templates in the LibreOffice extensions and templates library that support your work.


The secret to making self-organized teams work in open source

Kelsea Zhang | Mon, 07/25/2022

Managers and executives are in the business of managing people and resources. Because many of our models for organization are built with the assumption that there's a manager involved, there's an expectation that there's some level of control over all the moving parts of the mechanisms we build. And for that reason, when you propose the idea of self-organizing teams to a manager, the response is often that it's just not possible. Surely everything will spin out of control. Surely the only way to maintain momentum and direction is through the guidance of a project manager and a technical manager.

It's not just management that gets confounded by the concept of self-organizing teams, though. It can give pause to team members, too. After all, in traditional organizations a developer only needs to deal with the technical manager. However, in a self-organizing team, a developer has to face at least seven or eight pairs of eyes (their team members). That can be a pretty overwhelming prospect.

Is it possible for self-organizing teams to thrive?

Take the example of a team I have coached. One day at a daily standup meeting, a technical manager stood behind the team without saying a word.

Leeroy: "I have no tasks to claim today."

Jenkins: "Why? There's one unclaimed task."

Leeroy: "That one? I'm not sure what its requirements are."

Jenkins: "Why didn't you mention you weren't clear in the backlog refinement meeting?"

Leeroy: "There were so many people at the meeting, it was noisy, and I wasn't able to hear clearly."

Jenkins: "So why is everyone clear about its requirement except you?"

Leeroy: "Well…"

Jenkins: "You've mentioned before that you could support testing in addition to coding, haven’t you? You can go do testing."

Leeroy: "Alright!"

See, in a self-organizing team, everyone is accountable for the goal of the whole team, and everyone sees each other. It used to be that the technical manager was in charge of such arrangements, but now things get done without the technical manager even having to talk. The technical manager can cut out the micromanagement and focus on what the team really needs from them.

Work visualization

Self-organization doesn't mean you let everything go. You have to visualize your work while self-organizing.

In traditional organizations, a team member's work was in a "black box". What happened depended on how frequently the technical manager asked about it, and how honestly the team member answered. However, in a self-organizing team, the team's work is crystal clear. Whether it's on a physical task board or in an electronic system, whoever wants to know the real-time status is able to find out, without interrupting anyone else's work.

With such transparency in your work, do you still worry about projects getting out of control? In a self-organized team, you're able to analyze all kinds of data to find problems, and then take action to correct them over time. With self-organization, control is not weakened but enhanced.

Servant leadership

A manager still may be left wondering, "If everyone self-organizes, who do I organize? What's the value of my position?"

Self-organization is not the absence of management, nor does it require no management. It's instead the change of management style.

Without the burden of micromanagement, technical managers can create a good environment for the growth of team members. This includes:

Technical experts

With years of experience and deep knowledge of technology, the technical manager can take on the role of a technical expert, participating in technical reviews and code walkthroughs, as well as in improving technical practices and the technical platform.

Take another example from a team I have coached:

One of the team members didn't adapt to new changes as easily as expected. He said that all of a sudden there was no one to assign him tasks, no one to do the detailed design. He felt that he had to start everything on his own.

To solve this problem, it was decided to form a technical advisory group consisting of the technical manager and several senior developers. However, the technical advisory group wasn't responsible for assigning tasks or for providing active supervision; it served only to provide support when team members needed advice or reviews. It allowed everyone to move away from the old working model and transition to a new one, improving their overall competence and self-confidence.

In fact, technical managers reportedly feel more relaxed in the new working model. In the past, their subordinates just performed mechanical implementation or even dealt with the tasks assigned by the technical manager in a perfunctory way. All the pressure was on the technical manager's side.

In a self-organizing team, the pressure is shared by the whole team. Everyone's sense of responsibility is enhanced. The team discusses a problem together to make the solution more comprehensive and reliable.

Metrics-driven improvement

You might imagine a scrum master reacting to this concept with some reservation: "We can't move too fast. Now that we're self-organizing, each task in each iteration has no one to specify when it must be started and when it must be finished, and it's really getting a little out of control".

Self-organization is the final goal, not the starting point. You can't achieve self-organization in a few seconds, and it's not something you declare once and then forget about.

Instead of traditionally micromanaging a team's work by giving instructions at every step, scrum masters let everyone take initiative. However, scrum masters need to refine statistics and analyze the data in every iteration to find where there's room for improvement. You might analyze whether everyone's schedule is reasonable, whether there are deviations from expected progress, and whether a different task split would be more conducive to cooperation. Through these analyses, scrum masters are able to help teams improve their self-management, as well as the team's self-organization.

Coaching

Coaching isn't about revealing answers directly, nor is it a command-and-control instruction set. The person being coached still has to solve the problem, while the scrum master guides and inspires them to recognize potential improvement points and avenues that may lead to an innovative solution.

[ Related read: What's the difference between a manager and a coach? ]

The truth about self-organization

Self-organization requires effort and it requires skill. Ultimately, it builds stronger teams of stronger individuals.

Here are the important principles to keep in mind:

  • Create a blame-free working environment
  • Help secure a budget for team building
  • Establish a proper mechanism to encourage everyone to read more and share more
  • Establish a proper valuation mechanism to help improve professional skills
  • Self-organization doesn't mean you can play it any way you want.
  • Self-organization is not the absence of management, but the change of management style.
  • Self-organization is not the absence of rules, but self-driven under commonly agreed rules.
  • Self-organization is not worry-free; it means team members at all levels put in more and work hard toward a common goal.
  • Self-organization doesn't mean that leaders can be hands-off bosses. It means they get closer to teams and provide more support.

This article is translated from Xu Dongwei's blog and is republished with permission.


Why Design Thinking is a box office hit for open source teams

Aoife Moloney | Sat, 07/23/2022

For the past several years, Design Thinking has been providing a way to enhance problem solving within teams, to ensure learning goals are met, and to increase team engagement. In our previous article, we discussed how we used movie posters to "pitch" our projects to stakeholders. In this article, we're going to review the many lessons we've learned from Design Thinking.

The box office reviews

"If you love what you do, then you’ll never work a day in your life" was the angle we were going for when we decided to break our self-imposed ceiling of needing to be formal and business-like to be able to both plan and deliver work. Our planning team was loving the reaction and engagement we were getting from our stakeholders and wider team from using the movie posters instead of documents. It was at this point that we began to realize how we created this arduous and quite frankly boring way of planning our team's work. We had imposed requirements on ourselves of needing to write up documents, transpose that information into a spreadsheet, and then (yes, a third time) into a slide deck. Product owners and planning teams have to do this four times a year, and the cycle was starting to feel heavy and burdensome to us.

It's crucial for a product owner or a manager (or whoever is planning a development team's work) to present information about what their team could potentially deliver to stakeholders so that the best decision on where to invest time and resources can be made. But that information had begun to feel like it was just text on a page or screen. Stakeholders weren't really buying into what our planning team was half-heartedly selling. If we didn't enjoy talking about our projects, how could we expect our stakeholders to enjoy discussing them? Our project planning team quickly realized we not only needed to introduce the movie posters for the wider teams' and stakeholders' best interests, but for our own sense of enjoyment in our roles, too. We needed to Kill Bill.

So with Uma Thurman leading the way in our new concept of using movie posters as cover stories, we found ourselves getting excited about creating new ones for each project request. We were going into each call looking forward to the few moments when the movie posters were unveiled, and those on the calls got a laugh about what was chosen. If they hadn't seen a particular movie before, they often shared that, which resulted in a side conversation where people shared movie trivia and began getting to know each other better. All of this from a software project brief in the style of a Kill Bill Vol. 2 movie poster. It was incredible to watch the interactions happen and relationships form. The conversations in those calls were freeform, unreserved, and extremely valuable to our planning team. Movie posters got everyone talking, which made it easier for participants on the call to ask for more detail about the project, too.

Our new and improved planning calls were a "box office smash" and the results spoke for themselves. Our quarterly planning calls went from being 90-plus minutes to under 45 minutes, with both team and stakeholders commenting on how included they felt in the decision making process. A lot of this came from developing and expanding on the requirements and insight gathering sessions we'd conducted in the run-up to our quarterly planning calls. This was the last remnant of our formal, stiff approach, but there was no denying how useful the information gained from those sessions could be to our projects. So we kept it simple again, and started to introduce the movie posters during what we coined the "insight sessions" instead. Our insight sessions were making a big difference by providing space for people to meet and converse about technical details, potential blockers, risks, and opportunities to leverage one piece of technology against another. This was all happening naturally. The ice had been broken with a reveal of a Ghostbusters poster or A Bug's Life. People turned on cameras and mics to get involved in the conversation about whether they had seen the movies, the remakes, or the original. It became easy for our planning team to guide the already flowing conversations back to the work at hand.

That's a wrap

We were now delivering valuable, enjoyable, and important calls that were generating a lot of success for our team. Our project delivery success rate hit, and stayed at, 100%. We have never, for three years now, had to halt or drop a project once it's been actioned from quarterly planning. We attribute this success to the high engagement levels on our calls, and we believe people are engaged because our calls are novel and fun while still being informative. We're capturing crucial information about each project before it even begins development, so our teams are able to hit the ground running.

Yes, planning needs to be considered and measured and thorough, but it's to no one's benefit when only one person is doing the thinking, talking, and measuring. It's important to create a culture of open communication and trust for product owners and project managers, so that they can plan work for their teams and bring real value to their stakeholders and end users. It's even more critical for development teams to have the space and tools to plan openly and collaboratively, both with each other and with adjacent teams.

You need the freedom to express your opinion at work. Why should planning processes try to box us into silos and acquiescence? That's a quick way to lose friends and gain robots.

Our team is of the firm belief that people deliver projects. We made some changes when we realized that we needed people, with all of their personalities and perspectives, to see what each project meant to them. Then we could determine which ones to work on. We needed the heart and spirit of Agile methodologies in a process that was open, collaborative, and fun for everyone.

This way of planning and working together is based on using visuals, and trying to enjoy the work we do and the time we spend together at work. It's a means to promote discussion and move those in planning roles from drivers to facilitators so that a unanimous decision can be reached through shared understanding of the work, and the potential trade-offs that comes with it. We chose movie posters, but the concept can be expanded to just about anything! This is our whole point. You don't have to limit your creativity to plan work. If you like music, make an album! Comics? Design one! Or maybe TV shows are more your speed. Take your pick, my friend—it's your cover story. Tell it in the way you enjoy the most, and enjoy not only the informative projects you will generate for your team, but also the sense of camaraderie everyone will feel from having the freedom and safety to grab a coffee or tea, join a call, and talk amongst friends about movies and technology.


How movie posters inspire engagement on our open source team

Stefan Mattejiet | Fri, 07/22/2022

For the past several years, Design Thinking has been providing a way to enhance problem solving within teams, to ensure learning goals are met, and to increase team engagement. In our previous article, we discussed the problem with processes. In this article, we're going to describe how we made work more engaging.

The problem with engagement

We reached a point in our planning where we were able to articulate information about the upcoming pieces of work our team could take on. We solicited feedback from various stakeholders representing different viewpoints, but we had no way to properly converse about the work with one another. It was all a one-way information exchange, with presentations and slides being spoken to a group of people, but not with a group of people. Worse still, it took hours to convey information.

The whole scenario felt forced and devoid of personality. Why would anyone want to take part in, let alone enjoy, any aspect of our project? It was just another meeting in a cascade of meetings, during which most people probably did something else while being talked at.

Something had to change, so we turned to Design Thinking to drum up some thoughts on how we could improve our process.

Design Thinking and Agile practices often advertise themselves as wonderful team and relationship building concepts. They can be, but as a team we had concentrated on the mechanics of Agile, and ended up with stiff, formal events that certainly got the job done, but curtailed any chance of creativity and natural conversation. After two quarterly planning cycles operating under our version of the "letter of the Agile Law" we knew there was a problem, and we wanted to fix it. Our next challenge was finding the right "fit".

We began by looking at our quarterly planning call itself. We asked ourselves whether this was really the best experience we could offer attendees of our quarterly planning events. These calls relied on participation and engagement from all parties to really make them a valuable tool to guide decisions. First and foremost, we wanted to get that without having to explain a massive shift in our way of doing things, or drowning people with explanations on our new methods. We decided to look through the lens of Design Thinking by putting the needs of our end users (the attendees of our calls) first. We asked ourselves: If we were in their position, would we come away from those calls feeling like we were involved in the decision making, or even that we enjoyed attending them? In short, the answer was no, we didn't believe we would.

[ Related read: 7 tips for giving and receiving better feedback ]

It was time to ask our attendees for feedback on how they felt the sessions were being delivered and what value they derived from them. As is often the case when you ask questions like that, we didn't get much feedback, and the feedback we did get was vague. So we decided to make small incremental changes to our planning calls in an effort to increase engagement and to enable meaningful discussions. We would then garner feedback on these tweaks, and gauge the appetite for larger changes from our attendees as we progressed.

Incremental changes

First, we decided to change how we were demonstrating resourcing limitations per project to our stakeholders by reminding them that these projects all required real people, and weren't just numbers. Now we really did keep it simple here. In fact, all we did was add in some stick figures (dabbing, of course) that represented each team member so that people on the planning call could see exactly how many people each project needed. It was an instant hit.

Our stakeholders quickly appreciated something a little different to look at and it generated friendly, light conversation between participants. People were trying to guess which stick figure looked like who on the team (spoiler: they were all the same). After seeing how something so simple could instantly spark conversation, and even add a little fun into our calls, we knew we were on the right track. Time to make some real changes.

A picture is worth a thousand words, and at this point all we'd used were some stick figures. However, you can see how making a couple of small, simple changes can generate a positive impact.

Image by:

(Leigh Griffin, CC BY-SA 4.0)

You take a functional, but also kind of dull spreadsheet, add a bit of color, graphics, and a sprinkle of actual human language, and you end up with the kind of meeting that stands out amongst all of the others in people's calendars. This one is a little special. This one has a little bit more to it. Like the latest blockbuster movie that has people talking about it for days, this meeting ensures it has your full attention, and has you looking forward to seeing what happens in the sequels!

Image by:

(Leigh Griffin, CC BY-SA 4.0)

Movie posters

So the stick figures worked, uncomplicated language worked, the visuals worked. How could we build on this for an even better experience? The answer was closer at hand than we'd expected. One of our team members had some success using techniques and practices from the fantastic resource that is the Open Practice Library. We mentioned this free, community-built online resource in our first article. It was here we found the concept of News Headlines, also known as Cover Stories, in which a project is broken down into key points and written up in the style of a newspaper or magazine.

In one of the many brainstorming sessions we had reviewing the feedback, we landed on the idea of using movie posters for our cover stories. OK, truth be told we might have also been talking about the Oscars at the time, but it was a lightbulb moment and it instantly felt right. The creativity started flowing, and we began forming our project's movie posters for their upcoming "premiere" in the next quarterly planning call. The result? Uma Thurman in Kill Bill Vol. 2 capturing all you needed to know about upgrading two applications that run services based on status messages sent through other applications hooked into a larger database, critical for troubleshooting outages and performance issues. Obviously!

Image by:

(Leigh Griffin, CC BY-SA 4.0)

Here was a way to be creative in how our potential projects could be discussed and shared with our stakeholders, while still capturing all the key pieces of information. They still got the details about the work they would need to do, so they could form an opinion and reach a decision on what work holds the most value, and what deserves their time and resources. However, we wanted our team to have its own version of these cover stories, something that might resonate more with the people in our team.

In our next article, we'll talk about all the lessons we learned along the way.

Using Design Thinking, be creative in how potential projects can be discussed and shared with stakeholders, while still capturing all the key pieces of information.

Image by:

Image by Mapbox Uncharted ERG, CC-BY 3.0 US

Community management Careers Agile What to read next Put Design Thinking into practice with the Open Practice Library Build Design Thinking into your team processes This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 91 points Waterford, Ireland

Product Owner on the Community Platform Engineering team at Red Hat from Waterford, Ireland and in my spare time I'm a retired showjumper but full time 'groom' for my son who has followed in my hoof-prints - ah, foot-prints!

| Follow aoifemoloney4 Open Enthusiast Author Contributor Club 489 points Waterford, Ireland

Leigh is an Engineering Manager and passionate about process improvement and researching new ways to approach problems. He is an accredited ICF Coach and has led several Agile transformations within Red Hat.

| Follow leighgriffin Open Minded Author Contributor Club Register or Login to post a comment.

Create conditional pipelines with CEL

Fri, 07/22/2022 - 15:00
Create conditional pipelines with CEL Camilla Conte Fri, 07/22/2022 - 03:00

You just followed a guide to start your Tekton pipeline (or task) when a merge request is created or updated on your GitLab project. So you configured GitLab to send merge request events as webhooks. And you deployed some Tekton components:

  • EventListener: receives webhooks from GitLab
  • Trigger: starts your Pipeline every time the EventListener receives a new webhook from GitLab
  • Pipeline: fetches the source code from GitLab and builds it

Then you notice that any event in your merge request (a new comment, a tag change) triggers the pipeline. That's not the behavior you desire. You don't need to build a comment or a tag, after all. You only want the pipeline to run when there is actual new code to build. Here's how I use Tekton's CEL Interceptor to create conditionals for my pipelines.

Have your trigger ready

I expect you have a trigger already defined. It's probably something similar to the snippet below.

The trigger's interceptor rejects anything that's not coming from a merge request. Still, the interceptor is not able to differentiate between code and non-code updates (like new comments).

apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
  name: webhook-listener-trigger
spec:
  interceptors:
   # reject any payload that's not a merge request webhook
    - name: "filter-event-types"
      ref:
        name: "gitlab"
        kind: ClusterInterceptor
      params:
        - name: eventTypes
          value:
           - "Merge Request Hook"
  bindings:
    - ref: binding
  template:
    ref: template

Add a CEL interceptor

Here comes the cel interceptor. This interceptor filters the webhook payload using the CEL expression language. If the filter expression evaluates to true, the pipeline starts.

Here I'm checking whether the object_attributes.oldrev field exists in the JSON body of the webhook payload. If object_attributes.oldrev exists, then this event is about a code change. If there was no code change, there's no previous revision (oldrev) to refer to.

spec:
  interceptors:
    - name: "allow-code-changes-only"
      ref:
        name: cel
        kind: ClusterInterceptor
      params:
        - name: filter
          value: >
           has(body.object_attributes.oldrev)

Add the new interceptor to your trigger. Now your trigger looks like this:

apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
  name: gitlab-listener-trigger
spec:
  interceptors:
    - name: "verify-gitlab-payload"
      ref:
        name: "gitlab"
        kind: ClusterInterceptor
      params:
        - name: eventTypes
          value:
           - "Merge Request Hook"
    - name: "allow-code-changes-only"
      ref:
        name: "cel"
        kind: ClusterInterceptor
      params:
        - name: filter
          value: >
           has(body.object_attributes.oldrev)
  bindings:
    - ref: binding
  template:
    ref: template

Deploy this new version of the trigger and enjoy the powers of automation. From now on, your pipeline only starts if there is some new code to build.
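
If you keep the trigger in a plain manifest file, redeploying is a single command (the filename here is just an example of wherever you saved the trigger):

$ kubectl apply -f trigger.yaml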

Tips

There are no limits to the conditions you can set in a CEL filter.

You may check that the merge request is currently open:

body.object_attributes.state in ['opened']

You can make sure the contributor finished their work on the code:

body.object_attributes.work_in_progress == false

You just have to concatenate multiple conditions correctly:

- name: filter
  value: >
   has(body.object_attributes.oldrev) &&
    body.object_attributes.state in ['opened'] &&
    body.object_attributes.work_in_progress == false

Check out the merge request events documentation to get inspired to write your own conditions.

You may need the CEL language definition to know how to translate your thoughts into code.

To evaluate types other than strings, you want to know the mapping between JSON and CEL types.

Control when automated builds happen in Tekton with CEL.

Kubernetes CI/CD What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

How I use the Linux fmt command to format text

Thu, 07/21/2022 - 15:00
How I use the Linux fmt command to format text Jim Hall Thu, 07/21/2022 - 03:00

When I write documentation for a project, I often write the Readme file and Install instructions in plain text. I don't need to use markup languages like HTML or Markdown to describe what a project does or how to compile it. But maintaining this documentation can be a pain. If I need to update the middle of a sentence in my Readme text file, I need to reformat the text so I don't end up with a really long or short line in the middle of my other text that's otherwise formatted to 75 columns. Some editors include a feature that will automatically reformat text to fill paragraphs, but not all do. That's where the Linux fmt command comes to the rescue.

Format text with Linux fmt

The fmt command is a trivial text formatter; it collects words and fills paragraphs, but doesn't apply any other text styling such as italics or bold. It's all just plain text. With fmt, you can quickly adjust text so it's easier to read. Let's say I start with this familiar sample text:

$ cat trek.txt Space: the final frontier. These are the voyages of the starship Enterprise. Its continuing mission: to explore strange new worlds. To seek out new life and new civilizations. To boldly go where no one has gone before!

In this sample file, lines have different lengths, and they are broken up in an odd way. You might have similar odd line breaks if you make lots of changes to a plain text file. To reformat this text, you can use the fmt command to fill the lines of the paragraph to a uniform length:

$ fmt trek.txt Space: the final frontier. These are the voyages of the starship Enterprise. Its continuing mission: to explore strange new worlds. To seek out new life and new civilizations. To boldly go where no one has gone before!

By default, fmt will format text to 75 columns wide, but you can change that with the -w or --width option:

$ fmt -w 60 trek.txt Space: the final frontier. These are the voyages of the starship Enterprise. Its continuing mission: to explore strange new worlds. To seek out new life and new civilizations. To boldly go where no one has gone before!

Format email replies with Linux fmt

I participate in an email list where we prefer plain text emails. That makes archiving emails on the list server much easier. But the reality is not everyone sends emails in plain text. And sometimes, when I reply to those emails as plain text, my email client puts an entire paragraph on one line. That makes it difficult to "quote" a reply in an email.

Here's a simple example. When I'm replying to an email as plain text, my email client "quotes" the other person's email by adding a > character before each line. For a short message, that might look like this:

> I like the idea of the interim development builds.

A long line that doesn't get "wrapped" properly will not display correctly in my plain text email reply, because it will be just one long line with a > character at the front, like this:

> I like the idea of the interim development builds. This should be a great way to test new changes that everyone can experiment with.

To fix this, I bring up a terminal and copy and paste the quoted text into a new file. Then I use the -p or --prefix option to tell fmt what character to use as a "prefix" before each line.

$ cat > email.txt
> I like the idea of the interim development builds. This should be a great way to test new changes that everyone can experiment with.
^D
$ fmt -p '>' email.txt
> I like the idea of the interim development builds. This should be a
> great way to test new changes that everyone can experiment with.

The fmt command is a very simple text formatter, but it can do lots of useful things that help in writing and updating documentation in plain text. Explore the other options such as -c or --crown-margin to match the indentation of the first two lines of a paragraph, such as bullet lists. Also try -t or --tagged-paragraph to preserve the indentation of the first line in a paragraph, like indented paragraphs. And the -u or --uniform-spacing option to use one space between words and two spaces between sentences.
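
For instance, here is a quick look at the --uniform-spacing option, using a small sample file of my own (not from the fmt documentation):

$ cat spacing.txt
This  line   has  extra   spaces.  This is   the second sentence.
$ fmt -u spacing.txt
This line has extra spaces.  This is the second sentence.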

The fmt command is a trivial text formatter. Here's how I use it to format text and email replies.

Image by:

Original photo by Bob Doran via Flickr. Modified by Rikki Endsley. CC BY-SA 2.0.

Linux What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Build Design Thinking into your team processes

Thu, 07/21/2022 - 15:00
Build Design Thinking into your team processes Leigh Griffin Thu, 07/21/2022 - 03:00

Teams require some kind of process to coordinate work and ensure that the output of many focuses on a singular goal. Within the software industry, this has taken the form of teams following a methodology such as Agile. In industries from pharmaceutical to manufacturing, Lean is the philosophy followed to ensure that a process is adhered to. The difficulty with a process is that it's prescriptive. It's designed in such a way that you stay on the tracks that it provides. If you do, you achieve the prescribed benefits and ultimately a form of success. That doesn't give much room for paradigm-level shifts in work behavior.

In early 2020, one such shift hit the global workplace in the form of COVID-19. Many companies self-enforced a remote strategy as people were forced to adapt to working from home. Years later, that flexibility is here to stay. This disruptor has brought new challenges in how we facilitate our process, and the ceremonies associated with it. The engagement paradigm of trying to bring people together on a problem, and to keep them attentive, productive, and happy is challenging when your interaction medium is limited to the screenspace a person is using. Engagement through a camera lens masks the environmental conditions that could also be hampering participation, from suboptimal work space conditions to barking dogs, crying kids, or noisy neighbors. When you throw in the natural distractions that working on a computer can bring (instant messaging, emails, social media, cat memes, and more), the conditions for collaboration, innovation, and problem solving tend to suffer. Combining all of those elements gives you a perfect storm, especially when you ask a human to concentrate on just one topic for a prolonged period of time. And that period of engagement extends beyond ergonomic best practices for working at computers, meaning this becomes not just a mental challenge, but a physical one.

[ Also read: 5 ways to better leverage remote teams ]

Team health covers a range of concerns, from the happiness of team members to their social interactions with each other. Team-centric work is fundamentally based on a social contract of respecting each other, operating as a team, and looking out for one another. That's underpinned by an attempt to generate an environment where people enjoy turning up to work, or at a minimum don't dread it when Monday morning comes around. If you can achieve that, and tap into what people are passionate about, then you have the means to move motivated teams towards goals.

Fun at work can often feel forced, with team-building activities, but it can help to take the engagement paradigm that is Design Thinking, and bake it into your process. This allows you not just to get through the key ceremonies and events that your processes demand, but to do so in a manner that engages people, keeps them attentive in a remote setting, and lets them enjoy turning up to these sessions. You can build sessions where your participants can be the stars of the show.

In our next article, we'll talk about how movie posters helped us stay organized and engaged.

Fun at work can often feel forced, with team-building activities, but it can help to take the engagement paradigm that is Design Thinking, and bake it into your process.

Image by:

opensource.com

Community management Agile What to read next Put Design Thinking into practice with the Open Practice Library This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 61 points Waterford, Ireland

Product Owner on the Community Platform Engineering team at Red Hat from Waterford, Ireland and in my spare time I'm a retired showjumper but full time 'groom' for my son who has followed in my hoof-prints - ah, foot-prints!

| Follow aoifemoloney4 Open Enthusiast Author 61 points

Experienced People and Project Manager with over 20 years of working in the information technology and services industries.

Open Enthusiast Author Register or Login to post a comment.

Put Design Thinking into practice with the Open Practice Library

Wed, 07/20/2022 - 15:00
Put Design Thinking into practice with the Open Practice Library Leigh Griffin Wed, 07/20/2022 - 03:00

Design Thinking has been getting a lot of attention over the past few years as a way to enhance your problem solving, ensure learning goals are met, and increase team engagement. As a concept, it's all about problem solving, but it's designed to break down existing approaches and norms. Over the past few decades, teams have developed standardized ways of approaching problems. Agile teams, for example, take retrospectives as a means to both troubleshoot and brainstorm new ways of working. Lean has evolved a set of root cause analysis tools and techniques for getting to the bottom of problems.

A problem solving and brainstorming session needs the freedom to shift perspective away from current thinking. That thinking can be hampered by familiarity with tooling, the team's affinity for certain approaches or tools, an "evangelistic" approach to processes, and the mantra of "this is how we have always done it," which is rooted in people's innate resistance to change. Design Thinking is an approach to allow people to see beyond basic human tendencies. It allows people to awaken to alternative approaches that can help uncover unmet and unspoken needs, and to bring new perspectives to the challenges at hand.

[ Also read: Build community engagement by serving up Lean Coffee ]

As humans, we process and learn in three key ways, with each person being attuned to a different learning style.

  • Aurally: listening to people speaking and engaging in discussions
  • Visually: reading and interpreting drawings and presentations
  • Kinesthetically: tactile learning, being hands-on with a problem and inspecting it using your whole body

You might assume that everyone has attained the knowledge you're trying to deliver, but if you use tools that limit or exclude any of the three learning styles, you cannot be sure that the learning objectives have been met. The Design Thinking approach helps you consider all three learning styles, and helps you move the team forward in its learning journey, faster.

Open Practice Library

With such obvious benefits, Design Thinking is attracting more practitioners. People are using it with their teams, and a community of practice has formed around it. The Open Practice Library (OPL) is a curated set of community-contributed practices that lets you introduce Design Thinking concepts to your teams, bring more powerful learning and understanding to your project planning, and, more importantly, have some fun!

[ Related read: 7 ways anyone can contribute to Open Practice Library ]

In our next article, we'll discuss the problem with processes, and how Design Thinking can help overcome it.

The Open Practice Library includes a curated set of community-contributed practices for teams to implement Design Thinking concepts.

Image by:

Opensource.com

Community management Careers Agile What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 31 points

Experienced People and Project Manager with over 20 years of working in the information technology and services industries.

Open Enthusiast Author 31 points Waterford, Ireland

Product Owner on the Community Platform Engineering team at Red Hat from Waterford, Ireland and in my spare time I'm a retired showjumper but full time 'groom' for my son who has followed in my hoof-prints - ah, foot-prints!

| Follow aoifemoloney4 Open Enthusiast Author Register or Login to post a comment.

How much JavaScript do you need to know before learning ReactJS?

Wed, 07/20/2022 - 15:00
How much JavaScript do you need to know before learning ReactJS? Sachin Samal Wed, 07/20/2022 - 03:00

React is a UI framework built on top of HTML, CSS, and JavaScript, where JavaScript (JS) is responsible for most of the logic. If you have knowledge of variables, data types, array functions, callbacks, scopes, string methods, loops, and other JS DOM manipulation-related topics, these will tremendously speed up the pace of learning ReactJS.

Your grasp of modern JavaScript will dictate how quickly you can get going with ReactJS. You don't need to be a JavaScript expert to start your ReactJS journey, but just as knowledge of ingredients is a must for any chef hoping to master cooking, the same is true for learning ReactJS. It's a modern JavaScript UI library, so you need to know some JavaScript. The question is, how much?

Example explanation

Suppose I'm asked to write an essay about a "cow" in English, but I know nothing about the language. In this case, to successfully complete the task, I need to know not only about the topic but also the specified language.

Assuming that I acquire some knowledge about the topic (a cow), how can I calculate the amount of English I need to know to be able to write about the prescribed topic? What if I have to write an essay on some other complex topics in English?

It's difficult to figure that out, isn't it? I don't know exactly what I'm going to write about the topic; it could be anything. So to get started, I have to have a proper knowledge of the English language, but it doesn't end there.

Extreme reality

The same is true for the amount of JavaScript required before getting started with ReactJS. According to my example scenario, ReactJS is the topic "cow" while JavaScript is the English language. It's important to have a strong grasp of JavaScript to be successful in ReactJS. One is very unlikely to master ReactJS professionally without having the proper foundation of JavaScript. No matter how much knowledge I might have about the topic, I won’t be able to express myself properly if I don't know the fundamentals of the language.

How much is enough?

In my experience, when you start your ReactJS journey, you should already be familiar with:

  • variables
  • data types
  • string methods
  • loops
  • conditionals

You should be familiar with these specifically in JavaScript. But these are just the bare minimum prerequisites. When you try to create a simple React app, you'll inevitably need to handle events. So the concepts of normal functions, function expressions, statements, arrow functions, the difference between an arrow function and a regular function, and the lexical scoping of the this keyword in both types of function are really important.
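
As a minimal sketch of that last point (my own example, not tied to React), here is how the two kinds of functions treat this differently:

// A regular function gets its own "this", determined by how it is called.
const counter = {
  count: 0,
  increment: function () {
    this.count += 1; // "this" is the counter object
  },
  incrementArrow: () => {
    // An arrow function has no "this" of its own; it uses the "this" of the
    // enclosing scope, so this.count here would NOT refer to counter.count.
  },
};

counter.increment();
console.log(counter.count); // 1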

But the question is, what if I have to create a complex app using ReactJS?

Get inspired

Handling events, spread operators, destructuring, named imports, and default imports in JavaScript will help you understand the working mechanism of React code.

Most importantly, you must understand the core concepts behind JavaScript itself. JavaScript is asynchronous by design. Don't be surprised when code appearing at the bottom of a file executes before code at the top of the file does. Constructs like promises, callbacks, async-await, map, filter, and reduce are among the most common methods and concepts in ReactJS, especially when developing complex applications.
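
Here is a small, self-contained sketch of those constructs working together (loadScores is a made-up stand-in for any promise-returning call, such as a fetch):

// A fake asynchronous data source.
const loadScores = async () => [
  { name: 'Ada', score: 90, active: true },
  { name: 'Sam', score: 40, active: false },
];

async function summarize() {
  const players = await loadScores();                         // async/await
  const names = players.map((p) => p.name);                   // map
  const active = players.filter((p) => p.active);             // filter
  const total = players.reduce((sum, p) => sum + p.score, 0); // reduce
  console.log(names, active.length, total); // [ 'Ada', 'Sam' ] 1 130
}

summarize();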

The main idea is to be good in JavaScript so you can reduce the complexity of your ReactJS journey.

Getting good

It's easy for me to say what you need to know, but it's something else entirely for you to go learn it. Practicing a lot of JavaScript is essential, but you might be surprised that I don't think it means you necessarily have to wait until you master it. There are certain concepts that are important beforehand, but there's a lot you can learn as you go. Part of practicing is learning, so you can get started with JavaScript and even with some of the basics of React, as long as you move at a comfortable pace and understand that doing your "homework" is a requirement before you attempt anything serious.

Get started with JavaScript now

Don't bother waiting until you cover all aspects of JavaScript. That's never going to happen. If you do that, you'll get trapped in that forever-loop of learning JavaScript. And you all know how constantly evolving and rapidly changing the tech field is. If you want to start learning JavaScript, try reading Mandy Kendall's introductory article Learn JavaScript by writing a guessing game. It's a great way to get started quickly, and once you see what's possible I think you're likely to find it difficult to stop.

The main idea is to be good in JavaScript so you can reduce the complexity of your ReactJS journey.

Image by:

kris krüg

JavaScript Programming What to read next Code your first React UI app This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Turn your Python script into a command-line application

Tue, 07/19/2022 - 15:00
Turn your Python script into a command-line application Mark Meyer Tue, 07/19/2022 - 03:00

I've written, used, and seen a lot of loose scripts in my career. They start with someone who needs to semi-automate some task. After a while, they grow. They can change hands many times in their lifetime. I've often wished for a more command-line tool-like feeling in those scripts. But how hard is it really to bump the quality level from a one-off script to a proper tool? It turns out it's not that hard in Python.

Scaffolding

In this article, I start with a little Python snippet. I'll drop it into a scaffold module, and extend it with click to accept command-line arguments.

#!/usr/bin/python

from glob import glob
from os.path import join, basename
from shutil import move
from datetime import datetime
from os import link, unlink

LATEST = 'latest.txt'
ARCHIVE = '/Users/mark/archive'
INCOMING = '/Users/mark/incoming'
TPATTERN = '%Y-%m-%d'

def transmogrify_filename(fname):
    bname = basename(fname)
    ts = datetime.now().strftime(TPATTERN)
    return '-'.join([ts, bname])

def set_current_latest(file):
    latest = join(ARCHIVE, LATEST)
    try:
        unlink(latest)
    except:
        pass
    link(file, latest)

def rotate_file(source):
    target = join(ARCHIVE, transmogrify_filename(source))
    move(source, target)
    set_current_latest(target)

def rotoscope():
    file_no = 0
    folder = join(INCOMING, '*.txt')
    print(f'Looking in {INCOMING}')
    for file in glob(folder):
        rotate_file(file)
        print(f'Rotated: {file}')
        file_no = file_no + 1
    print(f'Total files rotated: {file_no}')

if __name__ == '__main__':
    print('This is rotoscope 0.4.1. Bleep, bloop.')
    rotoscope()

All non-inline code samples in this article refer to a specific version of the code you can find at https://codeberg.org/ofosos/rotoscope. Every commit in that repo describes some meaningful step in the course of this how-to article.

This snippet does a few things:

  • Checks whether there are any text files in the path specified in INCOMING
  • For each text file it finds, creates a new filename with the current timestamp and moves the file to ARCHIVE
  • Deletes the current ARCHIVE/latest.txt link and creates a new link pointing to the file just added

As an example, this is pretty small, but it gives you an idea of the process.

Create an application with pyscaffold

First, you need to install the pyscaffold, click, and tox Python modules.

$ python3 -m pip install pyscaffold click tox

After installing scaffold, change to the directory where the example rotoscope project resides, and then execute the following command:

$ putup rotoscope -p rotoscope \
--force --no-skeleton -n rotoscope \
-d 'Move some files around.' -l GLWT \
-u http://codeberg.org/ofosos/rotoscope \
--save-config --pre-commit --markdown

Pyscaffold overwrote my README.md, so restore it from Git:

$ git checkout README.md

Pyscaffold set up a complete sample project in the docs hierarchy, which I won't cover here but feel free to explore it later. Besides that, Pyscaffold can also provide you with continuous integration (CI) templates in your project.

  • packaging: Your project is now PyPi enabled, so you can upload it to a repo and install it from there.
  • documentation: Your project now has a complete docs folder hierarchy, based on Sphinx and including a readthedocs.org builder.
  • testing: Your project can now be used with the tox test runner, and the tests folder contains all necessary boilerplate to run pytest-based tests.
  • dependency management: Both the packaging and test infrastructure need a way to manage dependencies. The setup.cfg file solves this and includes dependencies.
  • pre-commit hook: This includes the Python source formatter "black" and the "flake8" Python style checker.

Take a look into the tests folder and run the tox command in the project directory. It immediately outputs an error. The packaging infrastructure cannot find your package.

Now create a Git tag (for instance, v0.2) that the tool recognizes as an installable version. Before committing the changes, take a pass through the auto-generated setup.cfg and edit it to suit your use case. For this example, you might adapt the LICENSE and project descriptions. Add those changes to Git's staging area, then commit them with the pre-commit hook disabled. Otherwise, you'd run into an error because flake8, the Python style checker, complains about lousy style.

$ PRE_COMMIT_ALLOW_NO_CONFIG=1 git commit

It would also be nice to have an entry point into this script that users can call from the command line. Right now, you can only run it by finding the .py file and executing it manually. Fortunately, Python's packaging infrastructure has a nice "canned" way to make this an easy configuration change. Add the following to the options.entry_points section of your setup.cfg:

console_scripts =
    roto = rotoscope.rotoscope:rotoscope

This change creates a shell command called roto, which you can use to call the rotoscope script. Once you install rotoscope with pip, you can use the roto command.
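
For example, an editable install is enough to try it out (the output below assumes the incoming directory is currently empty):

$ python3 -m pip install -e .
$ roto
Looking in /Users/mark/incoming
Total files rotated: 0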

That's that. You have all the packaging, testing, and documentation setup for free from Pyscaffold. You also got a pre-commit hook to keep you (mostly) honest.

CLI tooling

Right now, there are values hardcoded into the script that would be more convenient as command arguments. The INCOMING constant, for instance, would be better as a command-line parameter.

First, import the click library. Annotate the rotoscope() method with the command annotation provided by Click, and add an argument that Click passes to the rotoscope function. Click provides a set of validators, so add a path validator to the argument. Click also conveniently uses the function's docstring as part of the command-line documentation. So you end up with the following method signature:

@click.command()
@click.argument('incoming', type=click.Path(exists=True))
def rotoscope(incoming):
    """
    Rotoscope 0.4 - Bleep, blooop.
    Simple sample that move files.
    """

The main section calls rotoscope(), which is now a Click command. It doesn't need to pass any parameters.

Options can get filled in automatically by environment variables, too. For instance, change the ARCHIVE constant to an option:

@click.option('archive', '--archive', default='/Users/mark/archive', envvar='ROTO_ARCHIVE', type=click.Path())

The same path validator applies again. This time, let Click fill in the environment variable, defaulting to the old constant's value if nothing's provided by the environment.

Click can do many more things. It has colored console output, prompts, and subcommands that allow you to build complex CLI tools. Browsing through the Click documentation reveals more of its power.
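
As an illustration of subcommands (a hypothetical sketch, not part of the rotoscope project), a Click command group looks like this:

import click

@click.group()
def cli():
    """A hypothetical top-level command."""

@cli.command()
@click.option("--verbose", is_flag=True, help="Print extra detail.")
def status(verbose):
    """Invoked as `cli status`."""
    click.echo("status: verbose" if verbose else "status: ok")

if __name__ == "__main__":
    cli()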

Now add some tests to the mix.

Testing

Click has some advice on running end-to-end tests using the CLI runner. You can use this to implement a complete test (in the sample project, the tests are in the tests folder.)

The test sits in a method of a testing class. Most of the conventions follow what I'd use in any other Python project very closely, but there are a few specifics because rotoscope uses click. In the test method, I create a CliRunner. The test uses this to run the command in an isolated file system. Then the test creates incoming and archive directories and a dummy incoming/test.txt file within the isolated file system. Then it invokes the CliRunner just like you'd invoke a command-line application. After the run completes, the test examines the isolated filesystem and verifies that incoming is empty, and that archive contains two files (the latest link and the archived file.)

from os import listdir, mkdir
from click.testing import CliRunner
from rotoscope.rotoscope import rotoscope

class TestRotoscope:
    def test_roto_good(self, tmp_path):
        runner = CliRunner()

        with runner.isolated_filesystem(temp_dir=tmp_path) as td:
            mkdir("incoming")
            mkdir("archive")
            with open("incoming/test.txt", "w") as f:
                f.write("hello")

            result = runner.invoke(rotoscope, ["incoming", "--archive", "archive"])
            assert result.exit_code == 0

            print(td)
            incoming_f = listdir("incoming")
            archive_f = listdir("archive")
            assert len(incoming_f) == 0
            assert len(archive_f) == 2

To execute these tests on my console, run tox in the project's root directory.

While implementing the tests, I found a bug in my code. When I did the Click conversion, rotoscope just unlinked the latest file, whether it was present or not. The tests started with a fresh file system (not my home folder) and promptly failed. I can prevent this kind of bug by running in a nicely isolated and automated test environment. That'll avoid a lot of "it works on my machine" problems.

Scaffolding and modules

This completes our tour of advanced things you can do with scaffold and click. There are many possibilities to level up a casual Python script, and make even your simple utilities into full-fledged CLI tools.

With scaffold and click in Python, you can level up even a simple utility into a full-fledged command-line interface tool.

Image by:

Opensource.com

Python Programming What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Code your first React UI app

Tue, 07/19/2022 - 15:00
Code your first React UI app Jessica Cherry Tue, 07/19/2022 - 03:00

Who wants to create their first UI app? I do, and if you're reading this article, I assume you do, too. In today's example, I'll use some JavaScript and the Express API I demonstrated in my previous article. First, let me explain some of the tech you're about to use.

What is React?

React is a JavaScript library for building a user interface (UI). However, you need more than just the UI library for a functional UI. Here are the important components of the JavaScript web app you're about to create:

  • npx: This package is for executing npm packages.
  • axios: A promise-based HTTP client for the browser and Node.js. A promise represents a value that will eventually be provided, for example by an API endpoint.
  • http-proxy-middleware: Configures proxy middleware with ease. A proxy is middleware that helps deal with messaging back and forth from the application endpoint to the requester.
Preconfiguration

If you haven't already, look at my previous article. You'll use that code as part of this React app. In this case, you'll add a service to use as part of the app. As part of this application, you have to use the npx package to create the new folder structure and application:

$ npx create-react-app count-ui
npx: installed 67 in 7.295s

Creating a new React app in /Users/cherrybomb/count-ui.

Installing packages. This might take a couple of minutes.
Installing react, react-dom, and react-scripts with cra-template...
[...]
Installing template dependencies using npm...
+ @testing-library/jest-dom@5.16.4
+ @testing-library/user-event@13.5.0
+ web-vitals@2.1.4
+ @testing-library/react@13.3.0
added 52 packages from 109 contributors in 9.858s
[...]
Success! Created count-ui at /Users/cherrybomb/count-ui
[...]
We suggest that you begin by typing:

  cd count-ui
  npm start

As you can see, the npx command has created a new template with a folder structure, an awesome README file, and a Git repository. Here's the structure:

$ cd count-ui/
/Users/cherrybomb/count-ui

$ ls -A -1
.git
.gitignore
README.md
node_modules
package-lock.json
package.json
public
src

This process also initialized the Git repo and set the branch to master, which is a pretty cool trick. Next, install the npm packages:

$ npm install axios http-proxy-middleware
[...]
npm WARN @apideck/better-ajv-errors@0.3.4 requires a peer of ajv@>=8 but none is installed. You must install peer dependencies yourself.
+ http-proxy-middleware@2.0.6
+ axios@0.27.2
added 2 packages from 2 contributors, updated 1 package and audited 1449 packages in 5.886s

Now that those are set up, add your services directory and main.js file:

$ mkdir src/services
src/services

$ touch src/services/main.js

Preconfiguration is now complete, so you can start coding.

Code a UI from start to finish

Now that you have everything preconfigured, you can put together the service for your application. Add the following code to the main.js file:

import axios from 'axios';
const baseURL = 'http://localhost:5001/api';
export const get = async () => await axios.get(`${baseURL}/`);
export const increment = async () => await axios.post(`${baseURL}/`);
export default {
    get,
    increment
}

This process creates a JavaScript file that interacts with the API you created in my previous article.

Set up the proxy

Next, you must set up a proxy middleware by creating a new file in the src directory.

$ touch src/setupProxy.js

Configure the proxy with this code in setupProxy.js:

const { createProxyMiddleware } = require('http-proxy-middleware');
module.exports = function(app) {
  app.use(
    '/api',
    createProxyMiddleware({
      target: 'http://localhost:5000',
      changeOrigin: true,
    })
  );
};

In this code, the app.use function specifies the /api path to use when connecting to the existing API project. However, nothing defines api in the UI code itself. This is where a proxy comes in. With a proxy, you can define the api route at the proxy level so it interacts with your Express API. This middleware routes requests between the two applications because the UI and API use the same host with different ports, so they require a proxy to transfer internal traffic.

JavaScript imports

In your base src directory, you see that the original template created an App.js, and you must add main.js (in the services directory) to your imports in the App.js file. You also need to import React on the very first line, as it is external to the project:

import React from 'react'
import main from './services/main';

Add the rendering function

Now that you have your imports, you must add a render function. In the App() function of App.js, add the first section of definitions for react and count before the return section. This section gets the count from the API and puts it on the screen. In the return function, a button provides the ability to increment the count.

function App() {
const [count, setCount] = React.useState(0);
React.useEffect(()=>{
  async function fetchCount(){
    const newCount = (await main.get()).data.count;
    setCount(newCount);
  }
  fetchCount();
}, [setCount]);
return (  
    <div className="App">
      <header className="App-header">
        <h4>
          {count}
        </h4>
        <button onClick={async ()=>{
          setCount((await main.increment()).data.count);
        }}>
          Increment
        </button>
      </header>
    </div>
  );
}

To start and test the app, run npm run start. You should see the output below. Before running the application, confirm your API is running from the Express app by running node ./src/index.js.

$ npm run start
> count-ui@0.1.0 start /Users/cherrybomb/count-ui
> react-scripts start

[HPM] Proxy created: /  -> http://localhost:5000
(node:71729) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
(Use `node --trace-deprecation ...` to show where the warning was created)
(node:71729) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
Starting the development server...

Once everything is running, open your browser to localhost:5000 to see the front end has a nice, admittedly minimal, page with a button:

Image by:

(Jessica Cherry, CC BY-SA 4.0)

What happens when you press the button? (Or, in my case, press the button several times.)

Image by:

(Jessica Cherry, CC BY-SA 4.0)

The counter goes up!

Congratulations, you now have a React app that uses your new API.

Web apps and APIs

This exercise is a great way to learn how to make a back end and a front end work together. It's worth noting that if you're using two hosts, you don't need the proxy section of this article. Either way, JavaScript and React are a quick, templated way to get a front end up and running with minimal steps. Hopefully, you enjoyed this walk-through. Tell us your thoughts on learning how to code in JavaScript.

Learn to make back-end and front-end development work together with this JavaScript tutorial.

Image by:

CC BY 3.0 US Mapbox Uncharted ERG

JavaScript Programming What to read next Create a JavaScript API in 6 minutes This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Event-driven architecture explained in a coloring book

Tue, 07/19/2022 - 15:00
Event-driven architecture explained in a coloring book Seth Kenlon Tue, 07/19/2022 - 03:00

"Explain it to me like I'm five."

When you want someone to get to the point as efficiently and as clearly as possible, that's what you say. Following that logic, you might be compelled to ponder the most powerful tool the average, everyday 5-year old wields: coloring books. What better way than a coloring book to transform a droll slideshow presentation into a fun and educational journey?

That's what artists Máirín Duffy and Madeline Peck thought, anyway, and it has turned out to be accurate. In the past, Máirín has helped produce five open source coloring books to help explain advanced topics including SELinux, Containers, Ansible, and more. It's a fun and easy way to learn about emerging technology, and you can either color your lessons yourself or hand it over to a resident specialist (an actual 5-year old) for project completion.

The latest coloring book in the series is all about event driven architecture (EDA). As with all the previous coloring books, this one's not only free to download, but it's also open source. You can download the sources and assemble it yourself, or learn from the files so you can build your own about topics important to you.

Event-driven architecture is no small topic, so I sat down with Máirín and Madeline to discover how and why they took on the challenge.

Q: Presumably, you don't spend your days developing KNative serverless applications and pipelines. How do you learn so much about such a complex topic?

Máirín Duffy: I wrote the script for the coloring book. I have a lot of experience with OS-level technology, and I have experience working in teams that deploy applications as a service, but I do not have as much experience working with running and managing Kubernetes directly. And the concept of "serverless" was one I only knew of in passing.

Our colleague Kamesh Sampath gave a presentation he called Knative and the Three Dwarves. That gave us the idea to relate our story to Snow White. In fact, we use material from Kamesh's talk to serve as the basic scope of the technologies and technical scenarios we wanted to talk about. All of the coloring books use an analogy of some form to help readers new to the technology relate to it using concepts they are likely to understand already or be familiar with.

For the EDA coloring book, we used the familiar fairy tale of Snow White and the Seven Dwarves and the analogy of running a bakery to explain the concepts of what it means to be serverless, and what the specific Kubernetes serverless components Tekton, Knative Serving, and Knative Eventing are, and what they do.

In preparing to write a script for the book, I watched Kamesh's presentation, wrote out the questions I had, and met with Kamesh. He is a very gifted teacher and was able to answer all of my questions and help me feel comfortable about the subject matter. I formed an informal technical review board for the book. We have access to a lot of amazingly smart technology experts through Fedora and Red Hat, and they were excited about having a book like this available, so we got quite a few volunteers.

I bounced ideas off of them. I spent a lot of time pestering Langdon White, and we narrowed down on the concept of Snow White running a bakery and the scenarios of demonstrating auto-scaling (scaling the production of different baked goodies up and down based on the holidays), self-healing based on events (ordering new eggs when the supply is low), shutting down an app that isn't being used and spinning it up on demand (the cupcake decorator scenario), and rolling back issues in production (the poisoned apple detector).

I wrote up an initial draft, and then the technical review board reviewed it and provided a ton of suggestions and tweaks. We did another round, and I finalized the script so that Madeline could start illustrating.

Madeline Peck: That's where I come in. I was lucky: I was presented with the finished version of the script, so the coloring book taught me what I needed to know. The great technical writers who helped give feedback on the script and the corresponding visuals were a great help with this admittedly complex topic.

Máirín Duffy: And as Madeline completed storyboards, and then the initial draft of the fully illustrated book, we had a couple more technical board reviews to make sure it still all made sense.

Q: That's a lot more work than I realized. So how long does it take to create a coloring book?

Madeline Peck: This one took a lot longer because it was the first coloring book I had worked on. Mo has been churning them out for some time now, and has a great grasp on all the open source programs like Inkscape and Scribus that we use, as well as the connections and knowledge for topics that can be expanded upon in a simple but informative manner. This book started when I was an intern, and it's taught me a lot about each step in the process, as well as all the ways open source matters for projects like these.

Q: What tools do you use when you draw?

Madeline Peck: When I draw digitally, I use variations of different ink pens. But on paper, traditionally I use a color erase red pencil for sketching, a Pigma Micron 01 pen for inking (because it's waterproof), and occasionally I add color with watercolors from Mijello.

Q: I don't work with physical materials often, and I don't have a kid to do the coloring in for me, but I'm enjoying using this as a digital coloring book. I've imported pages into Krita and it's given me the opportunity to experiment with different brushes and color mixing techniques.

Madeline Peck: I think Krita is a great coloring application! There's a great variety of brushes and tools. I used Krita for all the primary sketching for the frames in the coloring book. If people don't know, when you import PNGs into programs like Krita, you can set the layer mode with the image to multiply instead of normal. Then you can add a layer below it, and it's just like coloring in below the lines without the white background.

Q: Is it harder to draw things without considering color and shading? Does it feel incomplete to you?

Madeline Peck: I don't think so! There's a lot of gorgeous art in the world where artists only rely on line work. The weight of the lines, the way they interact — it's just another technique. It doesn't feel incomplete because I know there's going to be lots of people who are going to share pages of the book colored in their own way, which is really exciting!

Q: Who's this really meant for? Can people actually learn about going serverless from a coloring book?

Máirín Duffy: Another good question. We started this whole "coloring books to explain technology" thing when Dan Walsh came into my cube at Red Hat Westford almost 10 years ago and asked if I could draw him some illustrations for his SELinux dogfood analogy. He had come up with this analogy having had to explain how SELinux concepts worked repeatedly. He also found it to be an effective analogy in many presentations.

That coloring book was super basic compared to the EDA coloring book, but the bones are the same — making complex technology concepts less intimidating and more approachable with simple analogies and narrative. We have gotten overwhelming feedback over a long period of time that these coloring books have been very helpful in teaching about the technology. I've had customers tell me that they've been able to use specific coloring books to help explain the technology to their managers, and that they are a really non-intimidating way to get a good initial understanding.

Madeline Peck: I agree. The coloring books are meant for a variety of readers, with a wide range of prior knowledge on the subject. They can be used for people who have friends and family who work on serverless applications, for those working on the actual teams, or people who work adjacent to those developers.

Máirín Duffy: They also make a great handout on a conference expo floor, at talks, and even virtually as PDFs. Even if EDA isn't your thing, you can pick it up and your kids can have fun coloring the characters. I really do hope people can read this book and better understand what serverless is and that it could spark an interest for them to look more in depth into serverless and EDA processes.

Get your copy

I love that there are free and open source coloring books that appeal to both kids needing something fun to color in, and the older crowd looking for clear and simple explanations of complex tech topics.

A lot of creativity goes into making these coloring books, but as with most open source endeavours, it inspires yet more creativity once it's in the hands of users.

Grab your copy of the Event-driven Architecture coloring book today! Download the PDF directly here

Event-driven architecture is no small topic. A coloring book is a perfect way to explain its complexity in a friendly manner. Download this coloring book about event-driven architecture

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Máirín is a principal interaction designer at Red Hat. She is passionate about software freedom and free & open source tools, particularly in the creative domain: her favorite application is Inkscape (http://inkscape.org).


How I configure a DHCP server on my personal network

Mon, 07/18/2022 - 15:00
How I configure a DHCP server on my personal network David Both Mon, 07/18/2022 - 03:00

The Dynamic Host Configuration Protocol (DHCP) provides a centralized and automated method for configuring the network attributes of hosts when they connect to the network. The DHCP server assigns IP addresses to hosts, along with configuration information such as DNS servers, the domain name used for DNS searches, the default gateway, an NTP (Network Time Protocol) server, a server from which a network boot can be performed if necessary, and more. DHCP eliminates the need to configure each network host individually.

DHCP is also useful for configuring laptops, mobile phones, tablets, and other devices which might connect as unknown guests. This configuration is typical for WiFi access in public places. However, DHCP offers even more advantages when used in a closed, private network to manage static IP address assignments for known hosts using the central DHCP database.

The DHCP server uses a database of information created by the sysadmin. This database is entirely contained in the /etc/dhcp/dhcpd.conf configuration file. DHCPD stands for DHCP Daemon, which is the background server process. Like all well-designed Linux configuration files, it is a plain ASCII text file. This structure means that it is open and knowable. It can be examined by standard, simple text manipulation tools like cat and grep, and modified by any text editor such as Emacs or Vim, or a stream editor such as sed.
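For example, a quick way to see just the active settings (skipping comments and blank lines) is a generic grep invocation like the one below; nothing here is specific to DHCP, it is just ordinary text filtering:

[root@yorktown ~]# grep -Ev '^\s*(#|$)' /etc/dhcp/dhcpd.conf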


The DHCP client is always installed on Linux hosts—at least on Red Hat-based distros and all other distros I have tried—because of the high probability that they will be connected to a network using DHCP and not with a static configuration.

When a host configured for DHCP boots or its NIC is activated, it sends a broadcast request to the network asking for a DHCP server to respond. The client and the server engage in a bit of conversation, and the server sends the configuration data to the client, which uses it to configure its network connection. Hosts may have multiple NICs connected to different networks. Any or all may be configured using DHCP or a static configuration. I will keep the setup for this article simple with only a few hosts—my own personal network.
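If you want to watch that conversation as it happens, one option (a generic troubleshooting sketch, not part of the setup in this article) is to capture DHCP traffic on the server's NIC with tcpdump. The interface name below is an example and will differ on your system:

[root@yorktown ~]# tcpdump -n -i enp0s31f6 port 67 or port 68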

Network description

This article uses my own network for illustration. This is much more interesting and realistic than using a set of virtual machines on a virtual network. Each host on my network has the Fedora 36 Xfce spin installed. Because I like to experiment, my network is small but has a more complex configuration than might be found in a typical home or small business network.

Before setting up DHCP, I created a network address map like the one shown in Figure 1. This diagram includes MAC addresses, IP addresses, and NIC names for each host. The map enabled me to visualize my network's logical structure and determine which hosts needed DHCP configuration and which needed static IP configuration.

Host         NIC         MAC                  Static/DHCP   IP Address         Comments
wally1                                                                         Primary firewall and router
             eno1        04:d9:f5:1c:d5:c5    Static        N/A                Disabled due to errors
             enp1s0      84:16:f9:03:e9:89    Static        192.168.10.1/24    SSID Linux Boy
             enp2s0      84:16:f9:03:fd:85    Static        192.168.0.254/24   Inside network
             enp4s0      84:16:f9:04:44:03    Static        45.20.209.41/29    WAN connection
yorktown                                                                       Main server
             enp0s31f6   e0:d5:5e:a2:de:a4    Static        192.168.0.52/24    DHCP, NTP, DNS, HTTP, +
david                                                                          Main workstation
             enp0s31f6   b0:6e:bf:3a:43:1f    DHCP          192.168.0.1/24
bunkerhill                                                                     Testing workstation
             eno1        2c:f0:5d:26:c2:09    DHCP          192.168.0.9/24
enterprise                                                                     Workstation
             eno1        30:9c:23:e9:a4:e6    DHCP          192.168.0.2/24
essex                                                                          Testing workstation
             eno1        e0:69:95:45:c4:cd    DHCP          192.168.0.6/24
intrepid                                                                       Testing workstation
             enp0s25     00:1e:4f:df:3a:d7    DHCP          192.168.0.5/24
wasp                                                                           Testing workstation
             eno1        e8:40:f2:3d:0e:a8    DHCP          192.168.0.8/24
hornet                                                                         Testing workstation
             enp0s25     e0:69:95:3c:07:37    DHCP          192.168.0.7/24
voyager                                                                        Laptop
             enp111s0    80:fa:5b:63:37:88    DHCP          192.168.0.201/24

Figure 1: Network Address Map

The yorktown server hosts the DHCP service and the rest of my server services. Host wally is my firewall and router. The hosts yorktown and wally both use static network configurations and the rest use DHCP configuration, as shown in Figure 1.

Install the DHCP server

First, I checked the DHCP installation status and then installed the DHCP server, as shown in Figure 2.

[root@yorktown ~]# dnf list installed dhcp*
Installed Packages
dhcp-client.x86_64     12:4.3.6-28.fc29           @anaconda
dhcp-common.noarch     12:4.3.6-28.fc29           @anaconda
dhcp-libs.x86_64       12:4.3.6-28.fc29           @anaconda

[root@yorktown ~]#

Figure 2: Check which DHCP packages are installed on my server.

The DHCP server is not installed by default. The output in Figure 2 shows that only the DHCP client is present, along with libraries and supporting files common to the client, the server, and the DHCP development packages. So, working as root, I installed the server using the command in Figure 3.

[yorktown ~]# dnf install -y dhcp-server
Last metadata expiration check: 2:39:06 ago on Wed 26 Dec 2018 12:19:46 PM EST.
Dependencies resolved.
=================================================================================================
 Package                Arch              Version                        Repository         Size
=================================================================================================
Installing:
 dhcp-server            x86_64            12:4.3.6-28.fc29               fedora            431 k
[...]
Installed:
  dhcp-server-12:4.3.6-28.fc29.x86_64                                                            

Complete!

Figure 3: Installing the DHCP server package.

That was easy, and no reboot of the server was required.

Configure the DHCP server

With the DHCP server installed, the next step is to configure the server. Having more than one DHCP server on the same network can cause problems because one would never know which DHCP server is providing the network configuration data to the client. However, a single DHCP server on one host can listen to multiple networks and provide configuration data to clients on more than one network.

DHCP can provide DNS names rather than IP addresses for the gateway and other servers. For example, the NTP server could be specified by its hostname (NTP1) instead of its IP address. Most of the time this works well, but it can cause problems if the DNS server is disabled or unreachable, or if the name does not resolve.

The IP addresses specified in Figure 1 are the ones that DHCP will assign to the hosts on my internal network. I have arbitrarily chosen these IP addresses for my network.

The details like the values for hostnames, MAC addresses, and IP addresses will be different for your network, but the basic requirements will be the same.

The dhcpd.conf file

As root, you can look at the existing dhcpd.conf file, which is non-functional when first installed. Make /etc/dhcp the PWD and then cat the dhcpd.conf file to view the contents. There is not much in the file, but it does point to an example file named /usr/share/doc/dhcp-server/dhcpd.conf.example that you can read to understand the main components and syntax of the dhcpd.conf file. I strongly suggest you read this example file.
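In shell terms, that amounts to something like the following. The example file path is the one mentioned above; it can vary slightly between package versions:

[root@yorktown ~]# cd /etc/dhcp
[root@yorktown dhcp]# cat dhcpd.conf
[root@yorktown dhcp]# less /usr/share/doc/dhcp-server/dhcpd.conf.example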

I started with a previous version of the example file many years ago when I first decided to move to DHCP configuration for my network. The comment I added in this section indicates that it was probably based on the Fedora 18 version of dhcpd.conf and my file is still based on that older file. I have left many of the original comments and commented out the default settings in my final file. Since this file is intended as a guide and the basis for a working dhcpd.conf configuration, I decided to leave as much intact as possible in case I needed that information later.

The dhcpd.conf(5) man page also has some excellent descriptions of the various configuration statements that are likely to be needed by a DHCP server.

There are two major sections in any dhcpd.conf file. The global section contains settings for all subnets for which this server provides DHCP services. The second section is the subnet declaration. You can use multiple subnet declarations if this server provides DHCP services for multiple networks.

Syntax

The dhcpd service is very strict in its interpretation of the dhcpd.conf file. Each subnet declaration, and each host declaration within it, must be enclosed in curly braces ({}), and every statement must end with a semicolon (;). A missing curly brace has caused me much angst and gnashing of teeth more than once in the past. The curly braces of the subnet declaration also surround the host declarations, because host declarations must be nested inside the subnet declaration, as in the skeleton below.
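Stripped of real values, the nesting looks like the sketch below. The addresses are placeholders, not a working configuration; the point is only where the braces and semicolons go:

subnet 192.168.0.0 netmask 255.255.255.0 {
        option routers 192.168.0.254;             # every statement ends with a semicolon

        host example {                            # host declarations nest inside the subnet braces
                hardware ethernet 00:00:00:00:00:01;
                fixed-address 192.168.0.250;
        }
}                                                 # every opening brace needs a closing brace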

The global section

This global section, shown in Figure 4, contains global configuration items common to the subnets that DHCP serves. I have only a single subnet, but I still placed these statements in the global section because they are likely to be the same for all subnets. If they were to differ for a given subnet, creating a statement with different values in the subnet declaration overrides the global declaration.

Since I only have one network, I have kept the option declarations found in this section of the sample file because I had no reason to change or delete them.

# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp*/dhcpd.conf.sample
#   see dhcpd.conf(5) man page
#
#
# Changes based on the sample dhcpd.conf for Fedora 18.
# option definitions common to all supported networks...
# option domain-name "example.org";
# option domain-name-servers ns1.example.org, ns2.example.org;
#
# All networks get the default lease times
default-lease-time 7200;        # 2 hours
max-lease-time 14400;           # 4 hours

# Use this to enable / disable dynamic dns updates globally.
ddns-update-style none;
#
# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
authoritative;
#
# ignore client-updates;
# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
authoritative;
#

Figure 4: The global section of the dhcpd.conf file.

I made three changes in this section from that of the original file:

  • Default lease time: I set the default lease times in seconds. This setting determines how frequently the client hosts must refresh the lease on the IP address. Shorter times are good if clients connect and disconnect frequently. The default lease time of 10 minutes is pretty short, so I set it to two hours. I also changed the maximum lease time from two hours to four hours.
  • Dynamic DNS: I disabled dynamic DNS because I don't use that in my network.
  • Authoritative server: I specified that this is the authoritative DHCP server for my network.
The subnet section

The subnet section of the dhcpd.conf file contains two subsections. The first has the values common to all hosts in the defined subnet. The host declaration subsection includes declarations for all hosts specifically managed by DHCP.

The common part of the subnet section

This common subsection of the subnet section, shown in Figure 5, sets numerous common values for all of the hosts declared in the host subsection of this subnet. The first line defines the subnet as the class C network 192.168.0.0 in the old classful notation, with a subnet mask of 255.255.255.0. This translates to 192.168.0.0/24 in Classless Inter-Domain Routing (CIDR) notation.

I don't see any indication in any documentation that the dhcpd.conf file can use CIDR notation for IPv4 at this time. The dhcpd.conf man page indicates that you can use CIDR notation for IPv6.

This subsection specifies the router IP address and netmask, the domain name, and the DNS domain search name. The domain search name is used when performing searches where no domain name is specified in the query. When a command such as ping essex is specified, the DNS search is performed for essex.both.org instead.

Since I don't use NIS domain names, I commented out that option.

This section also provides a list of DNS servers to the clients. Clients search these servers in the order they are listed. I use the Google DNS servers as the backup to my internal DNS server, partly because I registered my domain names with Google Domains.

subnet 192.168.0.0 netmask 255.255.255.0 {

# --- default gateway
        option routers                  192.168.0.254;
        option subnet-mask              255.255.255.0;

#       option nis-domain               "both.org";
        option domain-name              "both.org";
        option domain-search            "both.org";
        option domain-name-servers      192.168.0.52, 8.8.8.8, 8.8.4.4;

        option time-offset              -18000; # Eastern Standard Time
        option ntp-servers              192.168.0.52;
#       option netbios-name-servers     192.168.0.1;
# --- Selects point-to-point node (default is hybrid). Don't change this unless
# -- you understand Netbios very well
#       option netbios-node-type 2;

################################################################################
# Dynamic DHCP allocation range for otherwise unknown hosts                    #
################################################################################
        range dynamic-bootp 192.168.0.220 192.168.0.229;
#       default-lease-time 21600;
#       max-lease-time 43200;

Figure 5: The common subsection of the subnet section for my network.

I specified my local Network Time Protocol (NTP) server and the offset from GMT. This service synchronizes the time on all of my hosts.

Configuring the network settings for guest hosts such as laptops and other mobile devices is also possible with DHCP. I have no information (such as the MAC address) for these computers but must assign an IP address anyway. In most cases, the guest hosts must trust the DHCP service. I dislike having guests on my network, so I usually relegate guest hosts to a second network subnet. This approach protects my primary network because the guest hosts have no access to it.
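For illustration only, a guest-only subnet declaration might look roughly like the sketch below. The addresses here are hypothetical and are not part of my actual dhcpd.conf; the idea is that unknown guests get only a dynamic range and their own gateway, with no reserved host entries:

# Hypothetical guest network -- not taken from my real configuration
subnet 192.168.20.0 netmask 255.255.255.0 {
        option routers                  192.168.20.254;
        option subnet-mask              255.255.255.0;
        option domain-name-servers      8.8.8.8, 8.8.4.4;
        range dynamic-bootp             192.168.20.100 192.168.20.199;
}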

The last active line in this part of my dhcpd.conf file specifies a small range of IP addresses for devices that might plug into my wired network. For example, I plug laptops and desktop systems into my network when working on them after I have determined that they are not infected—usually only when I have wiped the hard drives and installed Linux. I never, ever connect a Windows computer directly to my network.

The original sample file used different lease times for this first subnet than I specified in the global section, so I have commented them out.

The host declaration part of the subnet section

The subnet section shown in Figure 6 is where the individual hosts are declared. Each host requires a name, the MAC address of its NIC, and the fixed address it will always use.

I use comments in this host declaration subsection to help define and document the address structure for my network. I also use comments in my DNS zone file to record the same information.

################################################################################
# The range from 192.168.0.1 - 20 is for my personal hosts and workstations.   #
################################################################################
        # david
        host david {
                hardware ethernet b0:6e:bf:3a:43:1f;
                fixed-address 192.168.0.1;
        }
        # bunkerhill
        host alice {
                hardware ethernet 30:9C:23:E9:A4:E6;
                fixed-address 192.168.0.2;
        }
        # intrepid
        host intrepid {
                hardware ethernet 00:1e:4f:df:3a:d7;
                fixed-address 192.168.0.5;
        }
        # essex
        host essex {
                hardware ethernet E0:69:95:45:C4:CD;
                fixed-address 192.168.0.6;
        }
        # Hornet
        host hornet {
                hardware ethernet e0:69:95:3c:07:37;
                fixed-address 192.168.0.7;
        }
        # Wasp
        host wasp {
                hardware ethernet e8:40:f2:3d:0e:a8;
                fixed-address 192.168.0.8;
        }
        # bunkerhill
        host bunkerhill {
                hardware ethernet 2c:f0:5d:26:c2:09;
                fixed-address 192.168.0.9;
        }


################################################################################
# IP Addresses between 192.168.0.50 and 192.168.0.59 are for physical servers  #
# which always use static IP addressing.                                       #
################################################################################

################################################################################
# The range from 192.168.0.70 - 80 is for network printers.                    #
################################################################################
        host brother1 {
                hardware ethernet 30:05:5C:71:F7:7C;
                fixed-address 192.168.0.70;
        }

################################################################################
# The range from 192.168.0.91 - 100 is for various hosts under test            #
################################################################################
        host test1 {
                hardware ethernet 00:1E:4F:B1:EB:78;
                fixed-address 192.168.0.91;
        }
        host admin {
                hardware ethernet 00:22:4d:a6:5c:1b;
                fixed-address 192.168.0.92;
        }
################################################################################
# The range from 192.168.0.100 to 192.168.0.150 is for most virtual machines.  #
################################################################################
        host testvm1 {
                hardware ethernet 08:00:27:7B:A7:0C;
                fixed-address 192.168.0.101;
        }
        host testvm2 {
                hardware ethernet 08:00:27:BE:E1:02;
                fixed-address 192.168.0.102;
        }
        host fedora35vm {
                hardware ethernet 08:00:27:A8:E7:4F;
                fixed-address 192.168.0.135;
        }
        host fedora36vm {
                hardware ethernet 08:00:27:07:CD:FE;
                fixed-address 192.168.0.136;
        }

################################################################################
# The range from 192.168.0.160 - 192.168.0.179 is reserved                     #
################################################################################

################################################################################
################################################################################
################################################################################
# The range from 192.168.0.180 to 192.168.0.189 is for virtual machines used   #
# in book research. These addresses usually connect to a second or third NIC   #
# for those hosts to provide a back-door access.                               #
################################################################################
################################################################################
################################################################################
        # Adapter 2
        host studentvm1 {
                hardware ethernet 08:00:27:C4:6E:06;
                fixed-address 192.168.0.181;
        }
        # Adapter 2
        host studentvm2 {
                hardware ethernet 08:00:27:9F:67:CB;
                fixed-address 192.168.0.182;
        }

################################################################################
# The range from 192.168.0.190 - 199 is for windows and other strange stuff    #
################################################################################
        # Windows10 VM
        host win10 {
                hardware ethernet 08:00:27:8C:79:E8;
                fixed-address 192.168.0.190;
        }
################################################################################
# The range from 192.168.0.200 - 209 is for mobile and miscellaneous devices   #
################################################################################
        # voyager (System76 Oryx Pro 4)
        host voyager {
                hardware ethernet 80:fa:5b:63:37:88;
                fixed-address 192.168.0.201;
        }
        # voyager2  (System76 Oryx Pro 6)
        host voyager2 {
                hardware ethernet 80:fa:5b:8d:c6:75;
                fixed-address 192.168.0.202;
        }
}

Figure 6: The host declaration section of the dhcpd.conf file.

Different option declarations can be made for any subnet or any host within a subnet. For example, one subnet may specify a different router than the rest of the subnets, or one host may use a different router than the other hosts in a subnet.
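A host-level override is just an option statement placed inside that host's declaration. The host and router address below are hypothetical, added only to show the syntax:

        host example {
                hardware ethernet 00:00:00:00:00:02;
                fixed-address 192.168.0.93;
                option routers 192.168.0.253;     # overrides the subnet-level router for this host only
        }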

To activate the new DHCP configuration, I started, enabled, and verified the DHCP service, as seen in Figure 7.

[yorktown ~]# systemctl start dhcpd

[yorktown ~]# systemctl enable dhcpd
Created symlink /etc/systemd/system/multi-user.target.wants/dhcpd.service → /usr/lib/systemd/system/dhcpd.service.

[yorktown ~]# systemctl status dhcpd
● dhcpd.service - DHCPv4 Server Daemon
     Loaded: loaded (/usr/lib/systemd/system/dhcpd.service; enabled; vendor preset: disabled)
    Drop-In: /etc/systemd/system/dhcpd.service.d
             └─override.conf
     Active: active (running) since Sun 2022-06-26 15:57:12 EDT; 11s ago
       Docs: man:dhcpd(8)
             man:dhcpd.conf(5)
    Process: 1347205 ExecStartPre=/bin/sleep 60 (code=exited, status=0/SUCCESS)
   Main PID: 1347220 (dhcpd)
     Status: "Dispatching packets..."
      Tasks: 1 (limit: 38318)
     Memory: 4.9M
        CPU: 15ms
     CGroup: /system.slice/dhcpd.service
             └─ 1347220 /usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid

Jun 26 15:57:12 yorktown.both.org dhcpd[1347220]: Wrote 10 leases to leases file.
Jun 26 15:57:12 yorktown.both.org dhcpd[1347220]: Listening on LPF/enp0s31f6/e0:d5:5e:a2:de:a4/192.168.0.0/24
Jun 26 15:57:12 yorktown.both.org dhcpd[1347220]: Sending on   LPF/enp0s31f6/e0:d5:5e:a2:de:a4/192.168.0.0/24
Jun 26 15:57:12 yorktown.both.org dhcpd[1347220]: Sending on   Socket/fallback/fallback-net
Jun 26 15:57:12 yorktown.both.org dhcpd[1347220]: Server starting service.
Jun 26 15:57:12 yorktown.both.org systemd[1]: Started dhcpd.service - DHCPv4 Server Daemon.
Jun 26 15:57:16 yorktown.both.org dhcpd[1347220]: DHCPREQUEST for 192.168.0.2 from 30:9c:23:e9:a4:e6 via enp0s31f6
Jun 26 15:57:16 yorktown.both.org dhcpd[1347220]: DHCPACK on 192.168.0.2 to 30:9c:23:e9:a4:e6 via enp0s31f6
Jun 26 15:57:17 yorktown.both.org dhcpd[1347220]: DHCPREQUEST for 192.168.0.8 from e8:40:f2:3d:0e:a8 via enp0s31f6
Jun 26 15:57:17 yorktown.both.org dhcpd[1347220]: DHCPACK on 192.168.0.8 to e8:40:f2:3d:0e:a8 via enp0s31f6

Figure 7: Start and verify that the DHCP server started without errors. You can even see a couple of fulfilled requests in this example.

There should be no errors from the status command, but, like my server above, there may be several statements indicating the DHCP daemon is listening on a specific NIC and the MAC address of the NIC. If this information is not correct, verify that the dhcpd.conf file is valid and restart the DHCP server. If there are syntactical errors in the configuration, they will appear in the status report.
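You can also catch syntax problems before restarting the service by asking dhcpd to test the configuration file. It parses the file, reports any errors, and exits without serving anything:

[root@yorktown ~]# dhcpd -t -cf /etc/dhcp/dhcpd.conf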

I also ran the command shown in Figure 8 on some of my hosts to verify that the network is configured with the correct IP address, router, and DNS servers. This command shows the installed NICs on each host, including the loopback device, lo.

[essex ~]# nmcli
eno1: connected to Wired connection 1
        "Intel 82579V"
        ethernet (e1000e), E0:69:95:45:C4:CD, hw, mtu 1500
        ip4 default
        inet4 192.168.0.6/24
        route4 192.168.0.0/24 metric 100
        route4 default via 192.168.0.254 metric 100
        inet6 fe80::3220:6681:4348:71df/64
        route6 fe80::/64 metric 1024

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 192.168.0.52 8.8.8.8 8.8.4.4
        domains: both.org
        interface: eno1

Figure 8: Verify DHCP provided the correct data to the hosts.

Wrap up

DHCP provides network configuration data to client hosts on a network, allowing for centralized network configuration management. A DHCP server can provide various configuration options to clients, including many required for Windows hosts that might connect to the network. This configuration data includes gateway routers, NTP servers, DNS servers, PXE boot servers, and much more.

I use DHCP for most of my hosts because it is less work in the long run than static configurations on each host. The default setup for NetworkManager on newly installed hosts is to use DHCP.

The Dynamic Host Configuration Protocol (DHCP) provides network configuration data to client hosts on a network, allowing for centralized network configuration management.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Create a JavaScript API in 6 minutes

Mon, 07/18/2022 - 15:00
Create a JavaScript API in 6 minutes Jessica Cherry Mon, 07/18/2022 - 03:00

This article demonstrates creating a base API with Express and JavaScript. Express is a NodeJS minimalist web framework. This combination allows for minimal effort to get an API up and running at the speed of light. If you have six minutes of free time, you can get this API working to do something useful.

Get started with NodeJS

What you need for this project is the NodeJS version of your choice. In this example, I use NodeJS and HTTPie for testing, a web browser, and a terminal. Once you have those available, you're ready to start. Let's get this show on the road!

Set up a project directory and install the tools to get started:

$ mkdir test-api
$ cd test-api

The npm init command creates the package.json file for the project. Type npm init and press Enter several times to accept the defaults. The output is shown below:

$ npm init

Press ^C at any time to quit.
package name: (test-api)
version: (1.0.0)
description:
entry point: (index.js)
test command:
git repository:
keywords:
author:
license: (ISC)
About to write to /Users/cherrybomb/test-api/package.json:

{
  "name": "test-api",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}


Is this OK? (yes)

This utility walks you through creating a package.json file. It only covers the most common items, and tries to guess sensible defaults. See npm help init for definitive documentation on these fields and exactly what they do.

Use npm install {pkg} afterward to install a package and save it as a dependency in the package.json file.

Next, install Express using the npm CLI:

$ npm install express

npm WARN cherrybomb No description
npm WARN cherrybomb No repository field.
npm WARN cherrybomb No license field.

+ express@4.18.1
added 60 packages from 39 contributors and audited 136 packages in 4.863s

16 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

Finally, create your source directory and your index.js file, which is where the application code lives:

$ mkdir src
$ touch src/index.js

Time to code!

Code an API

For your first act of coding, make a simple "hello world" API call. In your index.js file, add the code snippet below:

const express = require('express')
const app = express()
const port = 5000

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`)
})

Each of these constants (express, app, and port) is declared at the module level, so it is visible inside the callbacks that follow and can be used without much extra thought.

When you call app.get, you define a GET endpoint at the root path (a forward slash). The callback sends the "Hello World!" response.

Finally, app.listen starts your app on port 5000. The message you see in the terminal comes from the console.log call.

To start your application, run the following command, and see the output as shown:

test-api → node ./src/index.js
Example app listening on port 5000

Test the API

Now that everything is up and running, make a simple call to ensure your API works. For the first test, just open a browser window and navigate to localhost:5000.

[Screenshot: the browser at localhost:5000 showing the "Hello World!" response (Jessica Cherry, CC BY-SA 4.0)]

Next, check out what HTTPie says about the API call.
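Assuming HTTPie is installed (it provides the http command), the request against the root endpoint looks something like this, with the response shown below it:

test-api → http GET localhost:5000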

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 12
Content-Type: text/html; charset=utf-8
Date: Tue, 21 Jun 2022 14:31:06 GMT
ETag: W/"c-Lve95gjOVATpfV8EL5X4nxwjKHE"
Keep-Alive: timeout=5
X-Powered-By: Express

Hello World!

And there you have it! One whole working API call. So what's next? Well, you could try some changes to make it more interesting.

Make your API fun

The "hello world" piece is now done, so it's time to do some cool math. You'll do some counts instead of just "hello world."

Change your code to look like this:

const express = require('express')
const app = express()
const port = 5000

let count = 0;

app.get('/api', (req, res) => {
res.json({count})
})

app.post('/api', (req, res) => {
++count;
res.json({count});
});

app.listen(port, () => {
console.log(`Example app listening on port ${port}`)
})

Aside from the GET route in your code, you now have a POST route that makes changes to your count. Because count is declared with the let keyword and initialized to 0, its value can be changed.

In app.get, you return the current count, and in app.post, ++count increments it by 1. When you rerun the GET, you receive the new number.

Try out the changes:

test-api → node ./src/index.js
Example app listening on port 5000

Next, use HTTPie to run the GET and POST operations for a test to confirm it works. Starting with GET, you can grab the count:

test-api → http GET 127.0.0.1:5000/api
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 11
Content-Type: application/json; charset=utf-8
Date: Tue, 21 Jun 2022 15:23:06 GMT
ETag: W/"b-ch7MNww9+xUYoTgutbGr6VU0GaU"
Keep-Alive: timeout=5
X-Powered-By: Express

{
    "count": 0
}

Then do a POST a couple of times, and watch the changes:

test-api → http POST 127.0.0.1:5000/api
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 11
Content-Type: application/json; charset=utf-8
Date: Tue, 21 Jun 2022 15:28:28 GMT
ETag: W/"b-qA97yBec1rrOyf2eVsYdWwFPOso"
Keep-Alive: timeout=5
X-Powered-By: Express

{
    "count": 1
}


test-api → http POST 127.0.0.1:5000/api
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 11
Content-Type: application/json; charset=utf-8
Date: Tue, 21 Jun 2022 15:28:34 GMT
ETag: W/"b-hRuIfkAGnfwKvpTzajm4bAWdKxE"
Keep-Alive: timeout=5
X-Powered-By: Express

{
    "count": 2
}

As you can see, the count goes up! Run one more GET operation and see what the output is:

test-api → http GET 127.0.0.1:5000/api
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 11
Content-Type: application/json; charset=utf-8
Date: Tue, 21 Jun 2022 15:29:41 GMT
ETag: W/"b-hRuIfkAGnfwKvpTzajm4bAWdKxE"
Keep-Alive: timeout=5
X-Powered-By: Express

{
    "count": 2
}

The end and the beginning

I specialize in infrastructure and Terraform, so this was a really fun way to learn and build something quickly in a language I'd never used before. JavaScript moves fast, and it can be annoying to see errors that seem obscure or obtuse. I can see where some personal opinions have judged it harshly as a language, but it's a strong and useful tool. I hope you enjoyed this walkthrough and learned something new and cool along the way.

Express yourself by coding a fun API using Express, a NodeJS minimalist web framework.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Monitor your Linux firewall with nftwatch

Mon, 07/18/2022 - 15:00
Monitor your Linux firewall with nftwatch Kenneth Aaron Mon, 07/18/2022 - 03:00

Netfilter tables (nftables) is the default firewall shipped with modern Linux distros. It's available on Fedora and RHEL 8, the latest Debian, and many others. It replaces the older iptables that was bundled in earlier distro releases. It's a powerful and worthy replacement for iptables, and as someone who uses it extensively, I appreciate its power and functionality.

One of the features of nftables is the ability to add counters to many elements, such as rules. These are not enabled by default; you ask for them explicitly, rule by rule, with the counter keyword. I have them enabled for specific rules in my firewall, which gives me visibility into how often those rules match.
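For context, here is a generic example of what that looks like; the table and chain names are placeholders, not rules from my firewall:

# Count packets and bytes matching this rule
nft add rule inet filter input tcp dport 22 counter accept

# Read the counters back
nft list chain inet filter input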

This got me thinking: how can I look at these counters in real time? At first I tried watch, which lets me set a refresh rate, but I didn't like the default format and the output wasn't scrollable. Using head, tail, and awk was less than ideal. A user-friendly solution didn't exist, so I wrote my own, which I'd like to share with the open source community.
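The watch approach I tried looked roughly like this. It refreshes on schedule, but it dumps the whole ruleset with no scrolling and no reformatting:

# Redraw the full ruleset every two seconds
watch -n 2 'nft list ruleset'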

Introducing nftwatch on Linux

My solution, which I call nftwatch, does a few things:

  • It reorders and reformats the nftables output to make it more readable.
  • It allows scrolling the output up or down.
  • It has a user-defined refresh rate (which can be changed in real time).
  • It can pause the display.

Instead of a dump of a table, you get output that shows activity for each rule:

[Screenshot: nftwatch showing per-rule counter activity (Kenneth Aaron, CC BY-SA 4.0)]

You can download it here from its Git repository.

It is 100% python, 100% open source, and 100% free. It ticks all the boxes for free, quality programs.

Install nftwatch on Linux

Here are the manual install instructions:

  1. Clone or download the project from the git repository.
  2. Copy nftwatch.yml to /etc/nftwatch.yml.
  3. Copy nftwatch to /usr/local/bin/nftwatch and grant it executable permissions using chmod a+x.
  4. Use nftwatch with no args to run it.
  5. See nftwatch -m for the man page.

You can also run nftwatch without the YAML config file, in which case it uses builtin defaults.
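In shell form, the manual install amounts to something like this. The repository URL is a placeholder for the Git link above, and the directory name is assumed to match the project:

# <repository-url> stands in for the project's Git repository linked above
$ git clone <repository-url>
$ cd nftwatch
$ sudo cp nftwatch.yml /etc/nftwatch.yml
$ sudo cp nftwatch /usr/local/bin/nftwatch
$ sudo chmod a+x /usr/local/bin/nftwatch
$ nftwatch -m    # view the man page and key controls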

Usage

The nftwatch command displays nftables rules and their counters, and most of the interactive controls exist to navigate that display.

Arrow keys and the equivalent Vim keypresses control scrolling. Use the F or S key to change the refresh speed. Use the P key to pause the display.

Run nftwatch -m for full instructions, and a list of interactive key controls.

A new view of your firewall

Firewalls can seem obtuse and vague even if you spend time to configure them. Aside from extrapolating indicators from log entries, it's hard to tell what kind of activity your firewall is actually seeing. With nftwatch, you can see your firewall at work, and ideally gain a better understanding of the kind of traffic your network has to deal with on a daily basis.

I created the Linux nftwatch command to watch firewall traffic stats.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
