
Automate Mastodon interactions with Python

Moshe Zadka | Tue, 01/31/2023

The federated Mastodon social network has gotten very popular lately. It's fun to post on social media, but it's also fun to automate your interactions. There is some documentation of the client-facing API, but it's a bit light on examples. This article aims to help with that.

You should be fairly confident with Python before trying to follow along with this article. If you're not comfortable in Python yet, check out Seth Kenlon's Getting started with Python article and my Program a simple game article.

Create an application

The first step is to go to Preferences in Mastodon and select the Development category. In the Development panel, click on the New Applications button.

After creating an application, copy the access token. Be careful with it. This is your authorization to post to your Mastodon account.

There are a few Python modules that can help.

  • The httpx module is useful, given that it is a web API.
  • The getpass module allows you to paste the token into the session safely.
  • Mastodon uses HTML as its post content, so a nice way to display HTML inline is useful.
  • Communication is all about timing, and the dateutil and zoneinfo modules will help deal with that.

Here's what my typical import list looks like:

import httpx
import getpass
from IPython.core import display
from dateutil import parser
import zoneinfo

Paste the token into the getpass input:


Create the httpx.Client:

client = httpx.Client(headers=dict(Authorization=f"Bearer {token}"))

The verify_credentials method exists to verify that the token works. It's a good test, and it gives you useful metadata about your account:

res = client.get("")

You can query your Mastodon identity:

res.raise_for_status()
result = res.json()
result["id"], result["username"]
>>> ('27639', 'moshez')

Your mileage will vary, but you get your internal ID and username in response. The ID can be useful later.

For now, abstract away the raise_for_status and parse the JSON output:

def parse(result):
    result.raise_for_status()
    return result.json()

Here's how this can be useful. Now you can check your account data by ID. This is a nice way to cross-check consistency:

result = parse(client.get(""))
result["username"]
>>> 'moshez'

But the interesting thing, of course, is to get your timeline. Luckily, there's an API for that:

statuses = parse(client.get(""))
len(statuses)
>>> 20

It's just a count of posts, but that's enough for now. There's no need to deal with paging just yet. The question is, what can you do with a list of your posts? Well, you can query it for all kinds of interesting data. For instance, who posted the fourth status?

some_status = statuses[3]
some_status["account"]["username"]
>>> 'donwatkins'

Wonderful, a post from fellow correspondent Don Watkins! Always great content. I'll check it out:


Just finished installed @fedora #Silverblue 37 on @system76 #DarterPro

"Just" finished? Wait, when was this post made? I live in California, so I want the time in my local zone:

california = zoneinfo.ZoneInfo("US/Pacific")
when = parser.isoparse(some_status["created_at"])
print(when.astimezone(california))
>>> 2022-12-29 13:56:56-08:00

Today (at the time of this writing), a little before 2 PM. Talk about timeliness.

Do you want to see the post for yourself? Here's the URL:

some_status["url"]
>>> ''

Enjoy tooting, now with 20% more API!


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How to use GitOps to automate Terraform

robstr | Mon, 01/30/2023

GitOps as a workflow is perfect for application delivery. It's mostly used in Kubernetes environments, but it's also possible to use it for infrastructure. In a typical GitOps scenario, you might look at Kubernetes-native solutions like Crossplane, while most traditional infrastructure is still managed with CI/CD pipelines. There are several benefits to building your deployment platform on Kubernetes, but it also means that more people need that particular skill set. One of the benefits of an Infrastructure-as-Code tool like Terraform is that it is easy to learn and doesn't require much specialized knowledge.

When my team was building our platform services, we wanted everyone to be able to contribute. Most, if not all, of our engineers use Terraform on a daily basis. They know how to create Terraform modules that can be used in several scenarios and for several customers. While there are several ways of automating Terraform, we wanted to use a proper GitOps workflow as much as possible.

How does the Terraform controller work?

While searching for alternatives for running Terraform using Kubernetes, I found several controllers and operators, but none that I felt had as much potential as the tf-controller from Weaveworks. We were already using Flux as our GitOps tool. The tf-controller works by utilizing some of the core functionality of Flux and has a custom resource for Terraform deployments. The source controller takes care of fetching our modules, the Kustomize controller applies the Terraform resources, and then the tf-controller spins up static pods (called runners) that run your Terraform commands.

The Terraform resource looks something like this:

apiVersion:
kind: Terraform
metadata:
  name: helloworld
  namespace: flux-system
spec:
  interval: 1m
  approvePlan: auto
  path: ./terraform/module
  sourceRef:
    kind: GitRepository
    name: helloworld
    namespace: flux-system

There are a few things to note on the specs here. The interval in the spec controls how often the controller starts up the runner pods. This then performs a terraform plan on your root module, which is defined by the path parameter.

This particular resource is set to automatically approve a plan. This means that if there is a difference between the plan and the current state of the target system, a new runner will run to apply the changes automatically. This makes the process as "GitOps" as possible, but you can disable this. If you disable it, you have to manually approve plans. You can do this either by using the Terraform Controller CLI or by updating your manifests with a reference to the commit which should be applied. For more details, see the documentation on manual approval.

The tf-controller utilizes the source controller from Flux. The sourceRef attribute is used to define which source resource you want to use, just like a Flux Kustomization resource would.

Advanced deployments

While the example above works, it's not the type of deployment my team would normally do. Without a defined backend, the state is stored in the cluster, which is fine for testing and development. For production, though, I prefer the state file to be stored somewhere outside the cluster. I don't want this defined directly in the root module, because I want to reuse root modules across several deployments. This means I have to define the backend in the Terraform resource.

Here is an example of how I set up custom backend configurations. You can find all available backends in the Terraform docs:

apiVersion:
kind: Terraform
metadata:
  name: helloworld
  namespace: flux-system
spec:
  backendConfig:
    customConfiguration: |
      backend "azurerm" {
        resource_group_name  = "rg-terraform-mgmt"
        storage_account_name = "stgextfstate"
        container_name       = "tfstate"
        key                  = "helloworld.tfstate"
      }
  ...

Storing the state file outside the cluster means I can redeploy the cluster without worrying about a storage dependency: there is no need for backup or state migration. As soon as the new cluster is up, it runs the commands against the same state, and I am back in business.

Another advanced move is dependencies between modules. Sometimes we design deployments like a two-stage rocket, where one deployment sets up certain resources that the next one uses. In these scenarios, we need to make sure that our Terraform is written in such a fashion so that we output any data needed as inputs for the second module, and ensure that the first module has a successful run first.

These two examples are from code used while demonstrating dependencies, and all the code can be found on my GitHub. Some of the code is omitted for brevity's sake:

apiVersion:
kind: Terraform
metadata:
  name: shared-resources
  namespace: flux-system
spec:
  ...
  writeOutputsToSecret:
    name: shared-resources-output
  ...
---
apiVersion:
kind: Terraform
metadata:
  name: workload01
  namespace: flux-system
spec:
  ...
  dependsOn:
    - name: shared-resources
  ...
  varsFrom:
    - kind: Secret
      name: shared-resources-output
  ...

In the deployment that I call shared-resources, you see that I defined a secret where the outputs from the deployment should be stored. In this case, the outputs are the following:

output "subnet_id" {
  value = azurerm_virtual_network.base.subnet.*.id[0]
}

output "resource_group_name" {
  value =
}

In the workload01 deployment, I first define the dependency with the dependsOn attribute, which makes sure that shared-resources has a successful run before workload01 is scheduled. The outputs from shared-resources are then used as inputs to workload01, which is the reason for the wait.

Why not pipelines or Terraform Cloud

The most common approach to automating Terraform is either CI/CD pipelines or Terraform Cloud. Using pipelines for Terraform works fine, but usually ends up requiring you to copy pipeline definitions over and over again. There are solutions to that, but the tf-controller gives you a much more declarative way to define what your deployments should look like, rather than describing the steps imperatively.

Terraform Cloud has introduced a lot of features that overlap with using the GitOps workflow, but using the tf-controller does not exclude you from using Terraform Cloud. You could use Terraform Cloud as the backend for your deployment, only automating the runs through the tf-controller.

The reason my team uses this approach is that we already deploy applications using GitOps, and we get much more flexibility in how we offer these capabilities as a service. We can control our implementation through APIs, making self-service more accessible to both our operators and end users. The details of our platform approach are a big enough topic that they deserve an article of their own.

This article was originally published on the author's blog and has been republished with permission.


Learn to code a simple game in Zig

Moshe Zadka | Mon, 01/30/2023

Writing the same application in multiple languages is a great way to learn new ways to program. Most programming languages have certain things in common, such as:

  • Variables
  • Expressions
  • Statements

These concepts are the basis of most programming languages. Once you understand them, you can take the time you need to figure out the rest.

Furthermore, programming languages usually share some similarities. Once you know one programming language, you can learn the basics of another by recognizing its differences.

A good way to learn a new language is to practice with a standard program. This allows you to focus on the language, not the program's logic. I'm doing that in this article series using a "guess the number" program, in which the computer picks a number between 1 and 100 and asks you to guess it. The program loops until you guess the number correctly.

This program exercises several concepts in programming languages:

  • Variables
  • Input
  • Output
  • Conditional evaluation
  • Loops

It's a great practical experiment to learn a new programming language.

Guess the number in Zig basic

Zig is still in the alpha stage and subject to change; this article is correct as of Zig version 0.11. Install Zig by going to the downloads page and downloading the appropriate version for your operating system and architecture. Here is the complete program:

const std = @import("std");

fn ask_user() !i64 {
    const stdin = std.io.getStdIn().reader();
    const stdout = std.io.getStdOut().writer();
    var buf: [10]u8 = undefined;

    try stdout.print("Guess a number between 1 and 100: ", .{});

    if (try stdin.readUntilDelimiterOrEof(buf[0..], '\n')) |user_input| {
        return std.fmt.parseInt(i64, user_input, 10);
    } else {
        return error.InvalidParam;
    }
}

pub fn main() !void {
    const stdout = std.io.getStdOut().writer();
    var prng = std.rand.DefaultPrng.init(blk: {
        var seed: u64 = undefined;
        try std.os.getrandom(std.mem.asBytes(&seed));
        break :blk seed;
    });
    const value = prng.random().intRangeAtMost(i64, 1, 100);
    while (true) {
        const guess = try ask_user();
        if (guess == value) {
            break;
        }
        const message = if (guess < value) "low" else "high";
        try stdout.print("Too {s}\n", .{message});
    }
    try stdout.print("That's right\n", .{});
}

The first line const std = @import("std"); imports the Zig standard library.
Almost all programs will need it.

The ask_user function in Zig

The function ask_user() returns either a 64-bit integer or an error, which is what the ! (exclamation mark) denotes. If there is an I/O issue or the user enters invalid input, the function returns an error.

The try operator calls a function and returns its value. If the function returns an error instead, try immediately returns from the calling function with that error. This allows explicit, but easy, error propagation. The first two lines in ask_user alias some constants from std.
This makes the following I/O code simpler.

This line prints the prompt:

try stdout.print("Guess a number between 1 and 100: ", .{});

It automatically returns a failure if the print fails (for example, writing to a closed terminal).

This line defines the buffer into which user input is read:

var buf: [10]u8 = undefined;

The expression inside the if clause reads user input into the buffer:

(try stdin.readUntilDelimiterOrEof(buf[0..], '\n')) |user_input|

The expression returns the slice of the buffer that was read into. This is assigned to the variable user_input, which is only valid inside the if block.

The function std.fmt.parseInt returns an error if the number cannot be parsed.
This error is propagated to the caller. If no bytes have been read, the function immediately returns an error.

The main function

The function begins by getting a random number. It uses std.rand.DefaultPrng.

The function initializes the random number generator with std.os.getrandom. It then uses the generator to get a number in the range of 1 to 100.

The while loop continues while true is true, which is forever. The only way out is with the break, which happens when the guess is equal to the random value.

When the guess is not equal, the if statement returns the string low or high depending on the guess. This is interpolated into the message to the user.

Note that main is defined as !void, which means it can also return an error. This allows using the try operator inside main.

Sample output

An example run, after putting the program in main.zig:

$ zig run main.zig
Guess a number between 1 and 100: 50
Too high
Guess a number between 1 and 100: 25
Too low
Guess a number between 1 and 100: 37
Too low
Guess a number between 1 and 100: 42
Too high
Guess a number between 1 and 100: 40
That's right

Summary

This "guess the number" game is a great introductory program for learning a new programming language because it exercises several common programming concepts in a pretty straightforward way. By implementing this simple game in different programming languages, you can demonstrate some core concepts of the languages and compare their details.

Do you have a favorite programming language? How would you write the "guess the number" game in it? Follow this article series to see examples of other programming languages that might interest you!
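Since this series is about comparing languages, here is a rough Python equivalent of the same game for contrast. The function name and the injectable input_fn/print_fn/rng parameters are my own choices (they make the sketch easy to test), not part of the Zig listing:

```python
import random

def guess_the_number(input_fn=input, print_fn=print, rng=random):
    """Python version of the "guess the number" game from the Zig listing."""
    value = rng.randint(1, 100)  # like intRangeAtMost(i64, 1, 100)
    while True:
        guess = int(input_fn("Guess a number between 1 and 100: "))
        if guess == value:
            break
        # Mirrors Zig's: if (guess < value) "low" else "high"
        print_fn("Too {}".format("low" if guess < value else "high"))
    print_fn("That's right")
```

Calling guess_the_number() with no arguments plays interactively, much like zig run main.zig.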


4 open source technologies to make writing easier

Jim Hall | Sat, 01/28/2023

I teach university courses on the side, and one of the courses last semester was Writing with Digital Technology, where students learned about different technologies and tools that technical writers use in the industry. Topics included HTML, CSS, XML, DITA, Markdown, GitHub, and other writing tools and technologies.

As I wrapped up last semester, my class and I looked back on the technologies we enjoyed learning. If you are getting started in technical writing, you might be interested in this list of open technologies that make technical writing easier.


HTML

Every website is built on HTML, the HyperText Markup Language. While professional technical writers might use web-based tools like Drupal or TYPO3 to create web pages, it's always nice to know how things work behind the scenes by learning HTML. While it may not happen very frequently, sometimes a web-based tool will generate incorrect HTML. Technical writers need to know how to fix web pages by editing the HTML without breaking it further.

HTML code is entirely text-based, with tags inside angle brackets. Elements are either block or inline, such as the block element <p> to define a paragraph or the inline element <em> to put emphasis (usually italics) on a word or phrase.

Technical writers might focus on writing content in HTML and defining styles in a separate CSS file or stylesheet to define how the content appears on the screen. This separation of content and appearance is a great way to focus on writing.


Markdown

Another way that you can write documentation is with Markdown. Markdown aims to streamline technical writing by removing as much markup syntax as possible, replacing it with standard conventions that you might use when writing in a plain text file.

For example, to start a new paragraph in Markdown, add a blank line in your text file. The next paragraph starts with the next block of text. Add headings by drawing a line under it, such as this to create a top level heading:

Title of my document
====================

And this to create a subheading in a document:

How to use the software
-----------------------

Markdown is often used when writing Readme files or other project documentation on GitHub or GitLab. This makes Markdown a popular choice for open source developers as well as technical writers.


DITA

Darwin Information Typing Architecture (DITA) is essentially an XML format with a particular file structure. When creating project documentation with DITA, technical writers focus on how to reuse and remix content to create new kinds of output files.

For example, three common DITA file formats are the DITA Concept which describes a thing or a process, DITA Task which lists the steps to perform a process, and DITA Reference which provides just the facts about a topic, such as warnings or important notes.

DITA is a power tool for technical writers because you can assemble a document by creating a separate XML file called the DITAMap that compiles several DITA files about a topic. This allows technical writers to reuse content without copying and pasting between separate documents. DITA Open Toolkit and other DITA tools provide transformations that turn the DITA source into different output types including PDF documents, HTML websites, and EPUB books.

LibreOffice

If you prefer to use a traditional word processor to write documentation, LibreOffice Writer provides an outstanding open source option. Writers can leverage different styles available in LibreOffice to define chapter titles, section headings, paragraphs, and sample code within a document. LibreOffice also supports character styles that help provide emphasis or highlight source code keywords and other inline text.

The page styles in LibreOffice allow great flexibility in creating printed documentation. For example, page styles include left and right pages, typically used in longer documents to ensure that new chapters or major sections always begin on the right-hand page of a printed book. Headers and footers can be defined independently on left and right pages, providing greater flexibility in technical writing.

LibreOffice is a more traditional desktop word processor with an easy-to-learn interface. Most functionality is available directly on the toolbar, with additional features in menus. Or use the pop-out Styles pane to quickly select the paragraph, character, or page style that you want to use.

This article was co-authored by: Teagan Nguyen, Joshua Hebeisen, Aurora Dolce, David Kjeldahl, and Rose Lam.


A beginner's guide to Mastodon

cherrybomb | Fri, 01/27/2023

Since my previous article about Mastodon, I noticed a lot of people joined but absolutely no one understood the mechanics of getting their feeds together or why they couldn't see what they wanted. Now that you're on Mastodon, you need to know how to use Mastodon. I'm going to cover the mechanics of how to see what you want, and how to configure your feed.

Why is my Mastodon feed empty?

Well, that's a good question. Most social media platforms use algorithms to decide what to show you. You sign in, they might ask you what interests you have, and boom — you get a feed that you have to cherry-pick through to get what you want out of it. Once you're done filtering your way through everything, the fancy back-end code adds more or shows less based on your interactions, watch time, and length of non-scrolling. Personally, I find that kind of creepy. But hey, it works.

Mastodon doesn't use those algorithms. This is on purpose. So I'm going to cover a couple of ways to set your feed up.

Following people on Mastodon

The obvious way to fill up the feed you have is to start following people, so their posts appear. This, of course, allows you to find more people to follow.

You can always start by finding some friends or some well-known folks that happen to have their Mastodon IDs out in public. For example, my ID is directly attached to my profile. I started by following my fellow correspondent Don, then browsed his follow list to find more interesting people with interests similar to mine. Below, I explain the search box.

The search box is just above your user avatar on your home page.

(Image: Jess Cherry, CC BY-SA 4.0)

When you click on it, a nifty little pop-up shows up, providing you with some example searches.

(Image: Jess Cherry, CC BY-SA 4.0)

In my example, I entered "opensource" and it showed there are hashtags and people to follow. You can even click load more if you want to get a larger list.

(Image: Jess Cherry, CC BY-SA 4.0)

Now that you've gotten some people in your feed, you can start working on personalizing more by using hashtags.


Hashtags

Hashtags are keywords or phrases preceded by the # symbol. If you're used to databases, a hashtag works like a select statement, similar to this SQL:

select * from Subjects where hashtags='specific_thing_i_like';

You can use hashtags to discover and follow users who are interested in similar topics as you are. For example, if you are interested in fashion like I am (we all have our guilty pleasures), you could search for the hashtag #fashion. You can then follow users who frequently use that hashtag in their posts. You can also use the same search field to find other users to follow. Here's a snippet of what appeared in that hashtag list.

(Image: Jess Cherry, CC BY-SA 4.0)

As you can see, the hashtag itself had 58 people using it in the last two days. On the right side, graphs show how each hashtag has trended in the recent past.

When I click on the hashtag, I see a list of people using it, and then I click on the users I want to follow. (I'm not including a screenshot, because I don't want to include personal tags in this article without users' permission).

Mastodon lists

On the right side of the Mastodon web interface, there are really cool little buttons (Home, Notifications, Explore, Local, and so on).

(Image: Jess Cherry, CC BY-SA 4.0)

That button on the bottom is Lists. A list is an organizational tool. You can use lists to organize the accounts you follow into different categories, so you can manage and adjust your feed to focus on specific people or topics. This functionality is very useful, and it's better than having someone else determine or track what you're into. The best thing is that only you can add people to your lists.

To create a list, click the Lists button.

(Image: Jess Cherry, CC BY-SA 4.0)

Create a new list name and click Add List. Of course, I'm going to create a list called opensource.

(Image: Jess Cherry, CC BY-SA 4.0)

Next, click on the list name.

Click the Edit list button to open a window that allows you to search among the people you are following. You can add anyone to your list.

(Image: Jess Cherry, CC BY-SA 4.0)

Click the plus (+) button to the right of the person you want to add.

Then you hit Enter, and poof, you get a new stream! Now any time someone in a list publishes a post, that post is added to your list. I just created a Linux list, added Linux Magazine to it, and my list populated.

Finally, you must choose who gets to see responses you make to list posts. By default, all members of the list are able to view your response. But you can set it so that all of your followers are able to see your response, or that no one can see your response.

To set this, click the Settings icon in the top right of the list.

(Image: Jess Cherry, CC BY-SA 4.0)

A whole bunch of nope

Now I've reached what I call the "nope buttons." These buttons are the mute and block buttons. They allow you to remove content, people, or hashtags from your feed.

Why would you get bad stuff in your feed after doing all of the work you just went through to curate it? Lots of reasons. You could follow someone based on a few interesting posts they've made, only to find out later that they're not as interesting as you'd hoped. A hashtag could get appropriated to mean something different than what it meant when you followed it. It's the internet and things change quickly.

Muting allows you to hide a specific user's posts or a particular hashtag from your timeline. Blocking prevents a user from interacting with you or seeing your posts.

Here's an example of how to use these, but hopefully you won't have to.

(Image: Jess Cherry, CC BY-SA 4.0)


There are several actions you can take against an account.

  • Mute: You don't see anything this user posts.
  • Block: You can't see them and they can't see you.
  • Report: They're breaking the rules, so you report them to their instance admin.
  • Block the domain: You can block the entire domain their account resides on.

Domain blocking is the most drastic action. There are some dark, scary, mean, rude, and just outright horrible domains out on the internet. Some of them may host a Mastodon instance. When you block an entire domain, nobody on that instance sees your posts, and you don't see posts from anyone using that server. This is useful when a group of people post stuff you're not interested in seeing. You don't have to block each person individually. You can just block their entire server. When lots of other Mastodon instances also block a domain, that instance is de-federated. This means that the domain is not connected to any other Mastodon instance, and is no longer part of a larger community. The users on that instance can still talk to one another, but that's all they can do.

This is a significant advantage of federated social media. And it's so appreciated by users that a few groups of people maintain lists of domains you can and should avoid. The listings also provide the reason that each domain is on the list.

You can find blocklists on GitLab and GitHub, and also on the Fediblock wiki page. It's always worth making sure to avoid malware, and generally horrible things, so the wiki page is useful.

Final notes

As with anything new, when you join Mastodon, there's a lot of exploring, reading instructions, and just trying to figure stuff out. In this case, there is a little work you have to do to curate your personalized feed. If you're up for doing the work, this little set of directions should be useful to you. Hope you enjoy your time floating in the Fediverse!


Packaging Python modules with wheels

hANSIc99 | Fri, 01/27/2023

Everyone who has been working with Python for a while has probably already come across packages. In Python terminology, packages (or distribution packages) are collections of one or more Python modules that provide specific functionality. The general concept is comparable to libraries in other languages. Some peculiarities of Python packages make dealing with them different.

Pip and PyPI

The most common way to install a third-party Python package is with the package installer pip, supplied by default with Python. The Python Package Index (PyPI) is the central server for packages of all kinds and the default source for pip. Python packages contain files that specify the package name, version, and other meta information. Based on those files, PyPI knows how to classify and index a package. In addition, those files may include installation instructions that pip processes.

Source and binary distribution

Python modules are distributed in several formats, each with pros and cons. In general, the formats can be divided into two groups.

Source distribution (sdist)

Source distributions are defined in PEP 517 and are gzipped tar archives with the file ending *.tar.gz. The archive contains all package-related source files and installation instructions. A source distribution often depends on a build system like distutils or setuptools, which causes code execution during installation. The execution of (arbitrary) code upon installation may raise safety concerns.

In the case of a Python C/C++ extension, a source distribution contains plain C/C++ files. These must be compiled upon installation, so an appropriate C/C++ toolchain must be present.

Built distributions (bdist)

In contrast, you can often use a built distribution as is. The idea behind built distributions is to provide a package format that doesn't introduce additional dependencies. When it comes to Python C/C++ extensions, a built distribution provides binaries ready for the user's platform.

The most widely used built distribution format is the Python wheel, specified in PEP 427.

Python wheels

Wheels are ZIP archives with the file ending .whl. A wheel may contain binaries, scripts, or plain Python files. If a wheel contains binaries of a C/C++ extension module, it indicates that by including its target platform in its filename. Pure Python files (.py) are compiled into Python byte code (.pyc) during the installation of the wheel.
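Because a wheel is nothing more than a ZIP archive, you can explore one with Python's standard zipfile module. The sketch below builds a toy in-memory archive that mimics a typical wheel layout (the member names here are made up for illustration) and lists its contents:

```python
import io
import zipfile

# A wheel is just a ZIP archive. Build a toy one in memory that mimics
# the usual layout: a compiled extension plus a .dist-info directory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as whl:
    whl.writestr("MyModule.cpython-39-x86_64-linux-gnu.so", b"")
    whl.writestr("MyModule-0.0.1.dist-info/METADATA", "Name: MyModule\nVersion: 0.0.1\n")
    whl.writestr("MyModule-0.0.1.dist-info/WHEEL", "Wheel-Version: 1.0\n")

# Reopen the archive and list its members, as you could with a real .whl file.
with zipfile.ZipFile(buf) as whl:
    names = whl.namelist()

print(names)
```

The last three lines work just as well on a real wheel: pass the path of a .whl file to zipfile.ZipFile and call namelist().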

If you attempt to install a package from PyPi using pip, pip always prefers a Python wheel over a source distribution. However, when pip cannot find a compatible wheel, it falls back to fetching the source distribution. As a package maintainer, it's good practice to provide both formats on PyPi. For a package user, wheels are advantageous over source distributions because of the safer installation process, their smaller size, and, as a result, faster installation time.

To address a wide range of users, the package maintainer must offer wheels for various platforms and Python versions.

In one of my previous articles, Write a C++ extension module for Python, I demonstrated how to create a Python C++ extension for the CPython interpreter. You can re-use the article's example code to build your first wheel.

Defining the build configuration with setuptools

The demo repository contains the following files, which contain meta information and a description of the build process:

pyproject.toml

[build-system]
requires = ["setuptools>=58"]
build-backend = "setuptools.build_meta"

The pyproject.toml file is the successor of setup.py since PEP 517 and PEP 518, and it is the entry point for the packaging process. The build-backend key tells pip to use setuptools as the build system.


setup.cfg

This file contains the static, never-changing metadata of the package:

[metadata]
name = MyModule
version = 0.0.1
description = Example C/C++ extension module
long_description = Does nothing except incrementing a number
license = GPLv3
classifiers =
    Operating System :: Microsoft
    Operating System :: POSIX :: Linux
    Programming

setup.py

This file defines the generic build process for the Python module. Every action that must be performed at installation time goes here.

Due to security concerns, this file should only be present if absolutely necessary.

from setuptools import setup, Extension

MyModule = Extension(
    'MyModule',
    sources = ['my_py_module.cpp', 'my_class_py_type.cpp'],
    extra_compile_args = ['-std=c++17']
)

setup(ext_modules = [MyModule])

This example package is actually a Python C/C++ extension, so it requires a C/C++ toolchain on the user's system to compile. In the previous article, I used CMake to generate the build configuration. This time, I'm using setuptools for the build process, because I faced challenges when running CMake inside a build container (I'll come back to that point later). The setup.py file contains all the information required to build the extension module.

In this example, setup.py lists the involved source files and some (optional) compile arguments. You can find a reference for the setuptools build configuration in the documentation.

Build process

To start the build process, open a terminal in the root folder of the repository and run:

$ python3 -m build --wheel

Afterward, find the subfolder dist containing a .whl file. For example:

MyModule-0.0.1-cp39-cp39-linux_x86_64.whl

The file name carries a lot of information. After the module name and version, it specifies the Python interpreter (CPython 3.9) and the target architecture (x86_64).
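The naming scheme is defined by PEP 427: {distribution}-{version}(-{build tag})?-{python tag}-{abi tag}-{platform tag}.whl. A small Python sketch (a hypothetical helper, not part of any packaging tool; it uses removesuffix, so it needs Python 3.9 or later) can pull those fields apart:

```python
def parse_wheel_name(filename: str) -> dict:
    """Split a wheel filename into its PEP 427 fields."""
    stem = filename.removesuffix(".whl")
    parts = stem.split("-")
    # The optional build tag sits between the version and the python tag,
    # so take the first two fields and the last three.
    distribution, version = parts[:2]
    python_tag, abi_tag, platform_tag = parts[-3:]
    return {
        "distribution": distribution,
        "version": version,
        "python": python_tag,
        "abi": abi_tag,
        "platform": platform_tag,
    }

print(parse_wheel_name("MyModule-0.0.1-cp39-cp39-linux_x86_64.whl"))
# {'distribution': 'MyModule', 'version': '0.0.1', 'python': 'cp39',
#  'abi': 'cp39', 'platform': 'linux_x86_64'}
```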

At this point, you can install and test the newly created wheel:

$ python3 -m venv venv_test_wheel/
$ source venv_test_wheel/bin/activate
$ python3 -m pip install dist/MyModule-0.0.1-cp39-cp39-linux_x86_64.whl

Image by:

(Stephan Avenwedde, CC BY-SA 4.0)

Now you have one wheel, which you can forward to someone using the same interpreter on the same architecture. This is the bare minimum, so I'll go one step further and show you how to create wheels for other platforms.

Build configuration

As a package maintainer, you should provide a suitable wheel for as many platforms as possible. Luckily, there are tools to make this easy for you.

Maintaining Linux compatibility

When building Python C/C++ extensions, the resulting binaries are linked against the standard libraries of the build system. This could cause some incompatibilities on Linux, with its various versions of glibc. A Python C/C++ extension module built on one Linux system may not work on another comparable Linux system due to, for example, the lack of a certain shared library. To avert such scenarios, PEP 513 proposed a tag for wheels that work on many Linux platforms: manylinux.

Building for the manylinux platform causes linking against a defined kernel and userspace ABI. Modules that conform to this standard are expected to work on many Linux systems. The manylinux tag developed over time, and in its latest standard (PEP 600), it directly names the glibc versions the module was linked against (manylinux_2_17_x86_64, for example).

In addition to manylinux, there is the musllinux platform (PEP 656), which defines a build configuration for distributions utilizing musl libc like Alpine Linux.

CI build wheel

The cibuildwheel project provides CI build configurations for many platforms and the most widely used CI/CD systems.

Many Git hosting platforms have CI/CD features built in. The project is hosted on GitHub, so you can use GitHub Actions as a CI server. Just follow the instructions for GitHub Actions and provide a workflow file in your repository: .github/workflows/build_wheels.yml.

Image by:

(Stephan Avenwedde, CC BY-SA 4.0)

A push to GitHub triggers the workflow. After the workflow has finished (note that it took over 15 minutes to complete), you can download an archive containing a wheel for various platforms:

Image by:

(Stephan Avenwedde, CC BY-SA 4.0)

You still have to upload those wheels manually if you want to publish them on PyPi. Using CI/CD, it's possible to automate the delivery process to PyPi as well. You can find further instructions in the cibuildwheel documentation.

Wrap up

The various formats can make packaging Python modules an opaque process for beginners. Knowledge about the different package formats, their purposes, and the tools involved in the packaging process is necessary for package maintainers. I hope this article sheds some light on the world of Python packaging. In the end, by using a CI/CD build system, providing packages in the advantageous wheel format becomes a breeze.


Image by:

WOCinTech Chat. Modified by CC BY-SA 4.0

Python CI/CD

How to add margin notes to a LibreOffice document

Thu, 01/26/2023 - 16:00
Jim Hall

I use LibreOffice Writer on Linux to write my documentation, including client proposals, training materials, and books. Sometimes when I work on a very technical document, I might need to add a margin note to a document, to provide extra context or to make some other note about the text.

LibreOffice Writer doesn't have a "margin note" feature but instead implements margin notes as frames. Here is how I add margin notes as frames in a LibreOffice document.

Set page margins accordingly

If you want to use margin notes in a document, you'll need a wider margin than the standard 1-inch on the left and right sides of the page. This will accommodate placing the frame completely in the margin, so it becomes a true margin note. In my documents, I don't want the margin to become too wide, so I usually increase the left and right page margins to something like 1.25" when I need to use margin notes.

You can set the page margin by using the Styles pop-out selection box. If you don't see the Styles selection box, you can activate it by selecting View > Styles in the menus. The default keyboard shortcut for this is F11.

Image by:

(Jim Hall, CC BY-SA 4.0)

Select the Page style, and right-click on the Default Page Style entry to modify it.

Image by:

(Jim Hall, CC BY-SA 4.0)

You can use this dialog box to change the page size and the margins. Where you might normally set a 1-inch margin on the left, right, top, and bottom for your documents, instead set the margins to 1.25" on the left and right, and 1-inch on the top and bottom. The extra space on the left and right will provide a little extra room for the page margins without adding too much blank space.

Image by:

(Jim Hall, CC BY-SA 4.0)

When I'm writing a print edition of a book, I need to prepare the document for double-sided printing, with Left and Right pages. That means my materials need to use the Left Page and Right Page styles for the body pages, and the First Page style for the cover page. So after I modify the document's default page style, I need to do the same for the First Page, Left Page, and Right Page styles. The first time you edit these page styles, they will inherit the page size from the default, so you only need to modify the page margins to your preferred style. For documents with Left and Right pages, you might instead provide the 1.25" margin only on the "outside" margins: the right side for Right pages, and the left side for Left pages.

Add a note in a frame

Wherever you need to add a margin note, insert a frame using the Insert > Frame > Frame… menu action. This brings up a dialog box where you can edit the settings of the frame.

Image by:

(Jim Hall, CC BY-SA 4.0)

Set the width of the frame to the width of the outside page margin, which you earlier set to 1.25". Click the box for Mirror on even pages, then set the horizontal position to Outside for the Entire page. Change the anchor to To character, which locks the margin note to where it appears in the document, then set the vertical position to Top for the Character, which starts the margin note on the same line where you insert the frame. I also select Keep inside text boundaries so my frame stays within the printed page.

Image by:

(Jim Hall, CC BY-SA 4.0)

Under the Wrap tab, set the spacing to zero on the left, right, top, and bottom.

Image by:

(Jim Hall, CC BY-SA 4.0)

Finally, select the Borders tab and remove the border by clicking the No Borders preset. Unselect the Synchronize check box, and set the left and right padding to .25" and the top and bottom padding to zero. The left and right padding is important: it provides some extra white space around your margin note so it isn't jammed right up against the text body, and it isn't too close to the page edge.

[NOTE: Missing image for frame borders]

Click OK to add the empty frame to your document, anchored to the character position where you inserted the frame.

Image by:

(Jim Hall, CC BY-SA 4.0)

The frame padding changes the effective width of the margin note. In this example, you set the outside page margin and frame width to 1.25" and .25" for left and right padding within the frame. That leaves .75" for the margin note itself. This narrow space could be too tight for very long margin notes, depending on what text you need to write here. If you find you need more room for your margin notes, you might instead set the outside page margin and frame width to 1.5" which gives 1-inch margin notes.
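As a quick sanity check of that arithmetic, here is a throwaway Python helper (hypothetical; LibreOffice is not involved) that computes the usable note width from the frame width and padding:

```python
def note_width(frame_width: float, padding: float = 0.25) -> float:
    """Usable width (in inches) left for text inside a margin-note frame."""
    return frame_width - 2 * padding

print(note_width(1.25))  # 0.75
print(note_width(1.5))   # 1.0
```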

Once you've added the frame, you can click inside the frame to type your margin note.

Image by:

(Jim Hall, CC BY-SA 4.0)

Change the look with Styles

When you use frames to add margin notes, LibreOffice uses the Frame Contents paragraph style for your text. You can modify this style to change how margin notes are displayed.

By default, the Frame Contents style uses the same font, font style, and font size as your Default Paragraph Style. To change just the characteristics of the margin note, right-click on the Frame Contents style in the Styles selection box, and click Modify. This brings up a new dialog box where you can change the style of the margin notes.

Image by:

(Jim Hall, CC BY-SA 4.0)

In my printed documents, I use Crimson Pro at 12-point size as my default paragraph font. For my margin notes, I prefer to reduce the font size to 10-point and change the style to italics. This makes the margin note stand apart from the rest of the text, yet remains easy to read in print.

Image by:

(Jim Hall, CC BY-SA 4.0)

Copy and paste to create new frames

Inserting a frame as a margin note involves several steps, and it can be a pain to repeat each of them whenever I need to create a new margin note. I save a few steps by copying an existing margin note frame, and pasting it into my document at a new position. Updating the margin note requires selecting the text within the frame and typing a new note.

Because you set up the margin note to appear on the outside of the page area, the copied and pasted margin note will correctly appear in the left margin on Left pages, and in the right margin on Right pages.

Adding margin notes in a document is easy with LibreOffice. With margin notes, you can provide notes for your reader, such as extra context, errata, or pointers to other material. Used sparingly, margin notes can be a welcome addition to your documents.



LibreOffice

Celebrating the 2023 Community Choice authors

Wed, 01/25/2023 - 16:00
AmyJune

Often our first interaction with open source is through community knowledge bases.

This past year, I have had the fantastic opportunity to work with the many authors here (and to bring in some new ones!). I am fortunate enough to meet with our Correspondents program authors weekly and to see some authors at in-person and virtual conferences. We are diverse in our knowledge, locations, backgrounds, and uniquely lived experiences.

Each January, we celebrate the community of authors here. So (drumroll please), I'm pleased to present the People's Choice Award winners for 2022!

The methodology

The community votes with page views and engagement.

This year, the editorial team considered the top 12 categories by readership to find the most notable non-staff authors for each category.

Discover more about the authors and their many contributions by selecting their avatar.

2023 People's Choice Award for Linux: Alison Chaiken Anamika David Both Tom Oliver Jim Hall Jonathan Garrido Kenneth Aaron Paul Sahana Sreeram Tomasz Waraksa
2023 People's Choice Award for Programming: Jessica Cherry Alex Borsody Chris Hermansen Hunter Coleman Jayashree Huttanagoudar Joël Krähemann Martin Kopec Richard Conn Howard Fosdick Sergio Mijatovic
2023 People's Choice Award for Accessibility: Amar Gandhi Ayush Sharma Blake Bertuccelli Chris Emezue Peter Cheer Phil Shapiro Rikard Grossman-Nielsen Sachin Samal Vojtech Polasek
2023 People's Choice Award for Rust: Gaurav Kamathe Marty Kalin Nitish Tiwari
2023 People's Choice Award for Education: Anita Ihuman Candace Sheremeta Don Watkins Gordon Haff Joshua Pearce Sara Cemazar Stefan Miklosovic
2023 People's Choice Award for Git: Agil Antony Alan Formy-Duval Benny Ifeanyi Iheagwara Dwayne McDaniel Evan Slatis Jonathan Daggerhart Manaswini Das Vaishnavi R
2023 People's Choice Award for Alternatives: Ben Slater Heike Jurzik Laryn Kragt Bakker Manuela Massochin Michael Korotaev Tim Erickson
2023 People's Choice Award for Kubernetes / Containers: Bobby Gryzynger Camilla Conte Daniel Oh Kendall Nelson Michael Anello Nived Velayudhan Noaa Barki Randy Fay Will Kelly Yasu Katsuno
2023 People's Choice Award for Python: Alena Gerasimova Andy Oram Fridolin Pokorny Mark Meyer Moshe Zadka Sofiia Tarhonska Stephan Avenwedde
2023 People's Choice Award for Raspberry Pi: Brian McCafferty DJ Billings Giuseppe Cassibba Martin Anderson-Clutz Peter Czanik
2023 People's Choice Award for Sustainability: Alberto Corsín Jiménez Christopher Snider Hannah Smith Ron McFarland Tobias Augspurger Tom Greenwood
2023 People's Choice Award for Automation: Antoine Craske Florian Robert Kevin Sonney Maximilian Kolb Nicolas Mengin Rom Adams Servesha Dudhgaonkar Sumantro Mukherjee
2023 People's Choice Award for Career / Business: Ben Rometsch Jesse White Josh Salomon Kelsea Zhang Mike Bursell Shebuel Inyang Suzanne Dergacheva Thabang Mashologu Trishna Victor Coisne
2023 People's Choice Award for Community Management: Angie Byron Ben Cotton Bolaji Ayodeji John E. Picozzi Paloma Oliveira Ray Paik Rich Bowen Rizel Scarlett Ruth Cheesley Sean P. Goggins
2023 People's Choice Honorable Mentions: Dave Stokes Joshua Allen Holm Josip Almasi Liv Erickson Lorna Mitchell Toni Freger


Thank you to all contributors!

We want to thank all of the community who shared articles, made comments, liked posts, and helped us curate our awards. 

Our authors donate their time and knowledge to the open source community as a whole. For more authors and great articles, check out a few of our other favorite categories:

On our 13th anniversary, we are grateful to honor our community of authors.

community

What to read next: Congratulations to the 2022 Community Award recipients; Top 50 authors: Community Awards 2021

Count magical bunnies with LibreOffice Calc

Wed, 01/25/2023 - 16:00
Jim Hall

I love working with spreadsheets, and my favorite spreadsheet application is LibreOffice Calc. A spreadsheet is a grid of cells in which columns are identified by letters and rows by numbers. You can perform all kinds of calculations using a spreadsheet: if a value can be computed from other values, you can compute it in a spreadsheet.

Here I illustrate how to use the LibreOffice Calc spreadsheet to perform a particular calculation called the Fibonacci Sequence. Fibonacci Sequence numbers pop up everywhere in mathematics and the sciences and are often used to model a simple population growth.

The magic bunny

Imagine a baby bunny who has moved into a new forest home. The forest is empty of all other bunnies; the bunny is alone. But this is a magic bunny — it is born pregnant, and all its children will also be born pregnant. Rabbits breed quickly, but especially so for this breed of magic bunny, which produces a new generation every year.

Let's call the year before the bunny arrived "year zero" or "iteration zero," when you had zero bunnies. A year later, you are at "year one" or "iteration one," with our first bunny.

The population of our magic bunny grows in this way: A baby bunny grows into an adult bunny after one year. An adult bunny remains in the next generation and also produces a baby bunny. In other words, the rules for counting the bunny population are:

  • baby bunny (b) → adult bunny (A)

  • adult bunny (A) → adult bunny (A) plus a baby bunny (b)

Over time, the bunny population grows like this:

Iteration   Population   Count
    0           -          0
    1           b          1
    2           A          1
    3           Ab         2
    4           AbA        3

As you can see, the bunny population grows very quickly. The forest will quickly be filled with magic bunnies.
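The two replacement rules can be simulated with a short Python sketch (just an illustration, not part of the spreadsheet exercise) that reproduces the table above:

```python
def next_generation(population: str) -> str:
    # b -> A (a baby grows up); A -> Ab (an adult stays and has a baby)
    return "".join("A" if bunny == "b" else "Ab" for bunny in population)

population = "b"  # iteration 1: the first bunny arrives
for iteration in range(1, 5):
    print(iteration, population, len(population))
    population = next_generation(population)
# 1 b 1
# 2 A 1
# 3 Ab 2
# 4 AbA 3
```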

Counting bunnies in a spreadsheet

How many bunnies will there be after five, ten, or 20 years? Looking at each iteration, the number of bunnies in any year is the sum of the previous two years. Fibonacci described this growth using this definition:

Fib(n) = Fib(n-1) + Fib(n-2)

and:

Fib(0) = 0
Fib(1) = 1
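The same definition translates directly into a few lines of Python, which is handy for cross-checking the spreadsheet results:

```python
def fib(n: int) -> int:
    """Iterative Fibonacci: Fib(0) = 0, Fib(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(2), fib(3), fib(4))  # 1 2 3
print(fib(20))                 # 6765
```

fib(20) returning 6,765 matches the bunny count the spreadsheet produces after 20 iterations.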

You can calculate this using LibreOffice Calc! Here's how.

Start with an empty spreadsheet and enter the first two iterations: 0 and 1. Label these with a column header called "n." To enter this into LibreOffice Calc, type "n" into cell A1, the value 0 in cell A2, and 1 in cell A3:

Image by:

(Jim Hall, CC BY-SA 4.0)

Enter the first two iterations of the magic bunny. In year zero, there were zero bunnies. In year one, there was one bunny. You can write that mathematically as Fib(0) = 0 and Fib(1) = 1. You can label this with a column header called "Fib(n)." Enter this into LibreOffice Calc by typing "Fib(n)" into cell B1, the number 0 in cell B2, and 1 in cell B3:

Image by:

(Jim Hall, CC BY-SA 4.0)

To calculate the bunny population in the next iteration, use a spreadsheet formula to calculate Fib(n) for year two. Since the count in any year is the sum of the counts of the previous two years, you can enter into cell B4 the spreadsheet formula =B3+B2. This is the sum of B3 and B2 using addition. LibreOffice Calc will perform the calculation and enter the final result into cell B4:

Image by:

(Jim Hall, CC BY-SA 4.0)

Use AutoFill to calculate future generations

Having entered the first few "n" and "Fib(n)" values, and the calculation for the next iteration, you can let LibreOffice Calc do the rest of the calculations. Notice that when you click on each cell in the spreadsheet, the cell outline has a small box in the lower-right corner. You can grab this box with your mouse and "stretch" the box to fill other cells in the spreadsheet.

When you stretch a cell to fill other cells, LibreOffice Calc uses a feature called "AutoFill" to enter values into the new cells. If you stretch a cell with a single value, AutoFill increments the number by one for each new cell in the series. For example, you can stretch cell A3 to fill the cells below it, which fills the range with 2, 3, 4, and so on:

Image by:

(Jim Hall, CC BY-SA 4.0)

If you stretch a cell that has a calculation in it, LibreOffice Calc will try to extend the calculation for you. For example, if you stretch your calculation in B4 into cell B5, the new B5 will contain the formula =B4+B3. In other words, AutoFill translates the calculation. Each successive Fib(n) calculation will be the sum of the two cells above it:

Image by:

(Jim Hall, CC BY-SA 4.0)

You can continue to stretch the cells down, and AutoFill will continue the calculations:

Image by:

(Jim Hall, CC BY-SA 4.0)

So you learn that after 20 iterations, you will have 6,765 magic bunnies. That's a lot of bunnies! More than 6,000 bunnies in 20 years represents very fast population growth, and it demonstrates how quickly the Fibonacci Sequence can add up to a forest full of magic bunnies. The calculation is relatively simple using LibreOffice Calc.


Image by:

Flickr, by Anssi Koskinen CC BY 2.0

LibreOffice Education

Why sysadmins should license their code for open source

Wed, 01/25/2023 - 16:00
dboth

As a Linux system administrator, I write a fair amount of code.

Does that surprise you? Well, I strive to be the "lazy" sysadmin, and I do this, in part, by reducing the number of repetitive tasks I need to do by automating them. Most of my automation started as little command-line programs. I store those in executable scripts for re-use at any time on any Linux host for which I have responsibility.

The problem with sharing unlicensed code

I also like to share my code. I think that, if my code has helped me to resolve a problem, it can also help other sysadmins resolve the same or similar problems. This is, after all, the essence of open source — the sharing of code. While code sharing is a good thing, legal issues can prevent perfectly good code from being used as intended by the developer.

The primary problem is that many companies have legal departments that require them to keep copies of licenses. This makes it easy to understand their rights and obligations. Software without a license attached to it in any way becomes a legal liability. This is because there is no basis upon which to determine whether the code can be used legally or not. This can prevent perfectly good code from being used by many companies and individuals.

Why I license my code

One of the best ways I know to give back to the open source community that provides everyone with incredible programs like the GNU Utilities, the Linux kernel, LibreOffice, WordPress, and thousands more, is to make programs and scripts open source using an appropriate license.

Writing a program, believing in open source, and agreeing that programs should be open source does not, by itself, make your code open source. As a sysadmin, I do write a lot of code, but how many of you ever consider licensing your own code?

You have to make the choice to explicitly state that your code is open source, and decide which license you want it to be distributed under. Without this critical step, the code you create is functionally proprietary. This means the community cannot safely take advantage of your work.

You should include the GPLv2 (or your other preferred) license header statement as a command-line option that prints the license header to the terminal. When distributing code, I also recommend that you include a text copy of the entire license along with the code (it's a requirement of some licenses).

A few years ago, I read an interesting article, The source code is the license that helps to explain the reasoning behind this.

I find it very interesting that in all of the books I have read, and all of the classes I have attended, not once did any of them tell me to include a license for the code I wrote in my tasks as a sysadmin. All of these sources completely ignored the fact that sysadmins write code too. Even in the conference sessions on licensing I've attended, the focus was on application code, kernel code, and even GNU-type utilities. None of the presentations so much as hinted that you should consider licensing it in any way.

Perhaps you've had a different experience, but this has been mine. At the very least, this frustrates me — at the most it angers me. You devalue your code when you neglect to license it. Many sysadmins don't even think about licensing, but it's important if you want your code to be available to the entire community. This is neither about credit nor money. This is about ensuring that your code is, now and always, available to others in the best sense of "free and open source."

Eric Raymond, author of the 2003 book, The Art of Unix Programming writes that in the early days of computer programming and especially in the early life of Unix, sharing code was a way of life. In the beginning, this was simply reusing existing code. With the advent of Linux and open source licensing, this became much easier. It meets the needs of system administrators to be able to legally share and reuse open source code.

Raymond states, "Software developers want their code to be transparent. Furthermore they don't want to lose their toolkits and their expertise when they change jobs. They get tired of being victims, fed up with being frustrated by blunt tools and intellectual-property fences and having to repeatedly reinvent the wheel.” This statement also applies to sysadmins — who are also, in fact, software developers.

How I license my code

I mentioned adding an option to print the GPL (or other) license header as a command line option. The code below is a procedure that does so:

#############################################
# Print the GPL license header              #
#############################################
gpl()
{
   echo
   echo "############################################################"
   echo "# Copyright (C) 2023 David Both                            #"
   echo "#                                                          #"
   echo "#                                                          #"
   echo "# This program is free software; you can redistribute it  #"
   echo "# and/or modify it under the terms of the                  #"
   echo "# GNU General Public License as published by the           #"
   echo "# Free Software Foundation; either version 2 of the        #"
   echo "# License, or (at your option) any later version.          #"
   echo "#                                                          #"
   echo "# This program is distributed in the hope that it will be  #"
   echo "# useful, but WITHOUT ANY WARRANTY; without even the       #"
   echo "# implied warranty of MERCHANTABILITY or FITNESS FOR A     #"
   echo "# PARTICULAR PURPOSE. See the GNU General Public License   #"
   echo "# for more details.                                        #"
   echo "#                                                          #"
   echo "# You should have received a copy of the GNU General       #"
   echo "# Public License along with this program; if not, write    #"
   echo "# to the Free Software Foundation, Inc., 59 Temple Place,  #"
   echo "# Ste 330, Boston, MA 02111-1307 USA                       #"
   echo "############################################################"
   echo
} # End of gpl()

That's the license, included as a function. You can add an option to the code. I like to place the new case sections in alphabetical order to make them a bit easier to find when performing maintenance. This is the GPL, so I chose g as the short option:

#########################################
# Process the input options             #
#########################################
# Get the options
while getopts ":ghc" option; do
   case $option in
      c) # Check option
         Check=1;;
      g) # display the GPL header
         gpl
         exit;;
      h) # Help function
         Help
         exit;;
     \?) # incorrect option
         echo "Error: Invalid option."
         exit;;
   esac
done


These two bits of code are all that's needed to add a legitimate and enforceable license to your program.

Final thoughts

I always license all of my code. I recently presented a session on Bash at OLF (Open Libre Free, which used to be known as Ohio Linux Fest). Instead of using LibreOffice Impress for my presentation, I used a Bash program for the entire thing. After all, it is a presentation about Bash so why not make it a Bash program.

I did include my license code in that Bash program. That way everyone who encounters a copy of my program knows that it's properly licensed under GPLv3, and that they can use and modify it under the terms of that license. My code is the license.

One of the best ways I know to give back to the open source community that provides everyone with incredible programs is to make programs and scripts open source using an appropriate license.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Update your edge devices with this open source bootloader

Tue, 01/24/2023 - 16:00
nicolas_rabault_luos

Making updates to edge and embedded systems has historically been a painful process. Often, this involves working with multiple microcontrollers from different brands with different capabilities. Usually, each has its own custom bootloader, so each board must be updated one by one with its own specific firmware.

Another common issue is updating the system without physical access to the board.

Luos has developed an open source generic bootloader that addresses these issues by updating all the boards of your system through one connection to your device and without requiring physical access to the other boards. It can work for every microcontroller unit (MCU) covered by the Luos library. It allows for flexibility and adaptability in edge and embedded systems development, making managing and updating distributed systems easier. This article explains how the Luos bootloader works.

Advantages of open source on edge and embedded devices

Updating edge and embedded systems can be challenging, especially when it comes to maintaining stability and reliability. These systems are often used in critical applications, and any disruption to their functioning can have serious consequences.

Open source projects are released under a license that allows anyone to use, modify, and distribute the software freely. This collaborative approach to software development has several advantages, particularly when it comes to updating edge and embedded systems.

One of the main benefits of using open source projects is that they are constantly being improved and updated by a community of developers. Any bugs or issues will likely be detected and fixed quickly, ensuring that the software remains stable and reliable. Additionally, open source projects often have extensive documentation, making it easier to understand how the software works and how to integrate it into your system.

Luos, like so many other open source projects, has built an active community of developers and users to help everyone exchange information and knowledge. If you try Luos, you're not alone. You can get help and share your experience with others.

Use a flexible bootloader

One way to update an edge or embedded system is to use a bootloader. Bootloaders can update the binaries running on each board of your system (at Luos, we call each board a "node").

This can be useful if you want to improve your system's capabilities, fix a bug, or make it more adaptable based on your increasing needs. In the context of open source, choosing a bootloader that is part of an open source project means taking advantage of the collaborative work of many experienced developers from the same field as the users.

Another advantage of open source projects (and significantly for a bootloader feature) is that they are usually well-tested thanks to a wide user base. They have often been used in a variety of different environments.

A proprietary bootloader works with the system it was designed for but is unusable with other devices. Most of the time, developers don't write their own bootloaders, and they update individual boards one at a time. In very few cases does a bootloader work with more than one board or brand.

Two central concerns typically arise when using a bootloader: its adaptability to the boards used and its ease of use.

At Luos, we've chosen to develop a bootloader that allows developers to update any board, whatever the microcontroller provider, based on its hardware abstraction layer (HAL). If your system is composed of ST, Raspberry Pi, and Arduino boards, you can update all the binaries of the system without updating each board individually.


With the Luos open source bootloader, it's no longer necessary to dismantle your system each time you need an update. You don't have to physically access each board to update it individually.

While many frameworks offer flexible solutions for their own products (allowing you to update them remotely by WiFi, for example), few projects provide compatibility with other brands. The strength of Luos is that it's compatible with many MCUs. A developer doesn't have to create several specific bootloaders for a system that uses boards with different capabilities.

By using a bootloader with this type of flexibility, a developer can easily configure the bootloader to fit the firmware and system requirements. By adding a WiFi gateway, you can update an entire system remotely, regardless of the board's brand. This is a considerable time and cost savings, especially with large-scale projects.

Using a flexible bootloader is an effective way to update an edge or embedded system. Of course, it's important to test your updates before deploying them in a production environment, and with Luos, you can focus your efforts on code tests rather than bootloader tests.

Wrap up

Updating edge and embedded systems can be complex, but using open source projects can make it easier. Open source projects are developed and maintained by a large community of volunteers, which means they are constantly being improved and updated. They also have extensive documentation and are often well-tested, making them compatible with a wide range of hardware and software configurations.

If you want to know more about how the Luos bootloader works, visit the project's technical guide.

Luos's open source bootloader updates all the boards of your edge or embedded system through one physical connection to your network without requiring physical access to the other boards.


What you need to know about software bills of materials

Tue, 01/24/2023 - 16:00
zvr

Modern software development is incredibly complex. Software nowadays is always composed of a combination of components. These components are typically modules and libraries called by other code, or even standalone programs used in conjunction with other programs.

Until a few years ago, the 80/20 rule was valid: in any significant piece of software, 80% of the content should not be yours. It makes no economic sense to try to develop more than 20% of any software because it's likely someone has already built components with the necessary functionality. Instead, focus on developing what gives you a competitive advantage. In recent years, this balance might have even shifted to 90/10.

That's where the software bill of materials (SBOM) comes in. It's a formal record containing details and supply chain relationships of all the components used in building software. These components can be open source or proprietary, freely available or paid-for, widely available or access-restricted. The information present in an SBOM can be used in a multitude of ways, helping answer various contractual, legal, or technical queries about the software.

Early efforts for providing SBOMs were mostly spearheaded by the desire for legal compliance. Every software component is under a specific license, which might impose some obligations on its use. In order to be legally compliant, one must satisfy all the obligations of all the licenses. This is straightforward, but not easily accomplished. An obvious first step is to have a record of all components and all licenses, which is exactly what an SBOM is.

However, in the past couple of years, as a result of software supply chain attacks, the driving force behind SBOM adoption and the need to know the exact components inside each piece of software has been security. SBOMs are now expected to accompany some types of software delivery. For example, the United States Executive Order (EO) 14028 advises US government agencies to start requiring SBOMs for any hardware or software product they acquire.

What is a software bill of materials (SBOM)?

At a conceptual level, an SBOM is like a simple table of contents: it's a comprehensive list of software components, with information on name, version, and origin, and possibly additional information about licensing, vulnerabilities, provenance, or any other areas of interest. Because it can be easily understood, this information can be expressed in several formats: as a table, as a text document, as a spreadsheet, and so on. For the information to be useful, the same format should be understood and agreed upon by both parties in an exchange.

Software Package Data Exchange (SPDX)

More than ten years ago, a group of interested individuals representing various companies started working on the problem of defining a common, standardized format that they called Software Package Data Exchange (SPDX). Everyone agreed that this standard should not be a competitive advantage for any specific company, so the work progressed following the open source principles completely, with open participation by anyone who wanted to contribute.

SPDX is an open standard for communicating SBOM information. Last year it was ratified as the international standard ISO/IEC 5962:2021. The SPDX specification is produced collaboratively by a large number of participants, organized into working groups according to their interests and expertise. Intel has been an active participant in many groups since the beginning, such as the technical team defining the SPDX specification, the legal team working on the SPDX License List, and the outreach team promoting the use of SPDX.

The approach taken by SPDX is that the information present in an SBOM should be factual. For example, it simply records the license declared for each software component and avoids legal interpretations of license terms or obligations. Another important characteristic of SPDX is that the information can be encoded in a variety of formats, like pure text with minimal structure, JSON, XML, RDF, and even spreadsheets.
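To make the tag-value flavor concrete, here is what a minimal SPDX document looks like. The package details below are invented purely for illustration:

```shell
# Write a minimal SPDX 2.3 document in the tag-value format.
# All names, versions, and URLs here are illustrative placeholders.
cat > minimal.spdx <<'EOF'
SPDXVersion: SPDX-2.3
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: minimal-example
DocumentNamespace: https://example.com/spdx/minimal-example
Creator: Person: Jane Developer
Created: 2023-01-24T00:00:00Z

PackageName: libexample
SPDXID: SPDXRef-Package-libexample
PackageVersion: 1.2.3
PackageDownloadLocation: https://example.com/libexample-1.2.3.tar.gz
PackageLicenseDeclared: MIT
EOF
```

Each field is a simple Tag: value pair; the same information could equally be serialized as JSON, XML, or RDF.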


The structure of an SPDX document is hierarchical. In addition to information relevant to the document itself, like author and date, the information is presented at levels of increasing granularity, corresponding to packages, files, or snippets. Almost all the information at every level is optional, so one can generate an SBOM giving a general view or one containing information in excruciating detail. The flexibility of the format makes it ideal for any number of real-world use cases. For example, a recipient of an SBOM might only be interested in security vulnerability information, while another might care about which licenses the different components are under and the legal obligations they impose.

A number of tools can handle SPDX documents. Depending on the functionality and the precise point in the software supply chain where the tool operates, one can have a full taxonomy of tools. For example, the SPDX document might be produced while software is being built or it might be generated afterward by analyzing the software already built. Other tools consume this information and can analyze, transform, compare, or merge SPDX documents.

Working groups are currently designing the next major release version. SPDX version 3 is a major effort, restructuring the SBOM information into modular, compartmentalized sections. This will make it possible, for example, to have an SBOM with special emphasis on security and vulnerability information and less content on licensing details. Given the ever-increasing use cases for SBOMs, this modular approach is expected to result in more widespread adoption.

Intel is planning to introduce SBOMs to accompany its software offerings in 2023. Meanwhile, its members will also continue to actively participate in the effort of defining new versions of SPDX.

This post is based on a recent talk given at the South Tyrol Free Software Conference (SFScon), where the video and slides are available.

Get to know the concepts of software bills of materials (SBOMs) and the basic elements defined in Software Package Data Exchange (SPDX).



Create your own website with Joomla!, an open source CMS

Mon, 01/23/2023 - 16:00
Shane Barker

Joomla! is among the leading open source content management systems (CMS) for publishing web content. It's user friendly, accessible, extensible, responsive, and multilingual. What's more, it's also search engine optimized. No wonder Joomla! has a 3.5% share of the content management system market.

In this article, I'll introduce you to Joomla! and why I think it's an excellent choice for your website or online application.

Joomla! is open source

A CMS lets you build and manage a website without coding. In fact, you can use a CMS even if you don't know how to code at all. With a good CMS, you can quickly create, modify, manage, and publish content. You can also customize the design and functionality of the website using pre-built templates and extensions. If you're looking to set up a site for e-commerce, for example, Joomla! is one of the easiest options available.

Here are a few features:

  • Content Management: Create and publish content through a simple web interface.
  • User Management: Create user accounts with varying levels of authority so that any number of people can contribute or help maintain your site.
  • Media Manager: Enables users to upload media to the site through the web interface.
  • Templates: Change the look of your site. Templates are available from many third-party sources and are easy to apply.
  • Banner Management: Use standardized banners to advertise special deals or upcoming events to site visitors.
  • RSS: Allows visitors to subscribe to your website and get updates when you publish new content.
Joomla! Extensions Directory (JED)

The Joomla! Extensions Directory (JED) is a neatly organized directory of Joomla! extensions. It's the official Joomla! website that lets you explore hundreds of extensions available as plug-ins, modules, and components.

All extensions on the website are organized by category and rating and include user reviews. This also makes it easy for you to compare different extensions before starting. Additionally, you can search the JED by keyword, category, tag, or simply browse to explore the various options.

Each category is notated with the number of available extensions in it. Hover your mouse over an individual category to see a list of subcategories to refine your search.


Install Joomla!

The first step is installing Joomla! There are several ways you can go about this.

If you want to launch your website within minutes, you can use Joomla! online. However, this process has certain limitations. You must renew your website every 30 days to keep it available. This feature is often used for demonstrations or to try Joomla! and is not preferred for a permanent website.

The second option is a manual installation. This involves downloading Joomla! for free and installing it on your web host's server. This process requires you to perform the following steps:

  • Create an SQL database for your website.
  • Upload all the Joomla! files to your website's root directory.
  • Run the Joomla! configuration wizard, available at your site's URL.
  • Once complete, configure the database.
  • Install the sample data and configure your email.
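As a rough sketch, the database step above boils down to SQL like the following, run against your host's MySQL or MariaDB server. The database name, user name, and password are placeholders you should change:

```shell
# Write the database-creation SQL to a file (all names are placeholders):
cat > joomla-setup.sql <<'EOF'
CREATE DATABASE joomla_db CHARACTER SET utf8mb4;
CREATE USER 'joomla_user'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON joomla_db.* TO 'joomla_user'@'localhost';
EOF

# Then run it and unpack the downloaded Joomla! archive into the web root:
# mysql -u root -p < joomla-setup.sql
# unzip Joomla_*-Stable-Full_Package.zip -d /var/www/html/
```

The exact archive name and web root vary by release and by host, so check your hosting provider's documentation before running the commented commands.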

Create content on Joomla!

After the installation, access your Joomla! website by logging in with your admin credentials. Once you log in, you're directed to your website's control panel.


This process probably feels familiar if you've used WordPress, Drupal, or a similar CMS. Click the New Article link in the control panel to get started posting content. This opens an editor where you can compose an article.


You can write the content in the main window and use the options available in the above rows to format and edit the text, add new elements, add links to the content, and so on.

Once you've written your article, you can save and publish it to the website. And to do that, you must return to the control panel and look at the options on the right-hand side. You can assign the article a category, add some tags, and determine its visibility levels. Finally, click the Save button, and view the article on your website.

Add extensions to your website

The next step entails expanding the site's functionality by adding Joomla! extensions. Extensions are collections of code you can install on your website to enjoy additional features. With these extensions, even new developers can launch a fully functional Joomla! website with ease.

The Joomla! Extensions Directory offers various categories, enabling you to choose from several extensions.


Once you've chosen a specific extension, click on it to open its main page. This main page offers additional information and provides a download link. You can click on the link and save the extension to your computer. To install it on your site, return to the control panel and click the Install Extensions link on the left.


Clicking on this button takes you to the Extensions page.


Drag and drop the ZIP file containing the extension onto the page. Alternatively, you can click on the Or browse for file button. Once you have downloaded and installed the extension, you can configure it and use it to enhance your site's functionality.

Use templates to change your website's appearance

Joomla! websites have a standardized look and feel. However, you can change this appearance by using smart Joomla! templates. These templates modify the appearance and layout of your site. Just like extensions, there are lots of templates to choose from. Joomla! lets you use more than one template on your site.

Once you have chosen a template for your site, download it as a ZIP file. To install it, return to your site's control panel and install the new template just like you installed an extension: Drag and drop the template's ZIP file or select it from a file chooser.

Clicking on the Templates button in the control panel takes you to the template manager. This screen displays all installed templates, and it's here that you can change between templates.

Wrap up

Joomla! is an excellent and user-friendly CMS platform that lets you easily build a powerful website. The platform is relatively straightforward to get started with, and it has plenty of documentation. There's also an active Joomla! community that can help you along the way. If you're thinking of starting a website, try Joomla!

Get to know the many features of the Joomla! open source content management system.



3 predictions for open source in confidential computing

Mon, 01/23/2023 - 16:00
Dpal

It's a new year, which means it's time to predict what the next year will bring regarding future tech trends. After guessing the World Cup champion, I feel confident sharing my personal perspective on the confidential computing market in the near future.

What is confidential computing?

Confidential computing is the practice of isolating sensitive data and the techniques used to process it. This is as important on your laptop, where your data must be isolated from other applications, as it is on the cloud, where your data must be isolated from thousands of other containers and user accounts. As you can imagine, open source is a significant component for ensuring that what you believe is confidential is actually confidential. This is because security teams can audit the code of an open source project.

Confidential computing is a big space. When I talk about confidential computing, I first think of workloads running inside trusted execution environments (TEE). There are several categories of such workloads:

  • Off-the-shelf products provided by a vendor
  • Products built by a third party that need to be adapted and integrated into the customer environment
  • Applications built and run by companies in support of their business
Off-the-shelf security products

Applications in this category already exist and are expected to mature over the course of the year. The number of these applications is also expected to grow. Examples include hardware security modules (HSM), security vaults, encryption services, and other security-related applications whose nature makes them a natural first choice for adopting confidential computing. While these applications exist, they constitute a fraction of the potential workloads that can run inside a TEE.

Third-party enablement applications

Workloads in this category are the ones built by software vendors for other customers. They require adaptation and integration for use. A vendor who makes this kind of software isn't a security vendor, but instead relies on security vendors (like Profian) to help them adapt their solutions to confidential computing. Such software includes AI software trained on customer data, or a database holding customer data for secure processing.

Homemade applications

These applications are built by customers for their internal use, leveraging assistance and enablement from confidential computing vendors.

Developing confidential computing technology

I suspect that third-party and homemade applications have similar dynamics. However, I expect more progress in the third-party enablement segment, and here is why.

In the past year, there was a great deal of discovery and educational activity. Confidential computing is now better known, but it has yet to become a mainstream technology. The security and developer communities are gaining a better understanding of confidential computing and its benefits. If this discovery trend continues this year, more outlets, such as conferences, magazines, and publications, may recognize the value of confidential computing and offer more airtime for talks and articles on the subject.

Prediction #1: Pilot programs

The next phase after discovery is creating a pilot. Profian is seeing more interest among different vendors to move forward in building solutions and products that consciously target execution within trusted environments. This year, I expect to see a lot of pilot programs. Some of them can become production ready within the year. And some can pave the way for production-ready implementation next year.

Greater visibility of confidential computing and a better understanding of the technology and its value generate further interest. In addition, the success of pilots, and of actual products and services based on confidential computing platforms, is sure to generate more.

Over the years, companies have collected and stored a lot of data about their business. When analyzed with analytics and AI, this data helps companies improve business operations. They can also offer new or improved services and products to customers. Some of the data and models are valuable and need to be handled with security in mind. That's an ideal use case for confidential computing.

Companies looking to put their data to good use should start asking questions about security. This eventually leads them to discover confidential computing. From there, they can express interest in leveraging trusted environments to do computation. This, in turn, encourages the companies (in the third-party category above) that provide products in this space to consider putting some of their products and offerings into confidential computing. I don't expect to see drastic changes in this area during this year. I do anticipate a shift in mindset toward recognizing the value of confidential computing and how it can help on a greater scale.

Prediction #2: Hardware and confidential computing

This year, I expect new hardware chips supporting confidential computing from different vendors and architectures. The hardware ecosystem is growing and that should continue this year. This gives more options to consumers, but also creates more requirements for hardware-agnostic solutions.

Prediction #3: Open standards

Finally, multiple security vendors are working on different deployment and attestation solutions. As those solutions mature, the need for some kind of interoperability is expected. Efforts for standardization are underway. But this year is likely to bring more pressure for projects to agree upon standardization and rules for interoperability.

Open source in confidential computing

Open source is key in confidential computing. The Enarx project provides a runtime environment based on WebAssembly. This allows deploying a workload into a TEE in an architecture- and language-agnostic way. With the general awareness trends I've described above, I expect more engineers to join the open source ecosystem of confidential computing projects. This year, more developers might contribute to all elements of the stack, including the kernel, WebAssembly, Rust crates and tools, and Enarx itself.

Maybe one of those developers is you. If so, I look forward to collaborating with you.

Confidential computing is becoming more widely known by security and developer communities. Look out for these key trends in 2023.



How Linux rescued precious audio files with FFmpeg

Sun, 01/22/2023 - 16:00
Don Watkins

Recently I was asked by a client to create compact discs of priceless family recordings. My client insisted that the media be delivered as compact discs, not as digital files on an MP3 player or other similar device. One of the source recordings was on a compact disc in AIFF format, so my client could not play this media, which contained her husband's voice. I was able to convert it using Audacity and then burn it to a compact disc with Brasero, which has been my go-to CD creation tool.

The rest of the audio files were in MP3 format. I was able to create compact discs with Brasero very quickly. There was, however, one file so large that it exceeded the capacity of the compact disc medium. This large file contained nearly two hours of audio, while the capacity of an audio compact disc is roughly 74 to 80 minutes.

This presented a problem. How could I split the large file into smaller segments that would fit on media my client could use? I decided to use a DVD instead of a compact disc. Using a DVD provided me with a much larger capacity disc, but how could I convert the MP3 files to a format suitable for creating a DVD? I tried using HandBrake but was unable to convert MP3 to MP4 format, because MP4 expected a video stream and I had no video. Then I discovered that I could use FFmpeg to convert the files.
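As an aside, FFmpeg can also do the splitting itself. Here is a hedged sketch using the segment muxer to cut a long MP3 into fixed-length chunks without re-encoding; the file name and segment length are placeholders:

```shell
# Define a helper that splits an MP3 into fixed-length segments.
# -c copy avoids re-encoding, so the split is fast and lossless.
split_audio() {
    local input="$1" seconds="${2:-3600}"   # default: one-hour chunks
    ffmpeg -y -loglevel error -i "$input" \
        -f segment -segment_time "$seconds" -c copy \
        "${input%.mp3}_part_%03d.mp3"
}

# Example: split_audio recording.mp3 3600
```

Each resulting part fits comfortably on its own disc, at the cost of the recording being spread across several of them.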

Convert media files with FFmpeg

If you're looking for a powerful tool to help you with your audio and video files, look no further than FFmpeg. FFmpeg is highly versatile and able to support an impressive range of popular formats like MP3, MP4, and AVI. You can also use it to convert files between different formats, which was very useful in my case.

You can easily install FFmpeg on your Linux system in a terminal on Fedora and similar distributions:

$ sudo dnf install ffmpeg

On Debian and similar distributions:

$ sudo apt install ffmpeg

According to its man page, "FFmpeg is a very fast video and audio converter that can also grab from a live audio and video source. It can also convert between arbitrary sample rates and resize video on the fly with a high-quality polyphase filter." FFmpeg has excellent documentation in addition to an extensive man page.

The command-line interface of this tool might seem daunting for newcomers, but this is exactly what makes it so powerful. Developers and system administrators can easily write scripts to automate complex tasks. If you make the most of this feature, you can streamline your workflow like a pro.
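As a small example of that kind of scripting, here's a hedged sketch that batch-converts every MP3 in the current directory to 44.1 kHz WAV files (the directory and file names are placeholders):

```shell
# Batch-convert all MP3 files in the current directory to 44.1 kHz WAV.
batch_to_wav() {
    local f
    for f in ./*.mp3; do
        [ -e "$f" ] || continue            # nothing to do if no MP3s exist
        ffmpeg -y -loglevel error -i "$f" -ar 44100 "${f%.mp3}.wav"
    done
}

batch_to_wav
```

Wrap the same loop around any other FFmpeg invocation, and an afternoon of one-off conversions becomes a single command.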

Using the command-line interface, I was able to convert the MP3 file to the required MP4 format using the following command:

$ ffmpeg -f lavfi -i color=c=black:s=1280x720:r=5 \
  -i audio.mp3 \
  -crf 0 -c:a copy -shortest output.mp4

The -f lavfi option sets a virtual input device as the source of the video stream. Essentially, this creates a video file (which is what a video DVD requires) instead of an audio file. The audio file I actually care about gets included thanks to the -i audio.mp3 option. The video that gets created is a black screen, as defined by the color=c=black virtual source in the first -i option.



I ran into a snag using Brasero with this new MP4 file. Brasero would not create a DVD without a couple of additional GStreamer codecs. From some quick research, I found another open source DVD creation program called DevedeNG that had everything I needed built in. After installing DevedeNG, I was able to create the DVD media in 20 minutes. Your time may vary, depending on your computer system. DevedeNG is licensed under GPLv3.

Solving problems with open source

FFmpeg is licensed under the GNU Lesser General Public License, with some optional components under the GPL. FFmpeg is always evolving! The project is actively maintained and updated on a regular basis. This gives you the latest features, improvements, and bug fixes, so you can rest easy knowing your audio and video formats are supported.

Another way I could have resolved the space issue was by burning the MP3 audio as data files onto the DVD, leveraging the 4.7 GB of space available on a single-layer DVD for just the audio data. The DVD would have basically been, in that scenario, a hard drive. You'd insert it into your computer and listen to the MP3 file through music player software.

The way I ended up burning the audio DVD created a media DVD, which is recognized by either a computer or a DVD player. Because there's an "empty" video stream (it's actually not empty, it has black pixels in it), DVD players recognize the media as a movie. This means that when you listen to the audio track, you're actually watching a blank video with accompanying audio.

There's no right or wrong way to solve these puzzles. What's important is that you know how to get to the place you need to be. The goal was to preserve audio that was of particular importance to my client, and open source made it possible.

FFmpeg is a highly versatile tool that supports a range of popular formats like MP3, MP4, and AVI. You can also use it to convert files between different formats.

Image by:

WOCinTech Chat. Modified by CC BY-SA 4.0

Linux Audio and music

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Reflecting on my first Linux conference

Sat, 01/21/2023 - 16:00
skuchere

I was left in awe when I first became aware of Linux and other open source software in 2007. Could it be that there is software that people lovingly contribute to, update, and document, and it remains free? I attended the Ohio Linux Fest (now called Open Libre Fest) in Cleveland, Ohio, in 2008 and was absolutely astounded.

I was not only an attendee of the conference, but I also volunteered to help ensure it ran smoothly. I had the job of signaling the speakers, so they knew when they were running close to the end of their window of time. It was a unique experience, as I was not only a spectator but also worked behind the scenes.


The insight I gained into the world of open source software was invaluable, and it got me hooked on the world of free software. I got to listen to various speakers talk about their projects. They were all unique projects, yet also very forward-thinking in their ideas and fantastic in their application.

Getting involved with open source

My favorite topic was Gcompris, an application designed to teach children how to use a computer running the Linux OS. The graphics were bold, colorful, and clearly intended to get children (hey, and adults, too) involved in the UI. There was also a very inspirational speaker who spoke about the future of Linux and how it would advance to make all software free to share, modify, and improve. Fourteen years later, I think Linux is well on its way to its promise of free everything!

Another thing I noticed at the conference was the diverse groups of people in attendance, all there to learn about Linux. The convention guests were a genial group, whether you knew everything about Linux or nothing at all. This community needs every sort of skill set, and everyone was accepted regardless of previous Linux experience.

What was your first technical conference? Which ones are you planning on attending in the coming year?

Attending (and working) my first Linux conference was an eye-opening experience in the world of open source.

Image by:

CC BY 3.0 US Mapbox Uncharted ERG

Linux

A guide to Java for loops

Fri, 01/20/2023 - 16:00
sethkenlon

In programming, you often need your code to iterate over a set of data to process each item individually. You use control structures to direct the flow of a program, and the way you tell your code what to do is by defining a condition that it can test. For instance, you might write code to resize an image until all images have been resized once. But how do you know how many images there are? One user might have five images, and another might have 100. How do you write code that can adapt to unique use cases? In many languages, Java included, one option is the for loop. Another option for Java is the Stream method. This article covers both.

For loop

A for loop takes a known quantity of items and ensures that each item is processed. An "item" can be a number, a table containing several entries, or any Java data type. The loop itself is essentially a counter, defined by:

  • Starting value of the counter

  • Stop value

  • The increment by which you want the counter to advance

For instance, suppose you have three items, and you want Java to process each one. Start your counter (call it c) at 0 because Java starts the index of arrays at 0. Set the stop value to c < 3 or c <= 2. Finally, increment the counter by 1. Here's some sample code:

package com.opensource.example;

public class Main {
    public static void main(String[] args) {
        String[] myArray = {"zombie", "apocalypse", "alert"};

        for (int i = 0; i < 3; i++) {
            System.out.printf("%s ", myArray[i]);
        }
    }
}

Run the code to ensure all three items are getting processed:

$ java ./
zombie apocalypse alert

The conditions are flexible. You can decrement instead of increment, and you can increment (or decrement) by any amount.

The for loop is a common construct in most languages. However, most programming languages are actively developed, and certainly, Java is always improving. There's a new way to iterate over data, and it's called Java Stream.

Java Stream

The Java Stream interface is a feature of Java providing functional access to a collection of structured data. Instead of creating a counter to "walk" through your data, you use a Stream method to query the data:

package com.opensource.example;

import java.util.Arrays;

public class Example {
    public static void main(String[] args) {
        // create an array
        String[] myArray = new String[]{"plant", "apocalypse", "alert"};

        // put array into Stream
        Stream<String> myStream =;

        myStream.forEach(e -> System.out.println(e));
    }
}

In this sample code, you import the library for access to the Stream construct, and java.util.Arrays to move an array into the Stream.

In the code, you create an Array as usual. Then you create a stream called myStream and put the data from your array into it. Those two steps are broadly typical of Stream usage: You have data, so you put the data into a Stream so you can analyze it.

What you do with the Stream depends on what you want to achieve. Stream methods are well documented in Java docs, but to mimic the basic example of the for loop it makes sense to use the forEach or forEachOrdered method, which iterates over each element in the Stream. In this example, you use a lambda expression to create a parameter, which is passed to a print statement:

$ java ./
plant
apocalypse
alert

When to use Stream

Streams don't really exist to replace for loops. In fact, Streams are intended as ephemeral constructs that don't need to be stored in memory. They're intended to gather data for analysis and then disappear.

For instance, watch what happens if you try to use a Stream twice:

package com.opensource.example;

import java.util.Arrays;

public class Example {
    public static void main(String[] args) {
        // create an array
        String[] myArray = new String[]{"plant", "apocalypse", "alert"};

        // put array into Stream
        Stream<String> myStream =;

        // first use of the Stream
        myStream.forEach(e -> System.out.println(e));

        // second use of the same Stream
        long myCount = myStream.count();
        System.out.println(myCount);
    }
}

This won't work, and that's intentional:

$ java ./
Exception in thread "main" java.lang.IllegalStateException: stream has already been operated upon or closed

More on Java What is enterprise Java programming? An open source alternative to Oracle JDK Java cheat sheet Red Hat build of OpenJDK Free online course: Developing cloud-native applications with microservices Fresh Java articles

Part of the design philosophy of Java Stream is that you bring data into your code. You then apply filters up front rather than writing a bunch of if-then-else statements in a desperate attempt to find the data you actually want, process the data, and then dispose of the Stream. It's similar to a Linux pipe, in which you send the output of an application to your terminal only to filter it through grep so you can ignore all the data you don't need.

Here's a simple example of a filter to eliminate any entry in a data set that doesn't start with an a:

package com.opensource.example;

import java.util.Arrays;

public class Example {
    public static void main(String[] args) {
        String[] myArray = new String[]{"plant", "apocalypse", "alert"};

        Stream<String> myStream =
            .filter(s -> s.startsWith("a"));

        myStream.forEach(e -> System.out.println(e));
    }
}

Here's the output:

$ java ./
apocalypse
alert

For data you intend to use as a resource throughout your code, use a for loop.

For data you want to ingest, parse, and forget, use a Stream.


Iteration is common in programming, and whether you use a for loop or a Stream, understanding the structure and the options you have means you can make intelligent decisions about how to process data in Java.

For advanced usage of Streams, read Chris Hermansen's article Don't like loops? Try Java Streams.

This article covers how to use the Java for loop and the Java Stream method.

Image by:

Pixabay. CC0.

Java

Community thinking patterns and the role of the introducer-in-chief

Thu, 01/19/2023 - 16:00
Ron McFarland

I recently studied some research by Dave Logan, Bob King, and Halee Fischer-Wright, who looked at what I call productive and counterproductive communities. Community is an important open organization principle. These researchers define it as a group of 20 to 150 people who know each other enough to say hello on the street and influence or impact each other. They give suggestions on guiding people out of counterproductive communities and relationships and into productive ones through introductions to people who have gone through that process.

Their study suggests many of the same collaboration concepts I talked about in my article on the book Team of Teams: New Rules of Engagement for a Complex World. The context between the two studies is very different, but they both come to similar conclusions about the flow of communication (here again, the open organization principle is widely applied).

The researchers believe that cultures determine a common dominant language, topics of conversation, feelings, and behavior. It's what determines the environment you live and work in. You are either energizing or draining people of their energy (consuming their energy and motivation). A group eventually expels those that speak a different language or behave unacceptably.

According to the researchers, the effectiveness of a community is based on five stages of culture:

  1. Counterproductive to the members themselves and their surrounding society.
  2. Barely productive.
  3. Generally productive.
  4. Very productive.
  5. Extremely productive.

Each stage represents a specific way that a community thinks, behaves, and speaks.

#1 is the most pessimistic environment, and #5 is the most optimistic. As a leader, your job is to encourage members to leave #1 and join #5. I discuss community cultural thinking patterns #1-3 in part one of this article and thinking patterns #4-5 in part two. I also include a fictional account of a person moving through all five patterns.

What community members talk about and how they speak to each other determines which of the five community thinking patterns they are in. Larger communities could have several sub-community thinking patterns in play at the same time.

What is community leadership?

When you're a community leader, you have the respect and trust of the community. You are guiding the community, and you must consider two main aspects of it.

The first aspect is the ever-changing forward and backward behavior of each member. What are you doing to move the purpose of the community forward (not backward)? Your full attention is on measurable performance, not simple beliefs or attitudes. Leaving the community might be a good option if you cannot improve the community's greater good purpose.

The second aspect is how you communicate and speak within the culture or community. The researchers believe each of the five cultural thinking patterns has its own language and way of speaking and behaving. That culture must pass through each of the five thinking patterns toward more productivity. The language and behavior you encourage determine advancement. Your goal is to improve the language and move the behavior in a more productive direction.

As you guide the community, the community also guides you. It can be either supportive or resistant. This interaction determines the pace of community advancement and forces you to adapt where appropriate or simply to leave the community altogether.

With that understanding, you can introduce members to others that want to be more productive and those that have gone through similar development issues. You redirect questions to continue discussions and turn praise toward the community members themselves.

Behavior and outwardly or quietly spoken attitudes

With those thoughts, according to the researchers, a community leader listens to the language within the culture and attempts to move the community to the next higher cultural thinking pattern.

As a leader, you must discourage one type of behavior and language, replacing it with another. If a member is unwilling to change either language or conduct, that community member might have to be removed from the community.

These community thinking patterns can be seen through current behaviors and what people say. They are not fixed situations but evolve over time. The researchers have learned that behaviors can go up and down and continually move in strength and intensity when closely observed (never, rarely, occasionally, often, always).

Community thinking pattern #1 environment

According to the researchers, the people in this thinking pattern are alienated from others but believe that "misery loves company," and this is their family. "Life is miserable," but everyone is in it together. The community's feeling and behavior expresses despair and hostility. Members are abusive and could be violent to others and even self-destructive.

In a business setting, these people could steal quietly from their company and feel no shame when caught.

In terms of collaboration, they feel so alienated from everyone that they don't interact with people at all.

Feelings within early community thinking pattern #1

Image by:

(Ronald McFarland, CC BY-SA 4.0)

This person believes all bad behavior is justified, and everyone acts this way.

What is the job of the community introducer-in-chief? According to the researchers, you have to identify people who want to improve their situation and are willing to work on it. Then, you convince these people that they have the choice to improve themselves and their situation. You might introduce them to a community member who used to be in similar circumstances.

Community thinking pattern #2 environment

Image by:

(Ronald McFarland, CC BY-SA 4.0)

These community members (and collective culture) think that others have power they lack. They feel trapped and that their life is terrible. In the researchers' words, "my life is miserable" expresses the feelings of these members.

These people form groups that complain about their environment and community. They feel like victims and therefore do the minimum to get by. In a business setting, these people do what the job requires and no more. They are good at pretending to be busy. They believe they can't be creative in their work, so they're disengaged. In terms of collaboration, they feel separated from anyone in authority.

First, identify this behavior and language (usually, some variation on "my life is miserable"). As the introducer-in-chief, you can direct the person to individuals who may be helpful, particularly others who have recently struggled with their own life challenges but have been successful and want to continue progressing.

Encourage one-on-one interactions between those two people, so the two can find a way forward. The leader should encourage the person to do this repeatedly with many people, as there is never one path, and everyone is different. If people have gone through similar problems, they are often very happy to mentor others.

Through these one-on-one discussions, a vision forward can surface. People can find their strengths through these discussions. Also, other support resources might come to mind. Even skill development activities that address weaknesses can be explored. Together they can consider easy, quick-reward projects that achieve something and build skills and confidence along the way. These projects should be self-paced to ensure they are energizing and not exhausting.

If you're successful, the person starts to use positive language and exhibits skills others don't have. A small sense of superiority may even develop in specific areas when compared with other people.

Feelings within early community thinking pattern #2

In the new environment, this person sees potential, not yet fully realized but getting nearer. A person fears going backward in early community thinking pattern #2. They do one of these things:

  • Find a thinking pattern #2 community and stay there.
  • Adjust their attitude.
  • Revert to their previous bitter lifestyle and environment.

The job of the community introducer-in-chief is to identify people who are willing to work to improve their situation. Next, introduce those people to individuals who have struggled with the same feelings and environment but have learned ways to be personally successful.

Feelings within middle community thinking pattern #2

These people tend to attract other thinking pattern #2 people, share a similar attitude of being outside the main decision-making action, and jointly complain about something keeping them from advancing. They accept that as unavoidable and like to mock their bad bosses.

Feelings within late community thinking pattern #2

These people feel life is miserable but badly want to move to a better life. They think having an unsatisfying life is temporary and believe it can improve. From this point, it's possible to move to the next thinking pattern.

Community thinking pattern #3 environment

These people (and their collective culture) are individually high performers and are extremely skilled in their area of specialty. They work in one-to-one relationships to get others to help them personally advance.

They are not particularly concerned with others or the community overall. In the researchers' words, "I'm great" (and indirectly, "you're not") is the predominant feeling. You often hear "I," "me," and "my" from these people, according to the researchers.

In relationships with other people, these community members consider others as competitors. Sometimes, they hide attacks on others through humor. In any situation, their goal is to control and dominate others. They attract community thinking pattern #2 people that are just willing to take orders and do no more.

Regarding collaboration, they only discuss issues individually with their personal goals in mind. Furthermore, they tend to hoard information and are not inclusive or transparent.

According to the researchers, a common complaint heard from these people is that they never have enough time to get things done, and they want to achieve far more than they currently do. Also, they consider many around them uncooperative, poorly skilled, or unmotivated. After identifying this behavior and how the person speaks, the introducer-in-chief's action plan should encourage the person to work in larger groups (three-person groupings on a given project) and not one-on-one only. Introduce like-minded people to each other, and enable them to help one another.

Encourage the person to work on projects they cannot achieve alone but could with the right community. They should work on projects that require partnerships.

The researchers believe these community members must be convinced that their talent can only get them so far. Through working with others, far more can be achieved. The increased power comes not from added ability but from wide-ranging networks that have both knowledge and usable skills.

This network development can be achieved by being more transparent with what is known. This approach stimulates others to be transparent and to reach for greater achievements.

Feelings within early community thinking pattern #3

According to the researchers, these people found abilities they could develop. Working on those skills alone, they get praise for their progress. They feel there is more to be done and that only they can do it successfully.

Secretly, they're still uncertain about how well they perform, feeling they have to prove themselves to others (and themselves). Furthermore, they feel they are only worth as much as their performance. On good days, they feel confident. On bad days, they feel stuck, powerless, and out of control (which leads back to feelings of community thinking pattern #2).

They are the most comfortable working with people with the same talent, skills, and drive as they have.

A key difference between community thinking patterns #2 and #3 is a passion for personal success and self-reliance, as opposed to a feeling of powerlessness.

Learn about open organizations Download resources Join the community What is an open organization? How open is your organization?

Feelings within middle community thinking pattern #3

There's a transition to the middle of community thinking pattern #3 that happens when people find a group that accepts them for their gifts and skills. Sometimes, it is a mentor who offers this respect that is needed. Over time, a more solid community support system stabilizes with others with similar skills, performance, and professional drive.

These people find others with similar skills in the same specialty or equivalent skills in other specialties. Generally, the group has a similar level of professional confidence. This leads to a mild level of competition or one-upmanship among them. They respect each other within a competitive atmosphere.

They expect others to have the same drive as they have and are disappointed when others don't put in the same effort. They will likely transition into management at this point, warranting their own staff.

Feelings within late community thinking pattern #3

The late stage of community thinking pattern #3 level is when a person maintains a one-on-one relationship with other workers (often their staff) and has a strong management style. They give orders and confirm that tasks have been completed. They discourage staff from discussing things with each other without a leader in attendance.

Image by:

(Ronald McFarland, CC BY-SA 4.0)

These managers always complain that there is not enough time to do what needs to be done. They are either doing things directly or running around supervising countless staff activities.

The community introducer-in-chief's job is to identify people who want to improve performance and work with other people. As the introducer-in-chief, ask managers to let go of control. Introduce managers to individuals with skills they can partner with. Start by introducing two people to each other, and then step aside and let those two work together.

It's at this point that the principles of the Open Organization come into play.

Patterns #4 and #5

Part 2 of this article discusses cultural thinking patterns #4 and #5. It also provides an example of how introducers can influence an individual and lead them toward the next more positive community.

The introducer-in-chief introduces two people to each other, and then steps aside and lets those two work together. That's when the principles of the Open Organization come into play.

The Open Organization

How to fix an IndexError in Python

Thu, 01/19/2023 - 16:00
vijaytechnicalauthor

If you use Python, you may have encountered the IndexError error in response to some code you've written. The IndexError message in Python is a runtime error. To understand what it is and how to fix it, you must first understand what an index is. A Python sequence, such as a list or a tuple, has an index. The index of an item is its position within the sequence. To access an item in a list, you use its index. For instance, consider this Python list of fruits:

fruits = ["apple", "banana", "orange", "pear", "grapes", "watermelon"]

This list's highest index is 5, because an index in Python starts at 0:

  • apple: 0
  • banana: 1
  • orange: 2
  • pear: 3
  • grapes: 4
  • watermelon: 5
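You can confirm these positions in Python itself. Here's a quick sketch (my own illustration, not part of the article's session) using the list's index() method:

```python
fruits = ["apple", "banana", "orange", "pear", "grapes", "watermelon"]

# index() returns the position of the first matching item
print(fruits.index("apple"))       # 0
print(fruits.index("pear"))        # 3
print(fruits.index("watermelon"))  # 5
```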

Suppose you need to print the fruit name pear from this list. You can use a simple print statement, along with the list name and the index of the item you want to print:

>>> fruits = ["apple", "banana", "orange", "pear", "grapes", "watermelon"]
>>> print(fruits[3])

What causes an IndexError in Python?

What if you use an index number outside the range of the list? For example, try to print the index number 6 (which doesn't exist):

>>> fruits = ["apple", "banana", "orange", "pear", "grapes", "watermelon"]
>>> print(fruits[6])
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
IndexError: list index out of range

As expected, you get IndexError: list index out of range in response.

How to fix IndexError in Python

The only solution to fix the IndexError: list index out of range error is to ensure that the item you access from a list is within the range of the list. You can accomplish this by using the range() and len() functions.

The range() function outputs sequential numbers, starting with 0 by default, and stopping at the number before the specified value:

>>> n = range(6)
>>> for i in n:
...     print(i)
...
0
1
2
3
4
5

The len() function, in the context of a list, returns the number of items in the list:

>>> fruits = ["apple", "banana", "orange", "pear", "grapes", "watermelon"]
>>> print(len(fruits))
6

List index out of range

By using range() and len() together, you can prevent index errors. The len() function returns the length of the list (6, in this example.) Using that length with range() becomes range(6), which returns items at index 0, 1, 2, 3, 4, and 5.

fruits = ["apple", "banana", "orange", "pear", "grapes", "watermelon"]

for i in range(len(fruits)):
    print(fruits[i])

apple
banana
orange
pear
grapes
watermelon

Fix IndexError in Python loops

If you're not careful, index errors can happen in Python loops. Consider this loop:


>>> fruits = ["apple", "banana", "orange", "pear", "grapes", "watermelon"]
>>> n = 0
>>> while n <= len(fruits):
...     print(fruits[n])
...     n += 1
...
apple
banana
orange
pear
grapes
watermelon
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
IndexError: list index out of range

The logic seems reasonable. You've defined n as a counter variable, and you've set the loop to occur until it equals the length of the list. The length of the list is 6, but its highest index is 5 (because Python starts its index at 0). The condition of the loop is n <= 6, so the while loop still runs when the value of n reaches 6:

  • When n is 0 => apple
  • When n is 1 => banana
  • When n is 2 => orange
  • When n is 3 => pear
  • When n is 4 => grapes
  • When n is 5 => watermelon
  • When n is 6 => IndexError: list index out of range

When n is equal to 6, Python produces an IndexError: list index out of range error.

More Python resources What is an IDE? Cheat sheet: Python 3.7 for beginners Top Python GUI frameworks Download: 7 essential PyPI libraries Red Hat Developers Latest Python articles

Solution

To avoid this error within Python loops, use only the < ("less than") operator, stopping the while loop at the last index of the list. This is one number short of the list's length:

>>> fruits = ["apple", "banana", "orange", "pear", "grapes", "watermelon"]
>>> n = 0
>>> while n < len(fruits):
...     print(fruits[n])
...     n += 1
...
apple
banana
orange
pear
grapes
watermelon

There's another way to fix this, too, but I leave that for you to discover.

No more Python index errors

The ultimate cause of IndexError is an attempt to access an item that doesn't exist within a data structure. Using the range() and len() functions is one solution, and of course keep in mind that Python starts counting at 0, not 1.
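If you'd rather handle the error than prevent it, Python's exception handling works too. This is a sketch of one possible defensive pattern (my own illustration; the get_fruit function is hypothetical, not from the article):

```python
fruits = ["apple", "banana", "orange", "pear", "grapes", "watermelon"]

def get_fruit(index):
    # catch the IndexError instead of preventing it
    try:
        return fruits[index]
    except IndexError:
        return None

print(get_fruit(3))  # pear
print(get_fruit(6))  # None (out of range, but no crash)
```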

Follow this Python tutorial to learn how to solve an IndexError.

Image by:

Yuko Honda on Flickr. CC BY-SA 2.0

Python

5 ways to use the Linux terminal to manage your files

Wed, 01/18/2023 - 16:00
sethkenlon

A terminal is an application that provides access to the user shell of an operating system. Traditionally, the shell is the place where the user and the OS could interface directly with one another. And historically, a terminal was a physical access point, consisting of a keyboard and a readout (a printer, long ago, and later a cathode ray tube), that provided convenient access to a mainframe. Don't be fooled by this "ancient" history. The terminal is as relevant today as it was half a century ago, and in this article, I provide five common file management tasks you can do with nothing but shell.

1. Open a terminal and look around

Today, everyone's got a computer on their desk or in their bag. The mainframe-and-terminal model is now essentially emulated through an application. Your operating system might have a unique name for it, but generically it's usually known as a "terminal" or "console".

  • Linux: Look for Console, Konsole, or Terminal. Regardless of the name, you can usually launch it from your application menu using the key word "terminal."

  • macOS: The default terminal application isn't open source and is widely considered lacking in features. Download iTerm2 to get a feature-rich, GPLv2 replacement.

  • Windows: PowerShell is Microsoft's open source shell, but it uses a language and syntax all its own. For this article to be useful on Windows, you can install Cygwin, which provides a POSIX environment.

Once you have your terminal application open, you can get a view of your file system using the command ls.

ls

2. Open a folder

In a graphical file manager, you open a folder by double-clicking on it. Once it's open, that folder usually dominates the window. It becomes your current location.

In a terminal, the thought process is slightly different. Instead of opening a folder, you change to a location. The end result is the same: once you change to a folder, you are "in" that folder. It becomes your current location.

For example, say you want to open your Downloads folder. The command to use is cd plus the location you want to change to:

cd Downloads

To "close" a folder, you change out of that location. Taking a step out of a folder you've entered is represented by the cd command and two dots (..):

cd ..

You can practice entering a folder and then leaving again with the frequent use of ls to look around and confirm that you've changed locations:

$ cd Downloads
$ ls
$ cd ..
$ ls
Documents Downloads Music Pictures Videos
$ cd Documents
$ ls
$ cd ..
$ ls
Desktop Documents Downloads Music Pictures Videos

Repeat it often until you get used to it!

The advanced level of this exercise is to navigate around your files using a mixture of dots and folder names.

Suppose you want to look in your Documents folder, and then at your Desktop. Here's the beginner-level method:

$ cd Documents
$ ls
zombie-apocalypse-plan-C.txt
zombie-apocalypse-plan-D.txt
$ cd ..
$ ls
Desktop Documents Downloads Music Pictures Videos
$ cd Desktop
$ ls
zombie-apocalypse-plan-A.txt

There's nothing wrong with that method. It works, and if it's clear to you then use it! However, here's the intermediate method:

$ cd Documents
$ ls
zombie-apocalypse-plan-C.txt
zombie-apocalypse-plan-D.txt
$ cd ../Desktop
$ ls
zombie-apocalypse-plan-A.txt

You effectively teleported straight from your Documents folder to your Desktop folder.

There's an advanced method of this, too, but because you now know everything you need to deduce it, I leave it as an exercise for you. (Hint: It doesn't use cd at all.)

3. Find a file

Admit it, you sometimes misplace a file. There's a great Linux command to help you find it again, and that command is appropriately named find:

$ find $HOME -iname "*holiday*"
/home/tux/Pictures/holiday-photos
/home/tux/Pictures/holiday-photos/winter-holiday.jpeg

A few points:

  • The find command requires you to tell it where to look.

  • Casting a wide net is usually best (if you knew where to look, you probably wouldn't have to use find), so I use $HOME to tell find to look through my personal data as opposed to system files.

  • The -iname option tells find to search for a file by name, ignoring capitalization.

  • Finally, the "*holiday*" argument tells find that the word "holiday" appears somewhere in the filename. The * characters are wildcards, so find locates any filename containing "holiday", whether it appears at the beginning, middle, or end of the filename.
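To see these options in action without hunting through your real files, you can build a small practice tree and search it. The paths below are made up for the example; the -type f option (a standard find option not mentioned above) limits results to files only:

```shell
# Build a practice tree that mirrors the example output above
mkdir -p /tmp/demo/Pictures/holiday-photos
touch /tmp/demo/Pictures/holiday-photos/winter-holiday.jpeg

# -iname ignores case, so "*HOLIDAY*" still matches winter-holiday.jpeg;
# -type f excludes the holiday-photos folder itself from the results
find /tmp/demo -iname "*HOLIDAY*" -type f
# prints /tmp/demo/Pictures/holiday-photos/winter-holiday.jpeg
```

The counterpart -type d restricts results to directories, which is handy when you remember a folder's name but not a file's.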

The output of the find command is the location of the file or folder you're looking for. You can change to a folder using the cd command:

$ cd /home/tux/Pictures/holiday-photos
$ ls
winter-holiday.jpeg

You can't cd to a file, though:

$ cd /home/tux/Pictures/holiday-photos/winter-holiday.jpeg
cd: Not a directory

4. Open a file

If you've got a file you want to open from a terminal, use the xdg-open command:

$ xdg-open /home/tux/Pictures/holiday-photos/winter-holiday.jpeg

Alternately, you can open a file in a specific application:

$ kate /home/tux/Desktop/zombie-apocalypse-plan-A.txt

5. Copy or move a file or folder

The cp command copies a file, and the mv command moves it. You can copy or move a file by providing its current location, followed by its intended destination.

For instance, here's how to move a file from your Documents folder to its parent directory:

$ cd Documents
$ ls
zombie-apocalypse-plan-C.txt
zombie-apocalypse-plan-D.txt
$ mv zombie-apocalypse-plan-C.txt ..
$ cd ..
$ ls
Desktop Documents Downloads Music Pictures Videos zombie-apocalypse-plan-C.txt
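The cp command uses the same source-then-destination order, but leaves the original in place. A sketch using throwaway folders and filenames invented for the example (the -r option, standard to cp, is what you need when copying a whole folder):

```shell
# Set up a throwaway folder with one file in it
mkdir -p /tmp/copy-demo/Documents /tmp/copy-demo/backup
cd /tmp/copy-demo/Documents
touch zombie-apocalypse-plan-D.txt

# Copy the file; unlike mv, the original stays where it was
cp zombie-apocalypse-plan-D.txt ../backup/
ls ../backup    # shows zombie-apocalypse-plan-D.txt

# Copying a folder and its contents requires the -r (recursive) option
cp -r ../backup ../backup-2
```

Without -r, cp refuses to copy a directory, which protects you from accidentally duplicating a huge folder tree.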


While moving or copying a file, you can also rename it. Here's how to move a file called example.txt out of its current directory and give it the new name old-example.txt:

$ mv example.txt ../old-example.txt

You don't actually have to move a file from one directory to another just to rename it:

$ mv example.txt old-example.txt

Terminal for files

The Linux desktop has a lot of file managers available to it. There are simple ones, network-transparent ones, and dual-panel ones. There are ones written for GTK, Qt, ncurses, and Swing. Big ones, small ones, and so on. But you can't talk about Linux file managers without talking about the one that's been there from the beginning: the terminal.

The terminal is a powerful tool, and it takes practice to get good at it. When I was learning the terminal, I used it for what I could, and then opened a graphical file manager for advanced operations I hadn't yet learned to do in the terminal. If you're interested in learning how to use a terminal, there's no time like the present, so get started today!

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.