opensource.com

Subscribe to opensource.com feed
Updated: 1 hour 3 min ago

5 Git configurations I make on Linux

Alan Formy-Duval | Thu, 09/22/2022

Setting up Git on Linux is simple, but here are the five things I do to get the perfect configuration:

  1. Create global configuration
  2. Set default name
  3. Set default email address
  4. Set default branch name
  5. Set default editor

I manage my code, shell scripts, and documentation versioning using Git. This means that for each new project I start, the first step is to create a directory for its content and make it into a Git repository:

$ mkdir newproject
$ cd newproject
$ git init


There are certain general settings that I always want. Not many, but enough that I don't want to have to repeat the configuration each time. I like to take advantage of the global configuration capability of Git.

Git offers the git config command for manual configuration, but this involves a lot of typing and comes with certain caveats. For example, a common item to set is your email address. You can set it by running git config user.email followed by your email address. However, this only takes effect if you are inside an existing Git directory:

$ git config user.email alan@opensource.com
fatal: not in a git directory

Plus, when this command is run within a Git repository, it only configures that specific one. The process must be repeated for new repositories. I can avoid this repetition by setting it globally. The --global option instructs Git to write the email address to the global configuration file, ~/.gitconfig, creating it if necessary:

Remember, the tilde (~) character represents your home directory. In my case that is /home/alan.

$ git config --global user.email alan@opensource.com
$ cat ~/.gitconfig
[user]
        email = alan@opensource.com

The downside here is if you have a large list of preferred settings, you will have a lot of commands to enter. This is time-consuming and prone to human error. Git provides an even more efficient and convenient way to directly edit your global configuration file—that is the first item on my list!

1. Create global configuration

If you have just started using Git, you may not have this file at all. Not to worry, let's skip the searching and get started. Just use the --edit option:

$ git config --global --edit

If no file is found, Git will generate one with the following content and open it in your shell environment's default editor:

# This is Git's per-user configuration file.
[user]
# Please adapt and uncomment the following lines:
#       name = Alan
#       email = alan@hopper
~
~
~
"~/.gitconfig" 5L, 155B                                     1,1           All

Now that we have opened the editor and Git has created the global configuration file behind the scenes, we can continue with the rest of the settings.

2. Set default name

Name is the first directive in the file, so let's start with that. The command line to set mine is git config --global user.name "Alan Formy-Duval". Instead of running this command, just edit the name directive in the configuration file:

name = Alan Formy-Duval

3. Set default email address

The email address is the second directive, so let's update it. By default, Git uses your system-provided name and email address. If this is incorrect or you prefer something different, you can specify it in the configuration file. In fact, if you have not configured them, Git will let you know with a friendly message the first time you commit:

Committer: Alan <alan@hopper>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate....

The command line to set mine is git config --global user.email "alan@opensource.com". Instead, edit the email directive in the configuration file and provide your preferred address:

email = alan@opensource.com

The last two settings that I like to set are the default branch name and the default editor. These directives will need to be added while you are still in the editor.

4. Set default branch name

There is currently a trend to move away from the usage of the word master as the default branch name. As a matter of fact, Git will highlight this trend with a friendly message upon initialization of a new repository:

$ git init
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint:   git config --global init.defaultBranch <name>

This directive, named defaultBranch, needs to be located in a new section named init. Many coders now use the word main for their default branch, and that is what I like to use. Add this section, followed by the directive, to the configuration:

[init]
        defaultBranch = main

5. Set default editor

The fifth setting I like to configure is the default editor. This refers to the editor that Git presents for typing your commit message each time you commit changes to your repository. Everyone has a preference, whether it is nano, Emacs, vi, or something else. I'm happy with vi. So, to set your editor, add a core section that includes the editor directive:

[core]
            editor = vi
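For reference, and following the same pattern used for the name and email settings above, the single-command equivalent would be:

$ git config --global core.editor vi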

That's the last one. Exit the editor. Git saves the global configuration file in your home directory. If you run the editing command again, you will see all of the content. Notice that the configuration file is a plain text file, so it can also be viewed using text tools such as the cat command. This is how mine appears:

$ cat ~/.gitconfig
[user]
        email = alan@opensource.com
        name = Alan Formy-Duval
[core]
        editor = vi
[init]
        defaultBranch = main
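You can also ask Git itself to list the resulting settings without opening the file. Assuming the configuration above, the output looks something like this:

$ git config --global --list
user.email=alan@opensource.com
user.name=Alan Formy-Duval
core.editor=vi
init.defaultbranch=main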

This is a simple guide to quickly get started working with Git and a few of its many configuration options. There are many other articles on Git here at Opensource.com, as well as our downloadable Git cheat sheet.



6 Python interpreters to try in 2022

Stephan Avenwedde | Wed, 09/21/2022

Python, one of the most popular programming languages, requires an interpreter to execute the instructions defined by the Python code. In contrast to languages that compile directly into machine code, it’s up to the interpreter to read Python code and translate its instructions into the actions the CPU performs. There are several interpreters out there, and in this article, I’ll take a look at a few of them.

Primer to interpreters

When people talk about the Python interpreter, they usually mean the /usr/bin/python binary, which lets you execute a .py file. However, interpreting is just one task. Before a line of Python code is actually executed on the CPU, these four steps are involved:

  1. Lexing - The human-made source code is converted into a sequence of logical entities, the so-called lexical tokens.
  2. Parsing - The parser checks the lexical tokens for correct syntax and grammar. The output of the parser is an abstract syntax tree (AST).
  3. Compiling - Based on the AST, the compiler creates Python bytecode. The bytecode consists of very basic, platform-independent instructions.
  4. Interpreting - The interpreter takes the bytecode and performs the specified operations.

As you can see, a lot of steps are required before any real action is taken. It makes sense to take a closer look at the different interpreters.
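If you want to see these stages for yourself, CPython's standard library exposes each of them. Here is a minimal sketch (standard library only, no third-party modules) that tokenizes, parses, compiles, and finally interprets a one-line program:

import ast
import dis
import io
import tokenize

source = "print(1 + 1)"

# 1. Lexing: the source text becomes a stream of lexical tokens
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

# 2. Parsing: the tokens become an abstract syntax tree (AST)
tree = ast.parse(source)

# 3. Compiling: the AST becomes platform-independent bytecode
code = compile(tree, "<demo>", "exec")

# 4. Interpreting: the bytecode is executed by the Python virtual machine
dis.dis(code)  # show the bytecode instructions
exec(code)     # run them: prints 2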

1. CPython

CPython is the reference implementation of Python and the default on many systems. As the name suggests, CPython is written in C.
As a result, it is possible to write extensions in C and therefore make the widely used C-based library code available to Python. CPython is available on a wide range of platforms including ARM, iOS, and RISC. However, as the reference implementation of the language, CPython prioritizes correctness and compatibility over raw speed.

2. Pyston

Pyston is a fork of the CPython interpreter that implements performance optimizations. The project describes itself as a replacement for the standard CPython interpreter for large, real-world applications, with a potential speedup of up to 30%. Due to the lack of compatible binary packages, Pyston packages must be recompiled during the download process.

3. PyPy

PyPy is a just-in-time (JIT) compiler for Python, written in RPython, a statically typed subset of Python. In contrast to the CPython interpreter, PyPy compiles to machine code which can be directly executed by the CPU. PyPy is also the playground for Python developers, where they can experiment with new features more easily.

PyPy is faster than the reference CPython implementation. Because of the nature of a JIT compiler, only applications that have been running for a long time benefit from caching. PyPy can act as a replacement for CPython. There is a drawback, though: C extension modules are mostly supported, but they run slower than their pure Python counterparts. PyPy extension modules are written in Python (not C), so the JIT compiler is able to optimize them. As long as your application isn't dependent on incompatible modules, PyPy is a great replacement for CPython. There is a dedicated page on the project website which describes the differences to CPython in detail: Differences between PyPy and CPython.

4. RustPython

As the name suggests, RustPython is a Python interpreter written in Rust. Although the Rust programming language is quite new, it has been gaining popularity and is a candidate to be a successor to C and C++. By default, RustPython behaves like the CPython interpreter, but it also has a JIT compiler that can be enabled optionally. Another nice feature is that the Rust toolchain allows you to compile directly to WebAssembly, which lets you run the interpreter completely in the browser. A demo of it can be found at rustpython.github.com/demo.

5. Stackless Python

Stackless Python describes itself as an enhanced version of the Python programming language. The project is basically a fork of the CPython interpreter which adds microthreads, channels, and a scheduler to the language. Microthreads allow you to structure your code into tasklets, which let you run your code in parallel. This approach is comparable to using the green threads of the greenlet module. Channels can be used for bidirectional communication between tasklets. A famous user of Stackless Python is the MMORPG Eve Online.

6. MicroPython

MicroPython is the way to go if you target microcontrollers. It is a lean implementation that requires only 16 kB of RAM and 256 kB of storage. Because of the embedded environment it is intended for, MicroPython’s standard library is only a subset of CPython’s extensive standard library. For developing and testing, or as a lightweight alternative, MicroPython also runs on ordinary x86 and x64 machines. MicroPython is available for Linux and Windows, as well as many microcontrollers.

Performance

By design, Python is an inherently slow language. Depending on the task, there are significant performance differences between the interpreters. To get an overview of which interpreter is the best pick for a certain task, refer to pybenchmarks.org. An alternative to using an interpreter is to compile Python code directly into machine code. Nuitka, for example, is one such project: it translates Python code into C code, which is then compiled to machine code using an ordinary C compiler. The topic of Python compilers is quite comprehensive and worth a separate article.

Summary

Python is a wonderful language for rapid prototyping and automating tasks. Additionally, it is easy to learn and well suited for beginners. If you usually stick with CPython, it could be interesting to see how your code behaves on another interpreter. If you use Fedora, you can easily test a few other interpreters as the package manager already provides the right binaries. Check out fedora.developer.org for more information.




My favorite open source alternatives to Notion

Amar Gandhi | Wed, 09/21/2022

If you have notes to yourself scattered throughout your hard drive, you might need a notes application to collect and organize your personal reminders. A notes system can help you track ideas, important tasks, and works in progress. A popular application that isn't open source is Notion, but here are two options that respect your privacy and data.

Standard Notes

Standard Notes is an open source (AGPL 3.0 license) notes application featuring a password manager, a to-do list, and, of course, a great system for writing and storing notes.

One of the most important things about taking notes is finding them again, so organization is critical. Standard Notes uses an intuitive and natural tagging system to help you organize your content. You assign hashtags to each note to classify it.

Standard Notes is extensible through plug-ins. There are plug-ins for LaTeX, Markdown, code snippets, spreadsheets, and more. There's even an option to publish to a blogging platform, should you want to make some of your notes public.

[Image: Amir Gandhi, CC BY-SA 4.0]

Standard Notes also boasts numerous backup options, including email and cloud services. Furthermore, Standard Notes can work on any platform, including Linux, Windows, macOS, and Chrome OS.

Self-hosting Standard Notes

Standard Notes can be self-hosted. The developers provide a script that runs the application in a container, making it possible to run almost anywhere. If you've yet to explore containers, you can get up to speed with Opensource.com's introduction to running applications in containers.

Another option is to use the hosted version provided by Standard Notes.

The development of Standard Notes can be followed on its Git repository.

Trilium

Trilium is a notes application that visually resembles Notion in many ways. It can handle various data types, including images, tables, to-do lists, highlighting, mind maps, flowcharts, family trees, code blocks, and more.

Trilium has several mechanisms to help you organize both your thoughts and your notes. You can view a history of recent changes, a global map of all your notes, note categories, or you can search for notes and contents.

[Image: Amir Gandhi, CC BY-SA 4.0]

You can install Trilium as a Flatpak from Flathub, or you can install it on your own server as a container. Alternatively, you can use Trilium's hosted instance.

Take note

There are plenty of useful note-taking applications in the open source world, and both Standard Notes and Trilium are designed with your data as the top priority. You can import and export data from these applications, so it's safe to try them out. You'll always have access to your data, so give Standard Notes or Trilium a try.



3 ways to use the Linux inxi command

Don Watkins | Tue, 09/20/2022

I was looking for information about the health of my laptop battery when I stumbled upon inxi. It's a command line system information tool that provides a wealth of information about your Linux computer, whether it's a laptop, desktop, or server.

The inxi command is licensed with the GPLv3, and many Linux distributions include it. According to its Git repository: "inxi strives to support the widest range of operating systems and hardware, from the most simple consumer desktops, to the most advanced professional hardware and servers."

Documentation is robust, and the project maintains a complete man page online. Once installed, you can access the man page on your system with the man inxi command.

Install inxi on Linux

Generally, you can install inxi from your distribution's software repository or app center. For example, on Fedora, CentOS, Mageia, or similar:

$ sudo dnf install inxi

On Debian, Elementary, Linux Mint, or similar:

$ sudo apt install inxi

You can find more information about installation options for your Linux distribution here.

3 ways to use inxi on Linux

Once you install inxi, you can explore all its options. There are numerous options to help you learn more about your system. The most fundamental command provides a basic overview of your system:

$ inxi -b
System:
  Host: pop-os Kernel: 5.19.0-76051900-generic x86_64 bits: 64
        Desktop: GNOME 42.3.1 Distro: Pop!_OS 22.04 LTS
Machine:
  Type: Laptop System: HP product: Dev One Notebook PC v: N/A
        serial: <superuser required>
  Mobo: HP model: 8A78 v: KBC Version 01.03 serial: <superuser required>
        UEFI: Insyde v: F.05 date: 06/14/2022
Battery:
  ID-1: BATT charge: 50.6 Wh (96.9%) condition: 52.2/53.2 Wh (98.0%)
CPU:
  Info: 8-core AMD Ryzen 7 PRO 5850U with Radeon Graphics [MT MCP]
        speed (MHz): avg: 915 min/max: 400/4507
Graphics:
  Device-1: AMD Cezanne driver: amdgpu v: kernel
  Device-2: Quanta HP HD Camera type: USB driver: uvcvideo
  Display: x11 server: X.Org v: 1.21.1.3 driver: X: loaded: amdgpu,ati
        unloaded: fbdev,modesetting,radeon,vesa gpu: amdgpu
        resolution: 1920x1080~60Hz
  OpenGL:
        renderer: AMD RENOIR (LLVM 13.0.1 DRM 3.47 5.19.0-76051900-generic)
        v: 4.6 Mesa 22.0.5
Network:
  Device-1: Realtek RTL8822CE 802.11ac PCIe Wireless Network Adapter
        driver: rtw_8822ce
Drives:
  Local Storage: total: 953.87 GiB used: 75.44 GiB (7.9%)
Info:
  Processes: 347 Uptime: 15m Memory: 14.96 GiB used: 2.91 GiB (19.4%)
  Shell: Bash inxi: 3.3.13

1. Display battery status

You can check your battery health using the -B option. The result shows the system battery ID, charge condition, and other information:

$ inxi -B
Battery:
ID-1: BATT charge: 44.3 Wh (85.2%) condition: 52.0/53.2 Wh (97.7%)

2. Display CPU info

Find out more information about the CPU with the -C option:

$ inxi -C
CPU:
  Info: 8-core model: AMD Ryzen 7 PRO 5850U with Radeon Graphics bits: 64
        type: MT MCP cache: L2: 4 MiB
  Speed (MHz): avg: 400 min/max: 400/4507 cores: 1: 400 2: 400 3: 400
        4: 400 5: 400 6: 400 7: 400 8: 400 9: 400 10: 400 11: 400 12: 400
        13: 400 14: 400 15: 400 16: 400

The output of inxi uses colored text by default. You can change that to improve readability, as needed, by using the "color switch."

The command option is -c followed by any number between 0 and 42 to suit your tastes.

$ inxi -c 42

Here is an example of a couple of different options using color 5 and then 7:

[Screenshot: Don Watkins, CC BY-SA 4.0]

inxi can also show hardware temperature, fan speed, and other information using your Linux system's sensors. Enter inxi -s and read the result below:

[Screenshot: Don Watkins, CC BY-SA 4.0]

3. Combine options

You can combine options for inxi to get complex output when supported. For example, inxi -S provides system information, and -v provides verbose output. Combining the two gives the following:

$ inxi -S
System:
  Host: pop-os Kernel: 5.19.0-76051900-generic x86_64 bits: 64
        Desktop: GNOME 42.3.1 Distro: Pop!_OS 22.04 LTS

$ inxi -Sv
CPU: 8-core AMD Ryzen 7 PRO 5850U with Radeon Graphics (-MT MCP-)
speed/min/max: 634/400/4507 MHz Kernel: 5.19.0-76051900-generic x86_64
Up: 20m Mem: 3084.2/15318.5 MiB (20.1%) Storage: 953.87 GiB (7.9% used)
Procs: 346 Shell: Bash inxi: 3.3.13

Bonus: Check the weather

Your computer isn't all inxi can gather information about. With the -w option, you can also get weather information for your locale:

$ inxi -w
Weather:
  Report: temperature: 14 C (57 F) conditions: Clear sky
  Locale: Wellington, G2, NZL
        current time: Tue 30 Aug 2022 16:28:14 (Pacific/Auckland)
        Source: WeatherBit.io

You can get weather information for other areas of the world by specifying the city and country you want along with -W:

$ inxi -W rome,italy
Weather:
  Report: temperature: 20 C (68 F) conditions: Clear sky
  Locale: Rome, Italy current time: Tue 30 Aug 2022 06:29:52
        Source: WeatherBit.io

Wrap up

There are many great tools to gather information about your computer. I use different ones depending on the machine, the desktop, or my mood. What are your favorite system information tools?

I use inxi on Linux to check my laptop battery, CPU information, and even the weather.


Security buzzwords to avoid and what to say instead

Seth Kenlon | Tue, 09/20/2022

Technology is a little famous for coming up with "buzzwords." Other industries do it, too, of course. "Story-driven" and "rules light" tabletop games are a big thing right now, and "deconstructed" burgers and burritos are a big deal in fine dining. The problem with buzzwords in tech, though, is that they can actually affect your life. When somebody calls an application "secure" to influence you to use their product, there's an implicit promise being made. "Secure" must mean that something's secure. It's safe for you to use and trust. The problem is, the word "secure" can refer to any number of things, and the tech industry often uses it as such a general term that it becomes meaningless.

Because "secure" can mean both so much and so little, it's important to use the word "secure" carefully. In fact, it's often best not to use the word at all, and instead, just say what you actually mean.

When "secure" means encrypted

Sometimes "secure" is imprecise shorthand for encrypted. In this context, "secure" refers to some degree of difficulty for outside observers to eavesdrop on your data.

Don't say this: "This website is resilient and secure."

That sounds pretty good! You're probably imagining a website that has several options for 2-factor authentication, zero-knowledge data storage, and steadfast anonymity policies.

Say this instead: "This website has a 99% uptime guarantee, and its traffic is encrypted and verifiable with SSL."

Not only is the intent of the promise clear now, it also explains how "secure" is achieved (it uses SSL) and what the scope of "secure" is.

Note that there's explicitly no promise here about privacy or anonymity.

When "secure" means restricted access

Sometimes the term "secure" refers to application or device access. Without clarification, "secure" could mean anything from the useless security by obscurity model, to a simple htaccess password, to biometric scanners.

Don't say this: "We've secured the system for your protection."

Say this instead: "Our system uses 2-factor authentication."

When "secure" means data storage

The term "secure" can also refer to the way your data is stored on a server or a device.

Don't say this: "This device stores your data with security in mind."

Say this instead: "This device uses full disk encryption to protect your data."

When remote storage is involved, "secure" may instead refer to who has access to stored data.

Don't say this: "Your data is secure."

Say this instead: "Your data is encrypted using PGP, and only you have the private key."

When "secure" means privacy

These days, the term "privacy" is almost as broad and imprecise as "security." On one hand, you might think that "secure" must mean "private," but that's true only when "secure" has been defined. Is something private because it has a password barrier to entry? Or is something private because it's been encrypted and only you have the keys? Or is it private because the vendor storing your data knows nothing about you (aside from an IP address)? It's not enough to declare "privacy" any more than it is to declare "security" without qualification.

Don't say this: "Your data is secure with us."

Say this instead: "Your data is encrypted with PGP, and only you have the private key. We require no personal data from you, and can only identify you by your IP address."

Some sites make claims about how long IP addresses are retained in logs, and promises about never surrendering data to authorities without warrants, and so on. Those are beyond the scope of technological "security," and have everything to do with trust, so don't confuse them for technical specifications.

Say what you mean

Technology is a complex topic with a lot of potential for confusion. Communication is important, and while shorthand and jargon can be useful in some settings, generally it's better to be precise. When you're proud of the "security" of your project, don't generalize it with a broad term. Make it clear to others what you're doing to protect your users, and make it equally clear what you consider out of scope, and communicate these things often. "Security" is a great feature, but it's a broad one, so don't be afraid to brag about the specifics.

Consider these thoughtful approaches to define what security really means in your open source project.



I got my first pull request merged!

Oluwaseun | Mon, 09/19/2022

Words cannot express my joy when I got the notification about the merge below, and I owe it to my current engineering school, AltSchool Africa.

[Screenshot of the merge notification (Awosise Oluwaseun, CC BY-SA 4.0)]

Before this, I had been introduced to open source many times, told about its importance in the tech space, and even attended open source conferences (e.g., OSCAFest). I had all the instant excitement to start, but imposter syndrome set in on opening GitHub to create something.

Fast forward to Monday, the 8th of August, 2022, when I watched Bolaji's video on contributing to open source. I felt pumped again, but I wanted to apply what I learned, so I noted some steps.

The steps:

  1. I made up my mind I was going to contribute to a project.
  2. I was focused on a site (good first issue) to pick my first project from, which I filtered to suit my skill level. I kept opening the next page till I found one.
  3. I made sure I was equipped with the required Git and GitHub knowledge to complete the project.

The project

After long hours searching for projects, I finally found one titled Ensure no missing alt attributes. I was to give descriptive alt values to images on the site. Alt values help improve a site's accessibility: screen readers use them to provide a detailed description of an image to, say, a visually impaired person. Easy, right? Yes, but if I hadn't made up my mind to get that first contribution, I wouldn't have found it, and open source would have continued to be a myth to me.
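For anyone unfamiliar with the attribute itself, it is just a short text description attached to an image tag. A hypothetical example (not one of the actual MDN images) looks like this:

<img src="grace-hopper.jpg" alt="Grace Hopper standing beside an early computer">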

I was still pumped until I discovered it was from MDN. Wait, MDN? As in Mozilla developer? Will they merge my contribution even with how seemingly easy it looks? Imposter syndrome set in.

Upon checking the issue, I saw that people were already contributing. I summoned my courage and started reading about it. Taking my time to read and understand the project and how I needed to approach the issue was another challenge I had to overcome.

The project is as easy as you try to understand it.

So, I picked two images to begin with. I gave alt values to them, committed my changes, then made a pull request. The time between when I made the pull request and when I got the approval mail was full of self-doubts. Should I close the pull request? This is MDN. Well, it's not coding... What if I don't get merged? I might never contribute again. All it took to clear all of the doubts were the emails I got from my reviewer below:

[Screenshots of the reviewer's emails (Awosise Oluwaseun, CC BY-SA 4.0)]

I was indeed delighted, and this inspired me to check for more. It gave me the courage I needed to request additional issues to solve.

[Screenshot: Awosise Oluwaseun, CC BY-SA 4.0]

Summary

A few lessons I'd love you to take home from this article are:

  • Open source is for all. Do you see that typo on that site you just visited? Helping to correct it is a way of contributing.
  • No skillset is too small. A basic understanding of HTML was what I needed to contribute.
  • Only you can stop yourself from contributing.
  • The first contribution is all you need to get the ball rolling.

I hope you have been able to pick something from my story and apply it today. This is another space I'd like to keep contributing to, so see you in my next article, and happy open sourcing!

This article originally appeared on I got my first Pull Request merged! and is republished with permission.

Experience the joy that contributing to open source brings.



PyLint: The good, the bad, and the ugly

Moshe Zadka | Mon, 09/19/2022

Hot take: PyLint is actually good!

"PyLint can save your life" is an exaggeration, but not as much as you might think! PyLint can keep you from really really hard to find and complicated bugs. At worst, it can save you the time of a test run. At best, it can help you avoid complicated production mistakes.

The good

I'm embarrassed to say how common this can be. Naming tests is perpetually weird: Nothing cares about the name, and there's often not a natural name to be found. For instance, look at this code:

def test_add_small():
    # Math, am I right?
    assert 1 + 1 == 3
   
def test_add_large():
    assert 5 + 6 == 11
   
def test_add_small():
    assert 1 + 10 == 11

The tests pass:

collected 2 items                                                                        
test.py ..
2 passed

But here's the kicker: If you override a name, the testing infrastructure happily skips over the test!

In reality, these files can be hundreds of lines long, and the person adding the new test might not be aware of all the names. Unless someone is looking at test output carefully, everything looks fine.

Worst of all, the addition of the overriding test, the breakage of the overridden test, and the problem that results in prod might be separated by days, months, or even years.

PyLint finds it

But like a good friend, PyLint is there for you.

test.py:8:0: E0102: function already defined line 1
     (function-redefined)

The bad

Like a 90s sitcom, the more you get into PyLint, the more it becomes problematic. This is completely reasonable code for an inventory modeling program:

"""Inventory abstractions"""

import attrs

@attrs.define
class Laptop:
    """A laptop"""
    ident: str
    cpu: str

It seems that PyLint has opinions (probably formed in the 90s) and is not afraid to state them as facts:

$ pylint laptop.py | sed -n '/^laptop/s/[^ ]*: //p'
R0903: Too few public methods (0/2) (too-few-public-methods)

The ugly

Ever wanted to add your own unvetted opinion to a tool used by millions? PyLint has 12 million monthly downloads.

"People will just disable the whole check if it's too picky." —PyLint issue 6987, July 3rd, 2022

The attitude it takes towards adding a test with potentially many false positives is..."eh."

Making it work for you

PyLint is fine, but you need to interact with it carefully. Here are the three things I recommend to make PyLint work for you.

1. Pin it

Pin the PyLint version you use to avoid any surprises!

In your .toml file:

[project.optional-dependencies]
pylint = ["pylint"]

In your code:

from unittest import mock

This corresponds with code like this:

# noxfile.py
...
@nox.session(python=VERSIONS[-1])
def refresh_deps(session):
    """Refresh the requirements-*.txt files"""
    session.install("pip-tools")
    for deps in [..., "pylint"]:
        session.run(
            "pip-compile",
            "--extra",
            deps,
            "pyproject.toml",
            "--output-file",
            f"requirements-{deps}.txt",
        )

2. Default deny

Disable all checks. Then enable ones that you think have a high value-to-false-positive ratio. (Not just false-negative-to-false-positive ratio!)

# noxfile.py
...
@nox.session(python="3.10")
def lint(session):
    files = ["src/", "noxfile.py"]
    session.install("-r", "requirements-pylint.txt")
    session.install("-e", ".")
    session.run(
        "pylint",
        "--disable=all",
        *(f"--enable={checker}" for checker in checkers)
        "src",
    )

3. Checkers

These are some of the checkers I like. They enforce consistency in the project and avoid some obvious mistakes.

checkers = [
    "missing-class-docstring",
    "missing-function-docstring",
    "missing-module-docstring",
    "function-redefined",
]

Using PyLint

You can take just the good parts of PyLint. Run it in CI to keep consistency, and use the highest value checkers.
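As a sketch of the same idea outside of nox, a one-off run that enables only the checkers listed above would look like this:

$ pylint --disable=all \
    --enable=missing-class-docstring,missing-function-docstring,missing-module-docstring,function-redefined \
    src/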

Lose the bad parts: Default deny checkers.

Avoid the ugly parts: Pin the version to avoid surprises.

Get the most out of PyLint.



What is OpenRAN?

Stephan Avenwedde | Sat, 09/17/2022

If you own and use a smartphone capable of connecting to arbitrary computers all over the world, then you are a user of Radio Access Networks (RAN). A RAN is provided by your cellular provider, and it handles wireless connections between your smartphone and your communication provider.

While your smartphone may be running an open source operating system (Android) and the server you try to access is probably running Linux, there's a lot of proprietary technology in between to make the connection happen. While you may have a basic understanding of how networking works locally, this knowledge stops when you plug a SIM card into your smartphone in order to make a connection with a cell tower possible. In fact, the majority of software and hardware components in and around a cell tower are still closed source, which of course has some drawbacks. This is where OpenRAN comes into play.

The OpenRAN initiative (short for Open Radio Access Network) was started by the O-RAN Alliance, a worldwide community of mobile operators, vendors, and research and academic institutions. The initiative aims to define open standards between the various components of radio access networks. Interoperability between components from different manufacturers was not possible. Until now.

Radio Access Network

But what exactly is a RAN? In a nutshell, a RAN establishes a wireless connection to devices (smartphones, for example) and connects them to the core network of the communication company. In the context of a RAN, devices are denoted as User Equipment (UE).

The tasks of a RAN can be summarized as follows:

  • Authentication of UE
  • The handover of UE to another RAN (if the UE is moving)
  • Forwarding the data between the UE and the core network
  • Provision of the data for accounting functions (billing of services or the transmitted data)
  • Control of access to the various services

OpenRAN

A RAN usually consists of proprietary components. OpenRAN defines functional units and the interfaces between them:

  • Radio Unit (RU): The RU is connected to the antenna and sends, receives, amplifies, and digitizes radio signals.
  • Distributed Unit (DU): Handles the PHY, MAC, and RLC layers.
  • Centralised Unit (CU): Handles the RRC and PDCP layers.
  • RAN Intelligent Controller (RIC): Control and optimization of RAN elements and resources.

Units are connected to each other by standardized, open interfaces. Furthermore, if the units can be virtualized and deployed in the cloud or on an edge device, then it's called a vRAN (virtual Radio Access Network). The basic principle of vRAN is to decouple the hardware from the software by using a software-based virtualization layer. Using a vRAN improves flexibility in terms of scalability and the underlying hardware.

OpenRAN for everyone

By defining functional units and the interfaces between them, OpenRAN enables interoperability of components from different manufacturers. This reduces cellular providers' dependency on specific vendors and makes communication infrastructure more flexible and resilient. As a side effect, clearly defined functional units and interfaces drive innovation and competition. With vRAN, the use of standard hardware is possible. With all these advantages, OpenRAN is a prime example of how open source benefits everyone.

Open Radio Access Network defines open standards between the various components of radio access networks.



Fix the apt-key deprecation error in Linux

Chris Hermansen | Fri, 09/16/2022

This morning, after returning home from a mini vacation, I decided to run apt update and apt upgrade from the command line just to see whether there had been any updates while I was offline. After issuing the update command, something didn't seem quite right; I was seeing messages along the lines of:

W: https://updates.example.com/desktop/apt/dists/xenial/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.

True, it's just a warning, but still there's that scary word, deprecation, which usually means it's going away soon. So I thought I should take a look. Based on what I found, I thought my experience would be worth sharing.

It turns out that I have older configurations for some repositories, artifacts of installation processes from "back in the day," that needed adjustment. Taking my prompt from the warning message, I ran man apt-key at the command line, which provided several interesting bits of information. Near the beginning of the man page:

apt-key is used to manage the list of keys used by apt to authenticate packages. Packages which have been authenticated using these keys are considered trusted.
Use of apt-key is deprecated, except for the use of apt-key del in maintainer scripts to remove existing keys from the main keyring. If such usage of apt-key is desired, the additional installation of the GNU Privacy Guard suite (packaged in gnupg) is required.
apt-key(8) will last be available in Debian 11 and Ubuntu 22.04.

Last available in "Debian 11 and Ubuntu 22.04" is pretty much right now for me. Time to fix this!

Fixing the apt-key deprecation error

Further on in the man page, there's the deprecation section mentioned in the warning from apt update:

DEPRECATION
Except for using apt-key del in maintainer scripts, the use of apt-key is deprecated. This section shows how to replace the existing use of apt-key.
If your existing use of apt-key add looks like this:

wget -qO- https://myrepo.example/myrepo.asc | sudo apt-key add -

Then you can directly replace this with (though note the recommendation below):

wget -qO- https://myrepo.example/myrepo.asc | sudo tee /etc/apt/trusted.gpg.d/myrepo.asc

Make sure to use the "asc" extension for ASCII armored keys and the "gpg" extension for the binary OpenPGP format (also known as "GPG key public ring"). The binary OpenPGP format works for all apt versions, while the ASCII armored format works for apt version >= 1.4.

Recommended: Instead of placing keys into the /etc/apt/trusted.gpg.d directory, you can place them anywhere on your filesystem by using the Signed-By option in your sources.list and pointing to the filename of the key. See sources.list(5) for details. Since APT 2.4, /etc/apt/keyrings is provided as the recommended location for keys not managed by packages. When using a deb822-style sources.list, and with apt version >= 2.4, the Signed-By option can also be used to include the full ASCII armored keyring directly in the sources.list without an additional file.

If you, like me, have keys from non-repository stuff added with apt-key, then here are the steps to transition:

  1. Determine which keys are in apt-key keyring /etc/apt/trusted.gpg
  2. Remove them
  3. Find and install replacements in /etc/apt/trusted.gpg.d/ or in /etc/apt/keyrings/
1. Finding old keys

The command apt-key list shows the keys in /etc/apt/trusted.gpg:

$ sudo apt-key list
[sudo] password:
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
/etc/apt/trusted.gpg
--------------------
pub   rsa4096 2017-04-05 [SC]
      DBE4 6B52 81D0 C816 F630  E889 D980 A174 57F6 FB86
uid           [ unknown] Example <support@example.com>
sub   rsa4096 2017-04-05 [E]

pub   rsa4096 2016-04-12 [SC]
      EB4C 1BFD 4F04 2F6D DDCC  EC91 7721 F63B D38B 4796
uid           [ unknown] Google Inc. (Linux Packages Signing Authority) <linux-packages-keymaster@google.com>
sub   rsa4096 2021-10-26 [S] [expires: 2024-10-25]
[...]

Also shown afterward are the keys held in files in the /etc/apt/trusted.gpg.d folder.


2. Removing old keys

The group of quartets of hex digits, for example DBE4 6B52...FB86, is the identifier required to delete the unwanted keys:

$ sudo apt-key del "DBE4 6B52 81D0 C816 F630  E889 D980 A174 57F6 FB86"

This gets rid of the Example key. That's literally just an example, and in reality you'd get rid of keys that actually exist. For instance, I ran the same command for each of the real keys on my system, including keys for Google, Signal, and Ascensio. Keys on your system will vary, depending on what you have installed.

3. Adding keys

Getting the replacement keys is dependent on the application. For example, Open Whisper offers its key and an explanation of what to do to install it, which I decided not to follow as it puts the key in /usr/share/keyrings. Instead, I did this:

$ wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg
$ sudo mv signal-desktop-keyring.gpg /etc/apt/trusted.gpg.d/
$ sudo chown root:root /etc/apt/trusted.gpg.d/signal-desktop-keyring.gpg
$ sudo chmod ugo+r /etc/apt/trusted.gpg.d/signal-desktop-keyring.gpg
$ sudo chmod go-w /etc/apt/trusted.gpg.d/signal-desktop-keyring.gpg

Ascensio also offers instructions for installing OnlyOffice that include dealing with the GPG key. Again, I modified their instructions to suit my needs:

$ gpg --no-default-keyring --keyring gnupg-ring:~/onlyoffice.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys CB2DE8E5
$ sudo mv onlyoffice.gpg /etc/apt/trusted.gpg.d/
$ sudo chown root:root /etc/apt/trusted.gpg.d/onlyoffice.gpg
$ sudo chmod ugo+r /etc/apt/trusted.gpg.d/onlyoffice.gpg
$ sudo chmod go-w /etc/apt/trusted.gpg.d/onlyoffice.gpg

As for the Google key, it is managed (correctly, it appears) through the .deb package, and so a simple reinstall with dpkg -i was all that was needed. Finally, I ended up with this:

$ ls -l /etc/apt/trusted.gpg.d
total 24
-rw-r--r-- 1 root root 7821 Sep  2 10:55 google-chrome.gpg
-rw-r--r-- 1 root root 2279 Sep  2 08:27 onlyoffice.gpg
-rw-r--r-- 1 root root 2223 Sep  2 08:02 signal-desktop-keyring.gpg
-rw-r--r-- 1 root root 2794 Mar 26  2021 ubuntu-keyring-2012-cdimage.gpg
-rw-r--r-- 1 root root 1733 Mar 26  2021 ubuntu-keyring-2018-archive.gpg

Expired keys

The last problem key I had was from an outdated installation of QGIS. The key had expired, and I'd set it up to be managed by apt-key. I ended up following their instructions to the letter, both for installing a new key in /etc/apt/keyrings and for their suggested format of the /etc/apt/sources.list.d/qgis.sources installation configuration.


Linux system maintenance

Now you can run apt update with no warnings or errors related to deprecated key configurations. We apt users just need to remember to adjust any old installation instructions that depend on apt-key. Instead of using apt-key, install the key to /etc/apt/trusted.gpg.d/ or /etc/apt/keyrings/, using gpg as needed.
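As a general pattern, reusing the placeholder repository from the man page excerpt above, the modern replacement for an old apt-key add one-liner looks something like this (a sketch, not specific to any real repository):

$ wget -qO- https://myrepo.example/myrepo.asc | gpg --dearmor | sudo tee /etc/apt/keyrings/myrepo.gpg > /dev/null

You can then point the corresponding sources.list entry at that file with the Signed-By option, as the man page recommends.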




24 of our favorite articles in a downloadable eBook

Lauren Pritchett | Thu, 09/15/2022

One day in March of 2020, a few of my Opensource.com teammates and I grabbed lunch to talk about how we would work from home for the next couple of weeks until this pandemic got nipped in the bud. At the end of the work day, I packed up my laptop and walked out the door of our office building. Several months later, we were all still working from home. None of us had returned to our office-based workstations. In July 2020, using safety precautions, we were granted access to our workstation solely to retrieve personal items. My desk was left exactly as I left it that afternoon in March. Expiring snacks stashed in my secret drawer. Picture frames collecting dust. Comic strips pinned up.

And a pile of several bound copies of Best of a decade on Opensource.com 2010-2019. Our last yearbook that was published (and printed)!

Like most folks, the Opensource.com community had to pivot how we operated in order to stay connected. Sure, we continued our weekly video calls. But in-person conferences, a unique time where people would travel from all over the world to be together, were out of the question. Though some of this operational stuff has changed, the connection with one another has strengthened. It is due time to publish a new yearbook to honor that connection. This yearbook was created to celebrate our correspondents.

The Opensource.com Correspondent Program recognizes the critical group of our most trusted and committed contributors. We recently closed out yet another successful program year with 24 correspondents. Each correspondent selected their favorite article to be included in this downloadable yearbook. In it, you'll find Raspberry Pi tutorials, career stories, home automation tips, Linux tricks, and much more.

[Screenshot: Opensource.com, CC BY 4.0]

The above screenshot is from a recent video call with a few of our correspondents. This community reaches far in locale (North Carolina, Minnesota, California, Germany, India, and New Zealand to name a few) and wide in experience (educators, hobbyists, sysadmins, podcasters, and more).

I'm looking forward to spending another year with the Opensource.com community, whether that is through video calls, in-person events, an Internet DM, or simply reading your article. How would you like to participate?

[ Click here to download the Opensource.com Correspondent Yearbook 2022 ]

The Opensource.com Correspondent Yearbook 2021-2022 features articles about Raspberry Pi, Linux, open source careers, programming, and more.



How I switched from Docker Desktop to Colima

Michael Anello | Thu, 09/15/2022

DDEV is an open source tool that makes it simple to get local PHP development environments up and running within minutes. It’s powerful and flexible as a result of its per-project environment configurations, which can be extended, version controlled, and shared. In short, DDEV aims to allow development teams to use containers in their workflow without the complexities of bespoke configuration.

DDEV replaces more traditional AMP stack solutions (WAMP, MAMP, XAMPP, and so on) with a flexible, modern, container-based solution. Because it uses containers, DDEV allows each project to use any set of applications, versions of web servers, database servers, search index servers, and other types of software.

In March 2022, the DDEV team announced support for Colima, an open source Docker Desktop replacement for macOS and Linux. By all reports, Colima offers performance gains over its alternative, so using it seems like a no-brainer.

Migrating to Colima

First off, Colima is almost a drop-in replacement for Docker Desktop. I say almost because some reconfiguration is required when using it for an existing DDEV project. Specifically, databases must be reimported. The fix is to first export your database, then start Colima, then import it. Easy.
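The announcement doesn’t spell out those commands, so here is a minimal sketch of the export/start/import cycle, assuming a recent DDEV release (the --file flag names are an assumption worth double-checking against ddev export-db --help and ddev import-db --help):

$ ddev export-db --file=backup.sql.gz   # dump the project database before switching
$ ddev poweroff
$ colima start
$ ddev start
$ ddev import-db --file=backup.sql.gz   # reimport it once the project runs on Colima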

Colima requires that either the Docker or Podman command is installed. On Linux, it also requires Lima.

Docker is installed by default with Docker Desktop for macOS, but it’s also available as a stand-alone command. If you want to go 100% pure Colima, you can uninstall Docker Desktop for macOS, and install and configure the Docker client independently. Full installation instructions can be found on the DDEV docs site.

[Image: Mike Anello, CC BY-SA 4.0]

If you choose to keep using both Colima and Docker Desktop, then when issuing docker commands from the command line, you must first specify which container provider you want to work with. More on this in the next section.

Install Colima on macOS

I currently have some local projects using Docker, and some using Colima. Once I understood the basics, it wasn’t too difficult to switch between them.

  1. To get started, install Colima using Homebrew: brew install colima

  2. ddev poweroff (just to be safe)

  3. Next, start Colima with colima start --cpu 4 --memory 4. The --cpu and --memory options only have to be done once. After the first time, only colima start is necessary.

  4. If you’re a DDEV user like me, then you can spin up a fresh Drupal 9 site with the usual ddev commands (ddev config, ddev start, and so on), as collected in the sketch after this list. It’s recommended to enable DDEV’s Mutagen functionality to maximize performance.
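Collected in one place, and assuming Homebrew and DDEV are already installed, the whole sequence looks like this:

$ brew install colima
$ ddev poweroff
$ colima start --cpu 4 --memory 4    # the resource options are only needed the first time
$ ddev config
$ ddev start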

Switching between Colima and Docker Desktop

If you’re not ready to switch to Colima wholesale yet, it’s possible to have both Colima and Docker Desktop installed.

  1. First, power off DDEV: ddev poweroff

  2. Then stop Colima: colima stop

  3. Now run docker context use default to tell the Docker client which container provider you want to work with. The name default refers to Docker Desktop for Mac. When colima start is run, it automatically switches Docker to the colima context.

  4. To continue with the default (Docker Desktop) context, use the ddev start command.

Technically, starting and stopping Colima isn’t necessary, but the ddev poweroff command when switching between two contexts is.

Recent versions of Colima revert the Docker context back to default when Colima is stopped, so the docker context use default command is no longer necessary. Regardless, I still use docker context show to verify that either the default (Docker Desktop for Mac) or colima context is in use. Basically, the term context refers to which container provider the Docker client routes commands to.
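For example, here is a quick way to confirm which context is active before and after stopping Colima (the command simply prints the current context name, and the second result assumes a recent Colima release that reverts the context for you):

$ docker context show
colima
$ colima stop
$ docker context show
default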

Try Colima

Overall, I’m liking what I see so far. I haven’t run into any issues, and Colima-based sites seem a bit snappier (especially when DDEV’s Mutagen functionality is enabled). I definitely foresee myself migrating project sites to Colima over the next few weeks.

This article originally appeared on the DrupalEasy blog and is republished with permission.

Colima is a Docker Desktop alternative for macOS and Linux that's now supported by DDEV.



3 steps to protect your home network

Seth Kenlon | Wed, 09/14/2022

The typical setup for Internet connectivity today is for your home to have a router, usually a little physical box located somewhere in your house, that acts as a gateway to the rest of the world. The router creates a local network, and you connect your devices to it, including your computer, mobile, TV, game console, and anything else that needs to connect to the Internet or to each other.

It's deceptively easy to think of this setup as there being two "sides" of your router: On one side there's the Internet, and on the other, your devices. That's a misleading simplification, though, because in reality there's an entire worldwide network of computers on one side of your router, and your digital life on the other. When you use the Internet directly, you're logging onto a shared area of somebody else's computer. When you're not using the Internet, it doesn't go away, and there are lots of scripts and programs out there designed to visit millions upon millions of routers in an attempt to find open ports or services.

With the Internet of Things (IoT) commonplace, there are sometimes more services running on your home network than you realize. Here are three steps you can take to audit and protect your home network from unwanted traffic.

[ Related read Run your network with open source software ]

1. Think about protocol

Part of your router's job is to keep the Internet separate from your home network. But when you access the Internet, you invite some portion of the Internet into your home. You're making an exception to the general rule that the Internet should stay off your network.

On many websites, what's allowed through your router is just text. When you visit your favorite blog site to read up on the latest tech news, for instance, you're downloading a page or two of text. You read the text, and then move on. That's a simple one-to-one transaction.

However, the HTTPS protocol is robust and the applications running on the Internet are full of variety. When you visit Opensource.com, for instance, you're not just downloading text. You get graphics, and maybe a cheat sheet or ebook. You're also downloading cookies in the background, which helps site administrators understand who visits the site, which has led to improved mobile support, a new design for greater accessibility, and content that readers enjoy. You may not think about cookies or traffic analysis as something you interact with when you're on the Internet, but it's something that gets "snuck" into page interactions because the HTTPS protocol is designed to be broad and, in many ways, high trust. When you visit a website over HTTPS (that is, in a web browser), you're implicitly agreeing to automatic downloads of files that you're probably not conscious of, but that you trust are useful and unobtrusive. For a model of file sharing designed for less trust, you might try the Gemini or Gopher space.

You make a similar agreement when you join a video conference. Not only are you downloading the text on the page and cookies for traffic monitoring, but also a video and audio feed.

Some sites are designed for even more. There are sites designed to allow people to share their computer screen, and sometimes even the control of their computer. In the best case scenario, this helps a remote technician repair a problem on someone's computer, but in practice users can be tricked into visiting sites only to have financial credentials and personal data stolen.

You'd rightfully be suspicious if a website offering text articles required you to grant it permission to look through your webcam while you read. You should cultivate the same level of suspicion when an appliance requires Internet access. When you connect a device to your network, it's important to consider what implicit agreement you're making. A device designed to control lighting in your house shouldn't require Internet access to function, but many do, and many don't make it clear what permissions you're granting that device. Many IoT devices want access to the Internet so that you can access the device over the Internet while you're away from home. That's part of the appeal of a "smart home". However, it's impossible to know what code many of these devices run. When possible, use open source and trusted software, such as Home Assistant to interface with your living space.

[ Also read How to choose a wireless protocol for home automation ]

2. Create a guest network

Many modern routers make it trivial to create a second network (usually called a "guest network" in the configuration panels) for your home. You probably don't feel like you need a second network, but actually a guest network can be useful to have around. Its eponymous and most obvious use case is that a guest network provides people visiting your house access to the Internet without you telling them your network password. In the foyer of my house, I have a sign identifying the guest network name and password. Anyone who visits can join that network for access to the Internet.

The other use case is for IoT, edge devices, and my home lab. When I purchased "programmable" Christmas lights last year, I was surprised to find that in order to connect to the lights, they had to be connected to the Internet. Of course, the $50 lights from a nameless factory didn't come with source code included, or any way to interface with or inspect the firmware embedded in the power brick, and so I wasn't confident in what I was agreeing to by connecting them to the Internet. They've been permanently relegated to my guest network.

Every router vendor is different, so there's no single instruction on how to create a "sandboxed" guest network on yours. Generally, you access your home router through a web browser. Your router's address is sometimes printed on the bottom of the router, and it begins with either 192.168 or 10.

Navigate to your router's address and log in with the credentials you were provided when you got your Internet service. It's often as simple as admin with a numeric password (sometimes, this password is printed on the router, too). If you don't know the login, call your Internet provider and ask for details.

In the graphical interface, find the panel for "Guest network." This option is in the Advanced configuration of my router, but it could be somewhere else on yours, and it may not even be called "Guest network" (or it may not even be an option.)

Image by: (Opensource.com, CC BY-SA 4.0)

It may take a lot of clicking around and reading. If you find that you have the option, then you can set up a guest network for visitors, including people walking through your front door and applications running on a lightbulb.

3. Firewall your firewall

Your router probably has a firewall running by default. A firewall keeps unwanted traffic off your network, usually by limiting incoming packets to HTTP and HTTPS (web browser traffic) and a few other utility protocols, and by rejecting traffic you didn't initiate. You can verify that a firewall is running by logging onto your router and looking for "Firewall" or "Security" settings.

However, many devices can run firewalls of their own. This matters because a network is, by definition, a collection of devices connecting to one another. Placing firewalls "between" devices is like locking a door to a room inside your house. Guests may roam the halls, but without the right key they're not invited into your office.

On Linux, you can configure your firewall using the firewalld interface and the firewall-cmd command. On other operating systems, the firewall is sometimes in a control panel labeled "security" or "sharing" (and sometimes both). Most default firewall settings allow only outgoing traffic (that's the traffic you initiate by, for instance, opening a browser and navigating to a website) and incoming traffic that's responding to your requests (that's the web data responding to your navigation). Incoming traffic that you didn't initiate is blocked.

You can customize this setup as needed, should you want to allow specific traffic, such as an SSH connection, a VNC connection, or a game server host.
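
For example, on a Linux system running firewalld, reviewing the active rules and permanently allowing SSH might look like this sketch. The public zone is only the common default, so substitute whatever zone your system actually uses:

$ sudo firewall-cmd --get-active-zones
$ sudo firewall-cmd --zone=public --list-all
$ sudo firewall-cmd --zone=public --add-service=ssh --permanent
$ sudo firewall-cmd --reload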

Monitor your network

These techniques help build up your awareness of what's happening around you. The next step is to monitor your network. You can start simple, for instance by running Fail2ban on a test server on your guest network. Take a look at logs, if your router provides them. You don't have to know everything about TCP/IP and packets and other advanced subjects to see that the Internet is a busy and noisy place, and seeing that for yourself is great inspiration to take precautions when you set up a new device, whether it's IoT, mobile, a desktop or laptop, a game console, or a Raspberry Pi, in your home.
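
If you want to try Fail2ban, a minimal starting point on a Fedora-based test machine looks roughly like this. Package names differ between distributions, and you enable specific jails (such as sshd) in /etc/fail2ban/jail.local before they show up in the status output:

$ sudo dnf install fail2ban
$ sudo systemctl enable --now fail2ban
$ sudo fail2ban-client status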

Who has access to your home network? With the Internet of Things (IoT) commonplace, there are sometimes more services running on your home network than you realize. Protect it from unwanted traffic.

Image by: Opensource.com


Packaging Job scripts in Kubernetes operators

Wed, 09/14/2022 - 15:00
Packaging Job scripts in Kubernetes operators Bobby Gryzynger Wed, 09/14/2022 - 03:00

When using a complex Kubernetes operator, you often have to orchestrate Jobs to perform workload tasks. Examples of Job implementations typically provide trivial scripts written directly in the manifest. In any reasonably-complex application, however, determining how to handle more-than-trivial scripts can be challenging.

In the past, I've tackled this problem by including my scripts in an application image. This approach works well enough, but it does have a drawback. Anytime changes are required, I'm forced to rebuild the application image to include the revisions. This is a lot of time wasted, especially when my application image takes a significant amount of time to build. This also means that I'm maintaining both an application image and an operator image. If my operator repository doesn't include the application image, then I'm making related changes across repositories. Ultimately, I'm multiplying the number of commits I make, and complicating my workflow. Every change means I have to manage and synchronize commits and image references between repositories.


Given these challenges, I wanted to find a way to keep my Job scripts within my operator's code base. This way, I could revise my scripts in tandem with my operator's reconciliation logic. My goal was to devise a workflow that would only require me to rebuild the operator's image when I needed to make revisions to my scripts. Fortunately, I use the Go programming language, which provides the immensely helpful go:embed feature. This allows developers to package text files in with their application's binary. By leveraging this feature, I've found that I can maintain my Job scripts within my operator's image.

Embed Job script

For demonstration purposes, my task script doesn't include any actual business logic. However, by using an embedded script rather than writing the script directly into the Job manifest, this approach keeps complex scripts both well-organized and abstracted from the Job definition itself.

Here's my simple example script:

$ cat embeds/task.sh
#!/bin/sh
echo "Starting task script."
# Something complicated...
echo "Task complete."

Now to work on the operator's logic.

Operator logic

Here's the process within my operator's reconciliation:

  1. Retrieve the script's contents
  2. Add the script's contents to a ConfigMap
  3. Run the ConfigMap's script within the Job by
    1. Defining a volume that refers to the ConfigMap
    2. Making the volume's contents executable
    3. Mounting the volume to the Job 

Here's the code:

// STEP 1: retrieve the script content from the codebase.
//go:embed embeds/task.sh
var taskScript string

func (r *MyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        ctxlog := ctrllog.FromContext(ctx)
        myresource := &myresourcev1alpha.MyResource{}
        r.Get(ctx, req.NamespacedName, myresource)

        // STEP 2: create the ConfigMap with the script's content.
        configmap := &corev1.ConfigMap{}
        err := r.Get(ctx, types.NamespacedName{Name: "my-configmap", Namespace: myresource.Namespace}, configmap)
        if err != nil && apierrors.IsNotFound(err) {

                ctxlog.Info("Creating new ConfigMap")
                configmap := &corev1.ConfigMap{
                        ObjectMeta: metav1.ObjectMeta{
                                Name:      "my-configmap",
                                Namespace: myresource.Namespace,
                        },
                        Data: map[string]string{
                                "task.sh": taskScript,
                        },
                }

                err = ctrl.SetControllerReference(myresource, configmap, r.Scheme)
                if err != nil {
                        return ctrl.Result{}, err
                }
                err = r.Create(ctx, configmap)
                if err != nil {
                        ctxlog.Error(err, "Failed to create ConfigMap")
                        return ctrl.Result{}, err
                }
                return ctrl.Result{Requeue: true}, nil
        }

        // STEP 3: create the Job with the ConfigMap attached as a volume.
        job := &batchv1.Job{}
        err = r.Get(ctx, types.NamespacedName{Name: "my-job", Namespace: myresource.Namespace}, job)
        if err != nil && apierrors.IsNotFound(err) {

                ctxlog.Info("Creating new Job")
                configmapMode := int32(0554)
                job := &batchv1.Job{
                        ObjectMeta: metav1.ObjectMeta{
                                Name:      "my-job",
                                Namespace: myresource.Namespace,
                        },
                        Spec: batchv1.JobSpec{
                                Template: corev1.PodTemplateSpec{
                                        Spec: corev1.PodSpec{
                                                RestartPolicy: corev1.RestartPolicyNever,
                                                // STEP 3a: define the ConfigMap as a volume.
                                                Volumes: []corev1.Volume{{
                                                        Name: "task-script-volume",
                                                        VolumeSource: corev1.VolumeSource{
                                                                ConfigMap: &corev1.ConfigMapVolumeSource{
                                                                        LocalObjectReference: corev1.LocalObjectReference{
                                                                                Name: "my-configmap",
                                                                        },
                                                                        DefaultMode: &configmapMode,
                                                                },
                                                        },
                                                }},
                                                Containers: []corev1.Container{
                                                        {
                                                                Name:  "task",
                                                                Image: "busybox",
                                                                Resources: corev1.ResourceRequirements{
                                                                        Requests: corev1.ResourceList{
                                                                                corev1.ResourceCPU:    *resource.NewMilliQuantity(int64(50), resource.DecimalSI),
                                                                                corev1.ResourceMemory: *resource.NewScaledQuantity(int64(250), resource.Mega),
                                                                        },
                                                                        Limits: corev1.ResourceList{
                                                                                corev1.ResourceCPU:    *resource.NewMilliQuantity(int64(100), resource.DecimalSI),
                                                                                corev1.ResourceMemory: *resource.NewScaledQuantity(int64(500), resource.Mega),
                                                                        },
                                                                },
                                                                // STEP 3b: mount the ConfigMap volume.
                                                                VolumeMounts: []corev1.VolumeMount{{
                                                                        Name:      "task-script-volume",
                                                                        MountPath: "/scripts",
                                                                        ReadOnly:  true,
                                                                }},
                                                                // STEP 3c: run the volume-mounted script.
                                                                Command: []string{"/scripts/task.sh"},
                                                        },
                                                },
                                        },
                                },
                        },
                }

                err = ctrl.SetControllerReference(myresource, job, r.Scheme)
                if err != nil {
                        return ctrl.Result{}, err
                }
                err = r.Create(ctx, job)
                if err != nil {
                        ctxlog.Error(err, "Failed to create Job")
                        return ctrl.Result{}, err
                }
                return ctrl.Result{Requeue: true}, nil
        }

        // Requeue if the job is not complete.
        if job.Status.Succeeded < *job.Spec.Completions {
                ctxlog.Info("Requeuing to wait for Job to complete")
                return ctrl.Result{RequeueAfter: time.Second * 15}, nil
        }

        ctxlog.Info("All done")
        return ctrl.Result{}, nil
}

After my operator defines the Job, all that's left to do is wait for the Job to complete. Looking at my operator's logs, I can see each step in the process recorded until the reconciliation is complete:

2022-08-07T18:25:11.739Z  INFO  controller.myresource   Creating new ConfigMap  {"reconciler group": "myoperator.myorg.com", "reconciler kind": "MyResource", "name": "myresource-example", "namespace": "default"}
2022-08-07T18:25:11.765Z  INFO  controller.myresource   Creating new Job        {"reconciler group": "myoperator.myorg.com", "reconciler kind": "MyResource", "name": "myresource-example", "namespace": "default"}
2022-08-07T18:25:11.780Z  INFO  controller.myresource   All done        {"reconciler group": "myoperator.myorg.com", "reconciler kind": "MyResource", "name": "myresource-example", "namespace": "default"}

Go for Kubernetes

When it comes to managing scripts within operator-managed workloads and applications, go:embed provides a useful mechanism for simplifying the development workflow and abstracting business logic. As your operator and its scripts become more complex, this kind of abstraction and separation of concerns becomes increasingly important for the maintainability and clarity of your operator.

Embed scripts into your Kubernetes operators with Go.


How I troubleshoot swappiness and startup time on Linux

Tue, 09/13/2022 - 15:00
How I troubleshoot swappiness and startup time on Linux David Both Tue, 09/13/2022 - 03:00

I recently experienced another interesting problem in the Linux startup sequence that has a circumvention–not a solution. It started quite unexpectedly.

I was writing a couple of articles while making some updates to my personal copy of my series of books, "Using and Administering Linux: Zero to SysAdmin." I had four instances of LibreOffice Writer open to do all that. I had three VMs running with VirtualBox to test some of the things I was writing about. I also had LibreOffice Impress open to work on an unrelated presentation. I like to listen to music, so I had one of several tabs in Firefox open to Pandora, my music streaming service of choice. I had multiple Bash shells open using Konsole with numerous tabs and the Alpine text-mode email client in one. Then there were the various tabs in the Thunar file manager.

So I had a lot going on. Just like I do now as I write this article.

The symptoms

As I used these open sessions, I noticed that things slowed down considerably while waiting for the system to write a document to the M.2 SSD, a process that should have been really fast. I also noticed that the music was choppy and dropped out completely every few minutes. Overall performance was generally poor. I began to think that Fedora had a serious problem.

My primary workstation, the one I was working on at the time, has 64GB of RAM and an Intel Core i9 Extreme with 16 cores and Hyperthreading (32 CPUs) that can run as fast as 4.1 GHz using my configured overclocking. So I should not have experienced any slowdowns–or so I thought at the time.

Determine the problem

It did not take long to find the problem because I have experienced similar symptoms before on systems with far less memory. The issue looked like delays due to page swapping. But why?

I started with one of my go-to tools for problem determination, htop. It showed that the system was using 13.6GB of memory for programs, and most of the rest of the RAM was in cache and buffers. It also showed that swapping was actively occurring and that about 253MB of data was stored in the swap partitions.

Date & Time: 2022-08-12 10:53:08
Uptime: 2 days, 23:47:15
Tasks: 200, 1559 thr, 371 kthr; 4 running
Load average: 3.97 3.05 2.08
   
Disk IO: 202.6% read: 687M write: 188K
Network: rx: 0KiB/s tx: 0KiB/s (0/0 packets)
Systemd: running (0/662 failed) (0/7912 jobs)    
Mem[|||||||##*@@@@@@@@@@@@@@@@@@@@@@@@@@    13.6G/62.5G]
Swp[||#                                      253M/18.0G]

But that meant I still had lots of memory left that the system could use directly for programs and data, and more that it could recover from cache and buffers. So why was this system even swapping at all?

I remembered hearing about the "swappiness" factor in one of my Red Hat training classes. But that was a long time ago. I did some searches on "swappiness" to learn about the kernel setting vm.swappiness.

The default value for this kernel parameter is 60. That represents the percent of free memory not yet in use. When the system reaches that 60% trigger point, it begins to swap, no matter how much free memory is available. My system started swapping when about 0.6 * 62.5GB = 37.5GB of unused memory remained.

Based on my online reading, I discovered that 10% is a better setting for many Linux systems. With that setting, swapping starts when only 10% of RAM is free.

I checked the current swappiness setting on my system, and it was set to the default.

# sysctl vm.swappiness
vm.swappiness = 60

Time to change this kernel setting.

Fix the issue

I won't dive into the gory details, but the bottom line is that either of the following commands, run as root, will instantly do the job on a running Linux computer without a reboot.

# sysctl -w vm.swappiness=10

You could also use this next command to do the same thing.

# echo 10 > /proc/sys/vm/swappiness

Tecmint has an excellent article about setting kernel parameters.

Both commands change the live kernel setting in the /proc filesystem. After running either of those commands, you should run the sysctl vm.swappiness command to verify that the kernel setting has changed.
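
For example, if you set the value to 10, the verification looks like this:

# sysctl vm.swappiness
vm.swappiness = 10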

But those commands only change the swappiness value for the currently running system. A reboot returns the value to its default. I needed to make this change persistent across reboots.

But first, the failure

To permanently change the kernel vm.swappiness variable, I used the procedure described in my previous article, How I disabled IPv6 on Linux, to add the following line to the end of the /etc/default/grub file:

GRUB_CMDLINE_LINUX="vm.swappiness=1"

I then ran the grub2-mkconfig command as root to rebuild the /boot/grub2/grub.cfg file. However, testing with VMs and real hardware showed that it did not work, and the swappiness value did not change. So I tried another approach.

And the success

Between this failure at startup time, the one I describe in the How I disabled IPv6 on Linux article, and other startup issues I explored due to encountering those two, I decided that this was a Linux startup timing problem. In other words, some required services, one of which might be the network itself, were not up and running, which prevented these kernel option changes from being committed to the /proc filesystem, or they were committed and then overwritten when the service started.

I could make all of these work as they should by adding them to a new file, /etc/sysctl.d/local-sysctl.conf with the following content, which includes all of my local kernel option changes:

###############################################
#            local-sysctl.conf                #
#                                             #
# Local kernel option settings.               #
# Install this file in the /etc/sysctl.d      #
# directory.                                  #
#                                             #
# Use the command:                            #
# sysctl -p /etc/sysctl.d/local-sysctl.conf   #
# to activate.                                #
#                                             #
###############################################
###############################################
# Local Network settings                      #
# Specifically to disable IPV6                #
###############################################
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

###############################################
# Virtual Memory                              #
###############################################
# Set swappiness
vm.swappiness = 1

I then ran the following command, which activated only the kernel options in the specified file:

# sysctl -p /etc/sysctl.d/local-sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
vm.swappiness = 1

This is a more targeted approach to setting kernel options than I used in my article about disabling IPv6.

Reporting the bug

At the time of this writing, there is no true fix for the root cause of this problem–whatever the cause. There is a way to temporarily circumvent the issue until a fix is provided. I used the /etc/sysctl.d/local-sysctl.conf file that I had created for testing and added a systemd service to run at the end of the startup sequence, wait for a few seconds, and run sysctl on that new file. The details of how to do that are in the How I disabled IPv6 on Linux article.
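
In rough outline, that workaround amounts to a small oneshot unit like the following minimal sketch. The unit name, delay, and ordering target here are illustrative; see the IPv6 article for the full procedure:

# cat > /etc/systemd/system/local-sysctl.service <<'EOF'
[Unit]
Description=Apply local sysctl settings late in startup
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStartPre=/usr/bin/sleep 5
ExecStart=/usr/sbin/sysctl -p /etc/sysctl.d/local-sysctl.conf

[Install]
WantedBy=multi-user.target
EOF

# systemctl daemon-reload
# systemctl enable local-sysctl.service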

I had already reported this as bug 2103517 using Red Hat's Bugzilla when trying to disable IPv6. I added this new information to that bug to ensure that my latest findings were available to the kernel developers.

You can follow the link to view the bug report. You do not need an account to view bug reports.

Final thoughts

After experimenting to see how well I could reproduce the symptoms, along with many others, I have determined that the vm.swappiness setting of 60% is far too aggressive for many large-memory Linux systems. Without a lot more data points than those of my own computers, all I can tentatively conclude is that systems with huge amounts of RAM that get used only infrequently are the primary victims of this problem.

The immediate solution to the problem of local kernel option settings not working is to set them after startup. The automation I implemented is a good example of how to use systemd to replace the old SystemV startup file rc.local.

This bug had not been previously reported. It took a few days of experimenting to verify that the general problem in which locally-set kernel options were not being set or retained at startup time was easily repeatable on multiple physical and virtual systems. At that point, I felt it important to report the bug to ensure it gets fixed. Reporting it is another way I can give back to the Linux community.


Image by: opensource.com


Open source blockchain development: Get started with Hyperledger FireFly

Tue, 09/13/2022 - 15:00
Open source blockchain development: Get started with Hyperledger FireFly Nicko Guyer Tue, 09/13/2022 - 03:00

It takes more than a blockchain node to build enterprise-grade applications at scale. As a result, developers often find themselves building plumbing from scratch to make their business logic work. The release of Hyperledger FireFly changed blockchain development, offering developers a full stack of tools to build and scale secure web applications using familiar APIs. FireFly’s next-gen platform simplifies development, making it easy to connect across multiple public and private chains while running many use cases simultaneously. Whether you want to build on permissioned chains like Hyperledger Fabric, Corda, or Enterprise Ethereum, or public chains like Ethereum, Polygon, Avalanche, Optimism, BNB Chain, Arbitrum, Moonbeam, or Fantom, FireFly has you covered.

In this article, I'll walk you through downloading Hyperledger FireFly, setting up a local development environment, and getting started with the FireFly Sandbox. But first, a quick introduction to the Supernode.

What is a Supernode?

Hyperledger FireFly is an open source project that was contributed to the Hyperledger Foundation by Kaleido, a blockchain and digital asset platform provider. To make FireFly a reality, the Raleigh, NC-based company collaborated with the blockchain community to fit vital technology components into an enterprise-grade, pluggable development and runtime stack called a Supernode.

Image by: (Nicko Guyer, CC BY-SA 4.0)

This development stack offers three key advantages to blockchain developers, especially those looking to build enterprise-grade applications with scale.

  • Accelerate: Hyperledger FireFly helps developers create applications on the blockchain protocol of their choice, and build quickly with familiar REST APIs. Users can leverage pre-built services for tokens, wallets, storage, and identity to reach production faster.
  • Orchestrate: Hyperledger FireFly makes it easier to manage data end-to-end from blockchain to back-office. APIs allow developers to trigger business processes based on blockchain activities, and off-chain storage and messaging to protect sensitive data.
  • Support: Hyperledger FireFly supports high-volume workloads, integrates with existing IT systems and databases, and communicates with network participants.
Getting Started with Hyperledger

FireFly makes it super easy to build powerful blockchain applications. Installing the stack on your machine is a simple process, too. Below, I'm going to walk you through the three-step process to get up and running so you can start testing the FireFly functionality today.

Image by: (Nicko Guyer, CC BY-SA 4.0)

Install FireFly CLI

The FireFly command-line interface (CLI) creates local FireFly stacks for offline development of blockchain apps. Having FireFly locally allows developers to test and iterate on ideas without worrying about setting up extra infrastructure.

The easiest way to install the FireFly CLI is to download a pre-compiled binary of the latest release. To do this, visit the release page.

Next, extract the binary and move it to /usr/local/bin. Assuming you downloaded the package from GitHub into your Downloads directory:

$ sudo tar -zxf ~/Downloads/firefly-cli_*.tar.gz -C /usr/local/bin ff

This places the ff executable in /usr/local/bin.

If you downloaded the package from GitHub to a different directory, change the tar command above to wherever the firefly-cli_*.tar.gz file is located.

Alternatively, you can install the FireFly CLI using Go. If you have a local Go development environment, and you have included ${GOPATH}/bin in your path, you can use Go to install the FireFly CLI by running:

$ go install github.com/hyperledger/firefly-cli/ff@latest

Finally, verify the installation by running ff version. This prints the current version:

{ "Version": "v1.1.0", "License": "Apache-2.0" }

With the FireFly CLI installed, you are ready to run some Supernodes on your machine.

Start Your Environment

A FireFly stack is a collection of Supernodes that work together on a single development machine. A stack has multiple members (also referred to as organizations). Every member has their own Supernode within the stack. This allows developers to build and test data flows with a mix of public and private data between various parties, all within a single development environment.

Image by: (Nicko Guyer, CC BY-SA 4.0)

Creating a new FireFly stack is relatively easy. The ff init command creates a new stack for you, and prompts you for a few details such as the name, and how many members you want in your stack.

There are also some settings you can change. The defaults are the simplest way to get going, but you can see a full list of options by running ff init --help.

Once you've created your stack, use the command ff start dev to run your environment.

After your stack has started, it prints the links to each member's UI, and the Sandbox for that node.
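
Putting that together, a first run looks roughly like this, assuming you answer the prompts with dev as the stack name and accept the defaults:

$ ff init
$ ff start dev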

Use the FireFly Sandbox

Image by: (Nicko Guyer, CC BY-SA 4.0)

Each member gets an instance of the FireFly Sandbox as well. The Sandbox is like an example application. It can be used to test, iterate, and practice using FireFly features. It provides code snippets as examples of how to build those features into your own application backend.

There are a couple of things in the Sandbox you may want to check out to experience the full capabilities of Hyperledger FireFly.

The Messages tab lets you broadcast a message and view its data payload in every member's FireFly Explorer, or send a private message to one member and verify that the data payload is not visible in a third member's FireFly Explorer. You can also send an image file and download it from another member's FireFly Explorer.

The Tokens tab lets you create a non-fungible token pool, transfer an NFT to another member, and verify the account balances in FireFly Explorer.

On the Contracts tab, you can create a contract interface and API, view the Swagger UI for your new API, or create an event listener. You can also use the Swagger UI to call a smart contract function that emits an event. Any event received in the Sandbox also shows up in FireFly Explorer.

Build your app

Hyperledger FireFly brings a complete open source stack for developers who want to build and scale secure, enterprise-grade applications with access to blockchain technology. It's simple to install on your machine, and the Sandbox allows developers to view code snippets and test ideas—all so blockchain applications reach production faster. Read more about Hyperledger FireFly's capabilities in the project documentation and give it a try yourself.


Image by: Opensource.com


Remixing Linux for blind and visually impaired users

Tue, 09/13/2022 - 15:00
Remixing Linux for blind and visually impaired users Vojtech Polasek Tue, 09/13/2022 - 03:00

When I was around 5 years old, my father brought home our first computer. From that moment on, I knew I wanted to pursue a career in computers. I haven't stopped hanging around them since. During high school, when considering which specific area I wanted to focus on, I started experimenting with hacking, and that was the moment I decided to pursue a career as a security engineer.

I'm now a software engineer on the security compliance team. I've been at Red Hat for over two years, and I work remotely in the Czech Republic. I've used Linux for about 12 years, mainly Arch Linux and Fedora, but I've also administered Debian, Gentoo, and Ubuntu in the past.

Image by: (Vojtech Polasek, CC BY-SA 4.0)

Photo description: Black and white image of a smiling Vojtech, with a red frame around it and an illustrated paper airplane in the background.

Outside of my day job, I play blind football, and I'm involved in various projects connecting visually impaired and sighted people together, including working in a small NGO that runs activities for blind and visually impaired people. I'm also working on an accessible Fedora project currently called Fegora, an unofficial Linux distribution aimed at visually impaired users.

The assistive technology stack

When I use a smart device, I need several pieces of assistive technology. The first and most essential is called a screen reader. This is software that presents what's on the screen to blind or visually impaired people, either through speech or through braille (basically, it tries to serve as our eyes). It can read out notifications and tell me which button or page element I'm focusing on, allowing me to interact with graphical user interfaces.

Screen readers use speech synthesis to speak aloud what appears on the screen. There are a variety of speech synthesizers, and some voices are more "natural-sounding" than others. The one I use, Espeak, is not very natural-sounding, but it's lightweight and fast. It also supports almost all languages, including Czech (which I use).

Finally, I use a Braille display, a device that represents a line of text in Braille. I use this a lot, especially when I'm coding or doing code reviews. It's easier to grasp the structure of code when I can freely move from one code element to another by touch. I can also use its buttons to move the cursor to the character or area of the screen I'm interested in, and it has a Braille keyboard too if I want to use it.

How I use assistive technology on a daily basis

When using a computer as a blind or visually impaired person, there are a couple of things that are relatively straightforward to do using the tech above. Personally, these are a few of the things I do every day:

  • The text console is pretty much my favorite application. As a general rule, when something's in text, then blind people can read it with a screen reader (this doesn't hold true in all cases, but in most.) I mainly use the console for system management, text editing, and working with guidance and documentation.
  • I browse the web and interact with websites.
  • I code and do code reviews using VSCode and Eclipse.
  • I send emails and instant messages.
  • I can use word processing software, like Google Docs (which is not open source, but common in the modern office) and LibreOffice. Google Docs developers have added a lot of keyboard shortcuts, which I can use to move around documents, jump to headings or into comments, and so on.
  • I can play multimedia, usually. It depends on how the application is written. Some media players are more accessible than others.
Possible but painful

This brings me to tasks that aren't so easy. I like to call these "possible but painful".

PDF files can be difficult. Sometimes I end up needing to use optical character recognition (OCR) software to convert images to text. For example, recently I needed to read a menu for a restaurant. They had the PDF of their menu on their website, but it had been flattened, and didn't have a text layer. For me, this shows up as a blank screen. I had to use an OCR application from my smartphone to extract the text for me. Not only is this an extra step, but the resulting "translation" of the text isn't always entirely accurate.

Viewing and creating a presentation can be problematic. To work around this, I create slides in HTML, using software such as Pandoc, which can process markdown and convert it into slides. I've been using this for many years and it works well—it allows me total control of the resulting slides, because the markdown is just simple text.

Video games can be made more accessible by basing them on sound or text. However, playing games can be doubly challenging on Linux as not only would you need to find an accessible game, but most PC games are also native to Windows so you would be dealing with some compatibility issues as well.

Some websites and interfaces are more difficult to navigate than others. These issues are often quite easy to solve just by setting some attributes correctly. In general, lots of web content comes in the form of images, especially today. One of the easiest ways to make web content more accessible is to make sure that alternative text is added to images so that screen readers can read it out, and people who cannot distinguish the image have some idea what's there. Another thing I experience a lot is unlabeled controls: you know there's a button or a check box but you don't know what it does.

The Fegora project optimises Linux for accessibility

Developers don't intentionally set out to build applications that aren't accessible. The problem is that they usually don't know how to test them. There aren't many blind Linux users, so there aren't many people testing the accessibility of applications and providing feedback. Therefore, developers don't produce accessible applications, and they don't get many users. And so the cycle continues.

This is one thing we hope to tackle with the Fegora project. We want to create a Fedora remix that's user-friendly for visually impaired and blind users. We hope it will attract more users, and that those users start discovering issues to report, which will hopefully be solved by other developers in the open source community.

So why are we doing this? Well, it's important to point out that Fedora is not an inaccessible distribution by design. It does have many accessibility tools available in the form of packages. But these aren't always present from the beginning, and there are a lot of small things which need to be configured before it can be proficiently used. This is something that can be discouraging to a beginner Fedora user.

We want Fegora to be as friendly and predictable for a blind user as possible. When a user launches a live image, the screen immediately starts being read as soon as a graphical user interface appears. All environment variables needed for accessibility are loaded and configured correctly.

Fegora brings the following changes, among others:

  • Environment variables for accessibility are configured from the start.
  • The Orca screen reader starts as soon as the graphical interface loads.
  • A custom repo is added with extra voice synthesis and packaged software.
  • Many alternative keyboard shortcuts have been added.
  • There's a special script that can turn your monitor on and off. Many users do not need the monitor at all and having it off is a great power saver!
So how can you help?

First, if you'd like to contribute to Fegora (or just spread the word), you can find out more on our repository.

Additionally, when working on a team with someone who has a visual impairment, there might be some additional considerations depending on the accessibility tech being used. For example, it's not easy for us to listen to someone and read at the same time, because we are basically getting both things through audio, unless someone is very proficient with the Braille display.

Lastly, bear in mind that blind and visually impaired users consume the same end products as you do, whether that's presentation slides or websites or PDFs. When building products or creating content, your choices have a huge effect on accessibility and how easy it is for us to engage with the end result. Know that we are here, we love to use computers and technology, and we're often willing to help you test it, too.

Image by: (Vojtech Polasek, CC BY-SA 4.0)

 Image description: Vojtech holding a football. He is wearing a football uniform and protective goggles.

Fegora, a Fedora project, is an unofficial Linux distribution aimed at visually impaired users.

Image by: Opensource.com


How I recovered my Linux system using a Live USB device

Mon, 09/12/2022 - 15:00
How I recovered my Linux system using a Live USB device David Both Mon, 09/12/2022 - 03:00

I have a dozen or so physical computers in my home lab and even more VMs. I use most of these systems for testing and experimentation. I frequently write about using automation to make sysadmin tasks easier. I have also written in multiple places that I learn more from my own mistakes than I do in almost any other way.

I have learned a lot during the last couple of weeks.

I created a major problem for myself. Having been a sysadmin for years and written hundreds of articles and five books about Linux, I really should have known better. Then again, we all make mistakes, which is an important lesson: You're never too experienced to make a mistake.

I'm not going to discuss the details of my error. It's enough to tell you that it was a mistake and that I should have put a lot more thought into what I was doing before I did it. Besides, the details aren't really the point. Experience can't save you from every mistake you're going to make, but it can help you in recovery. And that's literally what this article is about: Using a Live USB distribution to boot and enter a recovery mode.

The problem

First, I created the problem, which was essentially a bad configuration for the /etc/default/grub file. Next, I used Ansible to distribute the misconfigured file to all my physical computers and run grub2-mkconfig. All 12 of them. Really, really fast.

All but two failed to boot. They crashed during the very early stages of Linux startup with various errors indicating that the root (/) filesystem could not be located.

I could use the root password to get into "maintenance" mode, but without the root filesystem mounted, it was impossible to access even the simplest tools. Booting directly to the recovery kernel did not work either. The systems were truly broken.

Recovery mode with Fedora

The only way to resolve this problem was to find a way to get into recovery mode. When all else fails, Fedora provides a really cool tool: The same Live USB thumb drive used to install new instances of Fedora.

After setting the BIOS to boot from the Live USB device, I booted into the Fedora 36 Xfce live user desktop. I opened two terminal sessions next to each other on the desktop and switched to root privilege in both.

I ran lsblk in one for reference. I used the results to identify the / root partition and the boot and efi partitions. I used one of my VMs, as seen below. There is no efi partition in this case because this VM does not use UEFI.

# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0           7:0    0  1.5G  1 loop
loop1           7:1    0    6G  1 loop
├─live-rw     253:0    0    6G  0 dm   /
└─live-base   253:1    0    6G  1 dm  
loop2           7:2    0   32G  0 loop
└─live-rw     253:0    0    6G  0 dm   /
sda             8:0    0  120G  0 disk
├─sda1          8:1    0    1G  0 part
└─sda2          8:2    0  119G  0 part
  ├─vg01-swap 253:2    0    4G  0 lvm  
  ├─vg01-tmp  253:3    0   10G  0 lvm  
  ├─vg01-var  253:4    0   20G  0 lvm  
  ├─vg01-home 253:5    0    5G  0 lvm  
  ├─vg01-usr  253:6    0   20G  0 lvm  
  └─vg01-root 253:7    0    5G  0 lvm  
sr0            11:0    1  1.6G  0 rom  /run/initramfs/live
zram0         252:0    0    8G  0 disk [SWAP]

The /dev/sda1 partition is easily identifiable as /boot, and the root partition is pretty obvious as well.

In the other terminal session, I performed a series of steps to recover my systems. The specific volume group names and device partitions such as /dev/sda1 will differ for your systems. The commands shown here are specific to my situation.

The objective is to boot and get through startup using the Live USB, then mount only the necessary filesystems in an image directory and run the chroot command to run Linux in the chrooted image directory. This approach bypasses the damaged GRUB (or other) configuration files. However, it provides a complete running system with all the original filesystems mounted for recovery, both as the source of the tools required and the target of the changes to be made.

Here are the steps and related commands:

1. Create the directory /mnt/sysimage to provide a location for the chroot directory.

2. Mount the root partition on /mnt/sysimage:

# mount /dev/mapper/vg01-root /mnt/sysimage

3. Make /mnt/sysimage your working directory:

# cd /mnt/sysimage

4. Mount the /boot and /boot/efi filesystems.
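
On the example VM above there is no EFI partition, so only /boot needs mounting. On a UEFI system, you would also mount the EFI system partition on boot/efi. The device name comes from the lsblk output and will differ on your system:

# mount /dev/sda1 boot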

5. Mount the other main filesystems. Filesystems like /home and /tmp are not needed for this procedure:

# mount /dev/mapper/vg01-usr usr

# mount /dev/mapper/vg01-var var

6. Mount important but already mounted filesystems that must be shared between the chrooted system and the original Live system, which is still out there and running:

# mount --bind /sys sys

# mount --bind /proc proc

7. Be sure to do the /dev directory last, or the other filesystems won't mount:

# mount --bind /dev dev

8. Chroot the system image:

# chroot /mnt/sysimage

The system is now ready for whatever you need to do to recover it to a working state. However, one time I was able to run my server for several days in this state until I could research and test real fixes. I don't really recommend that, but it can be an option in a dire emergency when things just need to get up and running–now!

The solution

The fix was easy once I got each system into recovery mode. Because my systems now worked just as if they had booted successfully, I simply made the necessary changes to /etc/default/grub and /etc/fstab and ran the grub2-mkconfig > boot/grub2/grub.cfg command. I used the exit command to exit from chroot and then rebooted the host.
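
In rough outline, the fix inside the chroot looked like this, with the paths as seen inside the chroot and the edits being whatever corrections your configuration needs:

# vi /etc/default/grub
# vi /etc/fstab
# grub2-mkconfig > /boot/grub2/grub.cfg
# exit
# reboot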

Of course, I could not automate the recovery from my mishap. I had to perform this entire process manually on each host—a fitting bit of karmic retribution for using automation to quickly and easily propagate my own errors.

Lessons learned

Despite their usefulness, I used to hate the "Lessons Learned" sessions we would have at some of my sysadmin jobs, but it does appear that I need to remind myself of a few things. So here are my "Lessons Learned" from this self-inflicted fiasco.

First, the ten systems that failed to boot used a different volume group naming scheme, and my new GRUB configuration failed to consider that. I just ignored the fact that they might possibly be different.

  • Think it through completely.
  • Not all systems are alike.
  • Test everything.
  • Verify everything.
  • Never make assumptions.

Everything now works fine. Hopefully, I am a little bit smarter, too.

The Fedora Live USB distribution provides an effective solution to boot and enter a recovery mode.

Image by: Photo by Markus Winkler on Unsplash


Control your home automation remotely with Raspberry Pi and Traefik Hub

Mon, 09/12/2022 - 15:00
Control your home automation remotely with Raspberry Pi and Traefik Hub Nicolas Mengin Mon, 09/12/2022 - 03:00

Over the years, several friends have asked me for tips on managing their home networks. In most cases, they are setting up home automation and want to access their services from the outside.

Every time I helped them, each made the same comment: "Are you kidding? It cannot be so complicated to publish one simple application!"

Publishing applications that don't put your network or cluster at risk can indeed be quite complicated. When we started working on Traefik Hub—the latest product by Traefik Labs—I knew it would be a game-changer for publishing applications.

This article demonstrates the complexity of publishing services and how Traefik Hub makes your life a lot easier. I use the example of setting up a server to control your home automation remotely with Traefik Hub running on a Raspberry Pi.

The challenge

Setting up a server to manage your home automation is nice, but being able to control it remotely from anywhere in the world using only your mobile phone—that's even nicer!

However, with great power comes great responsibility. If you want access to your local network from the outside, you'd better ensure it's resilient and that you are the only one with access.

First, I'll look at the steps you would normally take to achieve that.

Reach your Home Assistant remotely any time

Home Assistant is a well-known solution for managing home automation devices. It's an open source project written in Python. It allows you to run home automation from a local installation: no data in the cloud, and everything is kept private. I recommend this excellent article to help you install Home Assistant on your Raspberry Pi using Docker.

To reach your Home Assistant from the outside, you must expose your Raspberry Pi to the internet. To do so, you have to find your router's public IP address and create a port forward from your router to your Raspberry Pi.

Note: Most internet providers assign dynamic public IPs—each time your router restarts, your IP will probably change. To build a resilient system, you would also need a dynamic domain.

Encryption matters

When you communicate with your server, you send sensitive data, such as your username and password. You must verify and encrypt communication using a TLS certificate to avoid this data being stolen.

This requires buying a domain name, creating a dynamic domain, and installing a reverse proxy configured for encrypted access with a TLS certificate.

TL;DR

To sum it up, after installing Home Assistant on your Raspberry Pi, you need to:

  • Get your router public IP.
  • Create a port forward to your Raspberry Pi.
  • Buy a domain name.
  • Create a dynamic domain.
  • Install a reverse proxy and configure it for encrypted access using a TLS certificate.

Now, imagine if you could skip all of the steps above and publish your services in a few clicks!

Traefik Hub to the rescue

Traefik Hub is a cloud-native networking SaaS platform that allows users to publish their services at the edge quickly. Using Traefik Hub, you can publish your Home Assistant application in a few clicks.

Remember the challenges I mentioned earlier? Scratch that. Once you have Home Assistant installed on your Raspberry Pi, all you have to do is connect your Raspberry Pi to Traefik Hub. Traefik Hub handles everything for you, including:

  • Making your service reachable from the internet.
  • Providing a dynamic domain (for free).
  • Encrypting communication with a TLS certificate and an Access Control Policy.

And now that I have introduced Traefik Hub, I'll get down to the business of configuring it.

Step 1: Connect your Raspberry Pi to Traefik Hub

First, head over to Traefik Hub and sign up for a free account. You can sign up via Google or GitHub.

You need to add a new agent to connect your Raspberry Pi to Traefik Hub.

Image by: Nicolas Mengin, CC BY-SA 4.0

Traefik Hub provides several snippets that allow you to start from scratch.

Since the Home Assistant setup is a bit complex, you can get your token from the Hub UI and use the script below for this example. The token allows you to connect your agent to Traefik Hub. Traefik Hub then attaches this agent to your account, and you can start publishing your services.

Here's the script:

version: '3'

networks:
  traefik: {}

services:
  homeassistant:
    container_name: homeassistant
    image: "ghcr.io/home-assistant/home-assistant:stable"
    volumes:
      # /!\ Mount the custom configuration file described below /!\
      - ./configuration.yaml:/config/configuration.yaml
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped
    privileged: true
    networks:
      - traefik
    ports:
      - 8123

  # Start the agent with the latest version
  hub-agent:
    image: ghcr.io/traefik/hub-agent-traefik:v0.7.2
    restart: "on-failure"
    container_name: hub-agent
    networks:
      - traefik
    command:
      - run
      - --hub.token= # Set your token here
      - --auth-server.advertise-url=http://hub-agent
      - --traefik.host=traefik
      - --traefik.tls.insecure=true
      - --hub.url=https://platform.hub.traefik.io/agent
      - --hub.ui.url=https://hub.traefik.io
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - traefik

  # Start Traefik with the latest version
  traefik:
    image: traefik:v2.8
    container_name: traefik
    networks:
      - traefik
    command:
      # Enable Hub communication (opens ports 9900 and 9901 by default)
      - --experimental.hub=true
      - --hub.tls.insecure=true
      - --metrics.prometheus.addrouterslabels=true

And here is the configuration.yaml file to mount on your Home Assistant container:

# ./configuration.yaml to mount on your Home Assistant container
# in /config/configuration.yaml

# These modifications are required for Home Assistant to be exposed using
# third-party software such as the Traefik Hub agent

# Loads default set of integrations. Do not remove.
default_config:

http:
  ip_ban_enabled: true
  login_attempts_threshold: 5
  use_x_forwarded_for: true
  trusted_proxies:
    - 192.168.1.0/24
    - 172.18.0.0/24
    - 127.0.0.1
    - ::1
    - fe80::/64
    - fe00::/64
    - fd00::/64

# Text to speech
tts:
  - platform: google_translate

Step 2: Publish your service

Once you have installed the agent on your Raspberry Pi, Traefik Hub discovers every service running on your cluster so you can publish them without digging into your configuration files.

Image by: Nicolas Mengin, CC BY-SA 4.0

Select your Home Assistant service, and click the Save and Publish button to publish it.

Image by: Nicolas Mengin, CC BY-SA 4.0

And now let the magic happen!

Image by: Nicolas Mengin, CC BY-SA 4.0

Once Hub notifies you that your service has been published, you can reach it from the internet using the domain Traefik Hub has generated. The connection is verifiable and encrypted, and your Home Assistant remains reachable even if your public IP changes.

Image by: Nicolas Mengin, CC BY-SA 4.0

Behind the scenes

Your application is published. Next, I'll discuss a few things Traefik Hub takes care of behind the scenes to offer a seamless experience and some handy configuration options.

Traefik instance

When you installed the Traefik Hub Agent, you certainly noticed that it comes with a Traefik Proxy instance.

Traefik Hub creates a tunnel between its platform and the agent you installed on your Raspberry Pi to publish your service on the internet. The agent passes requests through to the open source Traefik Proxy, which acts as the ingress controller. Traefik Hub manages both the domain and the TLS certificate, and it shares the certificate with your Traefik instance so that it can perform TLS termination.

Image by: Nicolas Mengin, CC BY-SA 4.0

Access Control Policy

Another point to remember is that a deployed Home Assistant application comes with its own login system. However, when you publish a service using Traefik Hub, you can restrict access further with an Access Control Policy such as JWT or Basic Auth.
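
For example, if you choose Basic Auth, the policy needs a username and a password hash. Assuming it accepts htpasswd-style credentials (the format Traefik Proxy's own Basic Auth middleware uses), you can generate one locally with the htpasswd tool from the apache2-utils or httpd-tools package:

$ htpasswd -nbB alan 'use-a-strong-passphrase'

The command prints a user:hash pair you can paste into the policy.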

Image by: Nicolas Mengin, CC BY-SA 4.0

Kubernetes

If you are a Kubernetes user, you can also publish your Kubernetes Services. Traefik Hub can manage Kubernetes Services through the UI or a dedicated CRD.

Manage and monitor

Traefik Hub also provides a web UI that allows you to manage and monitor the services you expose.

Image by: Nicolas Mengin, CC BY-SA 4.0

Wrap up

This article started by going through a long and complicated list of tasks that come with publishing an application over an encrypted and verifiable connection. Setting up home automation is an excellent example of that level of complexity. But when things seem impossibly hard, there's always an easier alternative! Traefik Hub makes your life simpler by taking over most of the mundane operations tasks, saving time, and allowing developers to focus on building applications.

Now you can turn the lights on in your house, even if you're on the other side of the world!

If you're interested in learning more about Traefik Hub, check out this getting started article. Traefik Hub is currently in Beta, so please don't hesitate to give it a try and provide feedback—you can do so directly in the UI.

I hope you found this article helpful, and thank you for reading!

Employ open source networking to facilitate cloud-native apps.

Image by: 27707 via Pixabay, CC0. Modified by Jen Wike Huger.

Home automation Cloud Edge computing Raspberry Pi This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

5 ways to resize and optimize images for the web on Linux

Sat, 09/10/2022 - 15:00
5 ways to resize and optimize images for the web on Linux Seth Kenlon Sat, 09/10/2022 - 03:00

There was a time when 5 MB was a reasonable maximum size for an email attachment. Today, a single photo can easily be 5 MB. Accordingly, the maximum attachment size has increased to, say, 25 MB. But of course file sizes keep growing, so eventually the attachment limit will have to go up again. It's an endless cycle, common in the digital world: the tools are built for today's data, and today's data increases in complexity and size until the tools are revised and improved. In the meantime, you have to contain your data, preferably in the smallest packaging possible, so that sharing it online goes faster for everyone. Here are five ways to optimize images for the Internet.

What image size is good for the web?

First of all, there are two kinds of "sizes" when discussing digital images. The image size represents how many pixels wide and high an image is when you look at it on your screen. The file size represents how many bytes on a hard drive or SD card the image uses. It's the file size that limits how easy a file is to send over the Internet, because we all have different bandwidth allotments from our Internet providers and infrastructure. Of course, the larger the image size, the larger the file size tends to be, so the two are related.

To avoid confusion, in this article I use the term "image size" to refer to the pixel width and height of an image, and the term "file size" to refer to the bytes on a hard drive occupied by an image file.
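
If you're ever unsure which is which, you can check both from a terminal. The identify command (part of the ImageMagick suite covered later in this article) reports the pixel dimensions, and du reports the disk space the file occupies:

$ identify -format "%w x %h pixels\n" myphoto.jpg
$ du -h myphoto.jpg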

It's hard to know exactly what a "reasonable" image size and file size is for a photo on the Internet or being sent over email. There are some reasonable expectations, though. If you're posting a photo to a website, whether it's your own blog or social media, most people will probably view the photo at a resolution consistent with the screens commonly sold today. Your screen size, at least in 2022, is probably 1920 by 1080 (high definition, or HD) or thereabouts. Your photo, then, probably doesn't need to be any bigger than 1920 by 1080. Even people with a screen twice as big as yours could let your photo take up half their screen, which is probably sufficient.

The other part of the equation is the file format. Many file formats, like JPEG and PNG, imply a certain amount of compression. The more compression, the smaller the file size, but too much compression can render a blurry image. I like the WEBP format, which tends to offer better quality than JPEG at a smaller file size. It's well supported by image applications and all major web browsers.
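
You can test that claim on one of your own photos. Assuming ImageMagick is installed (its convert command appears again later in this article), convert a JPEG to WEBP at a chosen quality and compare the resulting file sizes:

$ convert myphoto.jpg -quality 80 myphoto.webp
$ ls -lh myphoto.jpg myphoto.webp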

1. Resize an image with Krita

The open source application Krita is technically a digital painting application but it happens to be a really great photo editor as well. I use it to load a photo, shrink it down to a reasonable size, and then save it in a web-optimized format.

Three easy steps:

  1. Go to the File menu and select Open to open your image in Krita.

  2. Go to the Image menu and select Scale image to new size. Type in the maximum width or height you want to resize your image to.

  3. Go to the File menu and select Save As and save the image as a WEBP image. Krita is smart and switches to WEBP as long as you use the .webp extension while saving your file (for example, myphoto.webp.)

Image by: Seth Kenlon, CC BY-SA 4.0

Krita is available for Linux, Windows, and macOS.

2. Resize an image with GIMP

The open source GNU Image Manipulation Program (GIMP) is a photo editor, and it can resize images.

Three easy steps:

  1. Go to the File menu and select Open to open your image.

  2. Go to the Image menu and select Scale image. Type in the maximum width or height you want to resize your image to.

  3. Go to the File menu and select Export As and save the image as a WEBP image. The application is smart and uses WEBP as long as you give the .webp extension while saving your file (for example, myphoto.webp.)

Image by: Seth Kenlon, CC BY-SA 4.0

GNU Image Manipulation Program (GIMP) is available for Linux, Windows, and macOS.

3. Resize an image with ImageMagick

The ImageMagick suite is a set of terminal commands that can manipulate images without even opening the files in a user interface. It's a fast and efficient way to modify lots of images all in one go.

One easy step:

$ convert 2022-09-09-PHOTO.JPG -scale 1920x myphoto.webp

In this command, convert is the component of ImageMagick that performs the conversion, and -scale is the option that resizes. The 1920x argument specifies that the converted image must be 1920 pixels wide, and the height (left blank after the x character) is auto-calculated.
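
Because it all happens on the command line, processing a whole directory is just a shell loop away. Here's a minimal sketch, assuming every photo ends in .JPG:

$ for f in *.JPG; do convert "$f" -scale 1920x "${f%.JPG}.webp"; done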

ImageMagick is available for Linux, macOS, and Windows.

4. Archive an image

Sometimes you may not want to resize an image, but you still need to reduce the file size (the bytes the file uses up on your hard drive or SD card). Images from consumer cameras, like those found in phones, are often already highly compressed, which doesn't leave much for a computer to optimize without resizing. However, professional cameras often shoot in formats that assume you want little or no compression, which means you can reduce the file size without losing quality by using a good archiving utility.

There are several archive utilities out there, and many may already be installed on your computer. For instance, if your computer can create ZIP archives, then you've already got the ZIP compression algorithm available.

Two easy steps:

  1. Open a file manager on your computer and locate the photo you want to compress.

  2. Right-click on the photo and select Compress (on some operating systems, this may be called Archive instead).

Provided there's enough uncompressed data in your image to allow for compression, the archive version ought to be smaller in file size than the original. You can send the archive over the Internet, and the recipient can un-archive the image with their own archive utility.
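
If you prefer the terminal, the same idea works with whichever archiver you have installed. For example, with zip (the file name below stands in for an uncompressed photo, such as a TIFF):

$ zip myphoto.zip 2022-09-09-PHOTO.TIF

The recipient then runs unzip myphoto.zip to get the original image back.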

Image by: Seth Kenlon, CC BY-SA 4.0

7-zip is an excellent archive tool for Linux, Windows, and macOS.

5. Split an image

If you're a Linux user, you can use the split command to cut an image into a few different pieces of a specific file size. Then you can send the pieces to someone, and they can reassemble the file using the cat command.

Assume the file 2022-09-09-PHOTO.JPG is 6.7 MB. You could cut it into four pieces by splitting it at every 2 MB. On your computer:

$ split 2022-09-09-PHOTO.JPG --bytes 2M
$ ls -l --human
[...] 6.7M Sep  7 14:50 2022-09-09-PHOTO.JPG
[...] 2.0M Sep  7 14:54 xaa
[...] 2.0M Sep  7 14:54 xab
[...] 2.0M Sep  7 14:54 xac
[...] 667K Sep  7 14:54 xad

On the recipient's computer:

$ cat xaa xab xac xad > myphoto.jpg

Save space

In the eternal struggle between file size and carrier capacity, it's likely we'll always have to make concessions. Using open source tools, whether for lossy compression, lossless archiving, or clever workarounds like splitting files, is a great way to save space and maximize the speed of communication. Sure, a picture is worth a thousand words, but it doesn't have to take up a thousand megabytes!

Resize, archive, and split images to make big files better for the Internet.

Image by: kreatikar via Pixabay, CC0

Art and design Linux This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Open source matters in data analytics: Here's why

Fri, 09/09/2022 - 15:00
Open source matters in data analytics: Here's why Ray Paik Fri, 09/09/2022 - 03:00

It's been a little over a year since I wrote my article on Opensource.com, introducing the Cube community. As I worked with our community members and other vendors, I've become more convinced of the benefits of open source in data analytics. I also think it's good to remind ourselves periodically why open source matters and how it provides long-term benefits for everyone.

Benefits of open source for users and customers

One of the first things I heard from the Cube community was that they often received better support in chat from other community members than they did with proprietary software and a paid support plan. Across many open source communities, I find people who are motivated to help other (especially new) community members and see it as a way of giving back to the community.

You don't need permission to participate in open source communities. A good open source community isn't only for developers; there's a culture of trust, and people feel comfortable enough to have open discussions on chat platforms, forums, and issue trackers. This is especially important for non-developers, such as data engineers or analysts, in the data analytics space.

Of course, with open source software, there's the ability to see and contribute directly to the codebase to fix bugs or add new features. Using an example from the Cube community, GraphQL support was one of our highlights last year, and our community members contributed to this feature.

There are plenty of benefits to an active community. Even in cases where the vendor cannot release a fix in a timely manner, you can still make the changes yourself and run your own patched version while you wait for an "official" fix. Community members and users also don't like being locked in to a vendor's whims, and there's no pressure to upgrade when using open source software.

Open source communities leave many "bread crumbs" in different tools like GitLab, GitHub, Codeberg, YouTube, and so on, making it a lot easier to gauge not just the volume of activities but also the level of community engagement and culture. So even before trying out the software, you can get a good sense of the community's health (and, by extension, the company) before deciding if this is a technology you want to invest in.

[ Related read How we track the community health of our open source project ]

Benefits of open source for the company

There's no better way to lower the barrier to adoption of your software than being open source. Early on, this helps grow adoption among the technical audience. Early adopters then often become some of your most loyal fans for years to come.

Early adopters are also catalysts for speeding up your development. Their feedback on your product and feature requests (for instance on your issue trackers) will provide insight into real-world use cases. In addition, many of the open source enthusiasts participate in co-development efforts (for example, on your repositories) for new features or bug fixes. Needless to say, this is precious for companies in the early days when there is a shortage of resources in development and product teams.

As you tend to your community, you will help it grow and diversify. Increased diversity isn't just in demographics or geography. You want users from new industries, or users with different job titles. Using the Cube community as an example, I mostly talked to application developers a year ago, but now I’m meeting with more people that are data consumers or users.

The collaborative culture in good open source communities lowers the barrier to entry not just for developers but also for others who want to ask questions, share their ideas, or make other non-technical contributions. You get better access to diverse perspectives as your company and community grow.

Being open source makes it easy to collaborate with other vendors and communities, not just with individual community members. For example, if you want to work with another vendor on a database driver or integration, it's a lot simpler when you can just collaborate across open source repositories.

Community matters

All these benefits lead to lower barriers to entry for using your software and collaborating on it. The open source model not only helps individual projects or companies, but it can also accelerate the growth of our entire ecosystem and industry. I hope to see more open source companies and communities in the data analytics space and for all of us to continue this journey.

Open source is critical in data analytics, providing long-term benefits for users, community members, and businesses.

Image by: Opensource.com

Data Science Community management This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
