opensource.com


Use autoloading and namespaces in PHP

By daggerhart | Tue, 04/18/2023

In the PHP language, autoloading is a way to automatically include class files of a project in your code. Say you had a complex object-oriented PHP project with more than a hundred PHP classes. You'd need to ensure all your classes were loaded before using them. This article aims to help you understand the what, why, and how of autoloading, along with namespaces and the use keyword, in PHP.

What is autoloading?

In a complex PHP project, you're probably using hundreds of classes. Without autoloading, you'd likely have to include every class manually. Your code would look like this:

<?php

// manually load every file the whole project uses
require_once __DIR__.'/includes/SomeClass1.php';
require_once __DIR__.'/includes/SomeClass2.php';
require_once __DIR__.'/includes/SomeClass3.php';
require_once __DIR__.'/includes/SomeClass4.php';
require_once __DIR__.'/includes/SomeClass5.php';
require_once __DIR__.'/includes/SomeClass6.php';
require_once __DIR__.'/includes/SomeClass7.php';
require_once __DIR__.'/includes/SomeClass8.php';
require_once __DIR__.'/includes/SomeClass9.php';
require_once __DIR__.'/includes/SomeClass10.php';
// ... a hundred more times

$my_example = new SomeClass1();

This is tedious at best and unmaintainable at worst.

What if, instead, you could have PHP automatically load class files when you need them? You can, with autoloading.

PHP autoloading 101

It only takes two steps to create an autoloader.

  1. Write a function that looks for files that need to be included.
  2. Register that function with the spl_autoload_register() core PHP function.

Here's how to do that for the above example:

<?php

/**
 * Simple autoloader
 *
 * @param $class_name - String name for the class that is trying to be loaded.
 */
function my_custom_autoloader( $class_name ) {
    $file = __DIR__.'/includes/'.$class_name.'.php';
    if ( file_exists($file) ) {
        require_once $file;
    }
}

// add a new autoloader by passing a callable into spl_autoload_register()
spl_autoload_register( 'my_custom_autoloader' );

$my_example = new SomeClass1(); // this works!

There you go. You no longer have to manually require_once every single class file in the project. Instead, with your autoloader, the system automatically requires a file as its class is used.

For a better understanding of what's going on here, walk through the exact steps in the above code:

  1. The function my_custom_autoloader expects one parameter called $class_name. Given a class name, the function looks for a file with that name and loads that file.
     
  2. The spl_autoload_register() function in PHP expects one callable parameter. A callable can be many things: a function name, a class method, or even an anonymous function (see the sketch just after this list). In this case, it's the function named my_custom_autoloader.
     
  3. The code is therefore able to instantiate a class named SomeClass1 without first having required its PHP file.
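
For illustration, here's a minimal sketch of the same autoloader registered as an anonymous function instead of a named one, assuming the same includes/ directory layout as the example above:

<?php

// Register an anonymous function as an autoloader.
// Assumes class files live in ./includes/ and are named after the class.
spl_autoload_register( function ( $class_name ) {
    $file = __DIR__.'/includes/'.$class_name.'.php';
    if ( file_exists($file) ) {
        require_once $file;
    }
} );

$my_example = new SomeClass1(); // still works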

So what happens when this script is run?

  1. PHP realizes that there's not yet a class named SomeClass1 loaded, so it executes registered autoloaders.
     
  2. PHP executes the custom autoload function (my_custom_autoloader), and it passes in the string SomeClass1 as the value for $class_name.
     
  3. The custom function builds the file path as $file = __DIR__.'/includes/SomeClass1.php';, checks that the file exists with file_exists(), and then (as long as the file is found) loads it with require_once __DIR__.'/includes/SomeClass1.php';. As a result, the class's PHP file is automatically loaded.

Huzzah! You now have a very simple autoloader that automatically loads class files as those classes are instantiated for the first time. In a moderately sized project, you've saved yourself from writing hundreds of lines of code.

What are PHP namespaces?

Namespaces are a way to encapsulate like functionalities or properties. An easy (and practical) analog is an operating system's directory structure. The file foo.txt can exist in both the directory /home/greg and in /home/other, but two copies of foo.txt cannot coexist in the same directory.

In addition, to access the foo.txt file outside of the /home/greg directory, you must prepend the directory name to the file name using the directory separator to get /home/greg/foo.txt.

You define a namespace at the top of a PHP file using the namespace keyword:

<?php

namespace Jonathan;

$a = 'The quick brown fox...';

function do_something() {
    echo "this function does a thing";
}

In the above example, I've defined both the $a variable and the do_something() function within the Jonathan namespace. Strictly speaking, PHP namespaces apply to classes, interfaces, functions, and constants, but not to variables, so it's the function that is truly encapsulated here. Most importantly, the namespaced function doesn't conflict with a function of the same name in the global scope.

For example, say you have the above code in its own file named jonathan-stuff.php. In a separate file, you have this:

<?php

require_once "jonathan-stuff.php";

$a = 'hello world';

function do_something() {
    echo "this function does a completely different thing";
}

echo $a; // hello world
do_something(); // this function does a completely different thing

No conflict. You have two functions named do_something(), one in the Jonathan namespace and one in the global namespace, and they co-exist with one another. (The $a variable, by contrast, is simply reassigned, because variables aren't affected by namespaces.)

Now all you have to do is figure out how to access the namespaced functions (and, later, classes). This is done with a syntax very similar to a directory structure, with backslashes:

<?php

\Jonathan\do_something(); // this function does a thing

This code executes the function named do_something() residing within the Jonathan namespace by prefixing the call with the namespace.

This method is also (and more commonly) used with classes. For example:

<?php

namespace Jonathan;

class SomeClass { }

This can be instantiated like so:

<?php

$something = new \Jonathan\SomeClass();

With namespaces, very large projects can contain many classes that share the same name without any conflicts. Pretty sweet, right?

What problems do namespaces solve?

To see the benefits namespaces provide, you have only to look back in time to a PHP without namespaces. Before PHP version 5.3, you couldn't encapsulate classes, so they were always at risk of conflicting with another class of the same name. It was (and still is, to some degree) not uncommon to prefix class names:

<?php

class Jonathan_SomeClass { }

As you can imagine, the larger the code base, the more classes, and the longer the prefixes. Don't be surprised if you open an old PHP project some time and find a class name more than 60 characters long, like:

<?php

class Jonathan_SomeEntity_SomeBundle_SomeComponent_Validator { }

What's the difference between writing a long class name like that and writing a long class name like \Jonathan\SomeEntity\SomeBundle\SomeComponent\Validator? That's a great question, and the answer lies in the ease of using that class more than once in a given context. Imagine you had to make use of a long class name multiple times within a single PHP file. Currently, you have two ways of doing this.

Without namespaces:

<?php

class Jonathan_SomeEntity_SomeBundle_SomeComponent_Validator { }

$a = new Jonathan_SomeEntity_SomeBundle_SomeComponent_Validator();
$b = new Jonathan_SomeEntity_SomeBundle_SomeComponent_Validator();
$c = new Jonathan_SomeEntity_SomeBundle_SomeComponent_Validator();
$d = new Jonathan_SomeEntity_SomeBundle_SomeComponent_Validator();
$e = new Jonathan_SomeEntity_SomeBundle_SomeComponent_Validator();

Oof, that's a lot of typing. Here it is with a namespace:

<?php

namespace Jonathan\SomeEntity\SomeBundle\SomeComponent;

class Validator { }

Elsewhere in the code:

<?php

$a = new \Jonathan\SomeEntity\SomeBundle\SomeComponent\Validator();
$b = new \Jonathan\SomeEntity\SomeBundle\SomeComponent\Validator();
$c = new \Jonathan\SomeEntity\SomeBundle\SomeComponent\Validator();
$d = new \Jonathan\SomeEntity\SomeBundle\SomeComponent\Validator();
$e = new \Jonathan\SomeEntity\SomeBundle\SomeComponent\Validator();

That certainly isn't much better. Luckily, there's a third way. You can leverage the use keyword to pull in a namespace.

The use keyword

The use keyword imports a given namespace into the current context. This allows you to make use of its contents without having to refer to its full path every time you use it.

<?php

namespace Jonathan\SomeEntity\SomeBundle\SomeComponent;

class Validator { }

Now you can do this:

<?php

use Jonathan\SomeEntity\SomeBundle\SomeComponent\Validator;

$a = new Validator();
$b = new Validator();
$c = new Validator();
$d = new Validator();
$e = new Validator();

Aside from encapsulation, importing is the real power of namespaces.
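
The use keyword can also alias an import with as, which helps when two imported classes share the same short name. Here's a quick sketch; the second class, Acme\Forms\Validator, is just a hypothetical example:

<?php

use Jonathan\SomeEntity\SomeBundle\SomeComponent\Validator;
use Acme\Forms\Validator as FormValidator; // the alias avoids a name clash

$a = new Validator();     // Jonathan\SomeEntity\SomeBundle\SomeComponent\Validator
$b = new FormValidator(); // Acme\Forms\Validator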

Now that you have an idea of what both autoloading and namespaces are, you can combine them to create a reliable means of organizing your project files.

PSR-4: The standard for PHP autoloading and namespaces

PHP Standard Recommendation (PSR) 4 is a commonly used pattern for organizing a PHP project so that the namespace for a class matches the relative file path to the file of that class.

For example, you're working in a project that makes use of PSR-4 and you have a namespaced class called \Jonathan\SomeBundle\Validator();. You can be sure the file for that class can be found in this relative location in the file system: /Jonathan/SomeBundle/Validator.php.

Just to drive this point home, here are more examples of where a PHP file exists for a class within a project making use of PSR-4:

  • Namespace and class: \Project\Fields\Email\Validator()
    • File location: /Project/Fields/Email/Validator.php
       
  • Namespace and class: \Acme\QueryBuilder\Where
    • File location: /Acme/QueryBuilder/Where.php
       
  • Namespace and class: \MyFirstProject\Entity\EventEmitter
    • File location: /MyFirstProject/Entity/EventEmitter.php

This isn't actually 100% accurate. Each component of a project has its own relative root, but don't discount this information: Knowing that PSR-4 implies the file location of a class helps you easily find any class within a large project.

How does PSR-4 work?

PSR-4 works because it's implemented by an autoloader function. Take a look at one example of a PSR-4 autoloader function:

<?php

spl_autoload_register( 'my_psr4_autoloader' );

/**
 * An example of a project-specific implementation.
 *
 * @param string $class The fully-qualified class name.
 * @return void
 */
function my_psr4_autoloader($class) {
    // replace namespace separators with directory separators in the relative
    // class name, append with .php
    $class_path = str_replace('\\', '/', $class);
    $file = __DIR__ . '/src/' . $class_path . '.php';

    // if the file exists, require it
    if (file_exists($file)) {
        require $file;
    }
}


Now assume you've just instantiated a class with new \Foo\Bar\Baz\Bug();.

  1. PHP executes the autoloader, passing the fully qualified class name (without a leading backslash) as the $class parameter: $class = "Foo\Bar\Baz\Bug".
     
  2. Use str_replace() to change all backslashes into forward slashes (like most directory structures use), turning the namespace into a directory path.
     
  3. Look for the existence of that file in the location /src/Foo/Bar/Baz/Bug.php.
     
  4. If the file is found, load it.

In other words, you change Foo\Bar\Baz\Bug to /src/Foo/Bar/Baz/Bug.php then locate that file.

Composer and autoloading

Composer is a command-line PHP package manager. You may have seen a project with a composer.json file in its root directory. This file tells Composer about the project, including the project's dependencies.

Here's an example of a simple composer.json file:

{ "name": "jonathan/example", "description": "This is an example composer.json file", "require": { "twig/twig": "^1.24" } }

This project is named "jonathan/example" and has one dependency: the Twig templating engine (any 1.x release at or above version 1.24, per the caret constraint).

With Composer installed, you can use this JSON file to download the project's dependencies by running composer install. In doing so, Composer generates a vendor/autoload.php file that automatically handles autoloading the classes in all of your dependencies.

[Image: Jonathan Daggerheart, CC BY-SA 4.0]

If you include this new file in a project, all classes within your dependency are automatically loaded, as needed.
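
As a sketch of what this looks like in practice, the snippet below requires Composer's generated autoloader at the project's entry point. It assumes you've also added an "autoload": { "psr-4": { "Jonathan\\": "src/" } } section to composer.json and run composer dump-autoload, so your own classes are handled by the same autoloader; the Jonathan namespace prefix and src/ directory are example values:

<?php
// index.php

// One require pulls in Composer's autoloader for every dependency.
require_once __DIR__.'/vendor/autoload.php';

// With the example PSR-4 mapping in composer.json, your own namespaced
// classes load automatically too (here, from src/SomeBundle/Validator.php).
$validator = new \Jonathan\SomeBundle\Validator();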

PSR makes PHP better

Because of the PSR-4 standard and its widespread adoption, Composer can generate an autoloader that automatically handles loading your dependencies as you instantiate them within your project. The next time you write PHP code, keep namespaces and autoloading in mind.

PHP autoloading and namespaces provide handy conveniences with huge benefits.



Talk to your cluster with this open source Python API wrapper

By rnetser1 | Tue, 04/18/2023

Open source projects that create a wrapper around an API are becoming increasingly popular. These projects make it easier for developers to interact with APIs and use them in their applications. The openshift-python-wrapper project is a wrapper around openshift-restclient-python. What began as an internal package to help our team work with the OpenShift API became an open source project (Apache License 2.0).

This article discusses what an API wrapper is, why it's useful, and some examples from the wrapper.

Why use an API wrapper?

An API wrapper is a layer of code that sits between an application and an API. It simplifies the API access process by abstracting away some of the complexities involved in making requests and parsing responses. A wrapper can also provide additional functionality beyond what the API itself offers, such as caching or error handling.

Using an API wrapper makes the code more modular and easier to maintain. Instead of writing custom code for every API, you can use a wrapper that provides a consistent interface for interacting with APIs. It saves time, avoids code duplications, and reduces the chance of errors.

Another benefit of using an API wrapper is that it can shield your code from changes to the API. If an API changes its interface, you can update the wrapper code without modifying your application code. This can reduce the work required to maintain your application over time.

Install

The application is on PyPi, so install openshift-python-wrapper using the pip command:

$ python3 -m pip install openshift-python-wrapper

Python wrapper

The OpenShift REST API provides programmatic access to many of the features of the OpenShift platform. The wrapper offers a simple and intuitive interface for interacting with the API using the openshift-restclient-python library. It standardizes how to work with cluster resources and offers unified resource CRUD (Create, Read, Update, and Delete) flows. It also provides additional capabilities, such as resource-specific functionality that otherwise needs to be implemented by users. The wrapper makes code easier to read and maintain over time.

One example of simplified usage is interacting with a container. Running a command inside a container requires using Kubernetes stream, handling errors, and more. The wrapper handles it all and provides simple and intuitive functionality.

>>> from ocp_resources.pod import Pod
>>> from ocp_utilities.infra import get_client
>>> client = get_client()
ocp_utilities.infra INFO Trying to get client via new_client_from_config
>>> pod = Pod(client=client, name="nginx-deployment-7fb96c846b-b48mv", namespace="default")
>>> pod.execute("ls")
ocp_resources Pod INFO Execute ls on nginx-deployment-7fb96c846b-b48mv (ip-10-0-155-108.ec2.internal)
'bin\nboot\ndev\netc\nhome\nlib\nlib64\nmedia\nmnt\nopt\nproc\nroot\nrun\nsbin\nsrv\nsys\ntmp\nusr\nvar\n'


Developers or testers can use this wrapper—our team wrote the code while keeping testing in mind. Using Python capabilities, context managers can provide out-of-the-box resource creation and deletion, and inheritance can be used to extend functionality for specific use cases. Pytest fixtures can utilize the code for setup and teardown, leaving no leftovers. Resources can even be saved for debugging. Resource manifests and logs can be easily collected.

Here's an example of a context manager:

@pytest.fixture(scope="module") def namespace(): admin_client = get_client() with Namespace(client=admin_client, name="test-ns",) as ns: ns.wait_for_status(status=Namespace.Status.ACTIVE, timeout=240) yield ns def test_ns(namespace): print(namespace.name)

Generators iterate over resources, as seen below:

>>> from ocp_resources.node import Node
>>> from ocp_utilities.infra import get_client
>>> admin_client = get_client()
# This returns a generator
>>> for node in Node.get(dyn_client=admin_client):
...     print(node.name)
ip-10-0-128-213.ec2.internal

Open source code for open source communities

To paraphrase a popular saying, "If you love your code, set it free." The openshift-python-wrapper project started as utility modules for OpenShift Virtualization. As more and more projects benefitted from the code, we decided to extract those utilities into a separate repository and open source them. To paraphrase another common saying, "If the code does not return to you, it means it was never yours." We like saying that once that happens, it's truly open source.

More contributors and maintainers mean that the code belongs to the community. Everyone is welcome to contribute.

Combine the power of an open API and the Python programming language.



Run a distributed database on the cloud

By wuweijie | Mon, 04/17/2023

Apache ShardingSphere is an open source distributed database toolkit. It enhances any database with data sharding, elastic scaling, encryption, and many other capabilities. Deploying and maintaining ShardingSphere-Proxy clusters and load balancing manually can be labor-intensive and time-consuming. To address this issue, Apache ShardingSphere offers ShardingSphere on Cloud, a collection of cloud-based solutions.

ShardingSphere on Cloud includes automated deployment scripts for virtual machines in cloud environments. It also includes tools for Kubernetes cloud-native environments and a variety of hands-on content for high availability, observability, security policy compliance, and more. This includes Helm Charts, an Operator, and automatic horizontal scaling.


The new cloud project provides the following capabilities:

  • Helm Charts-based ShardingSphere-Proxy for one-click deployment in Kubernetes environments.
  • Operator-based ShardingSphere-Proxy for one-click deployment and automated maintenance in Kubernetes environments.
  • Amazon Web Services (AWS) CloudFormation-based ShardingSphere-Proxy for rapid deployment.
  • Terraform-based rapid deployment of ShardingSphere-Proxy in AWS environments.

This article demonstrates one of the fundamental capabilities of ShardingSphere on Cloud: One-click deployment of ShardingSphere-Proxy clusters in Kubernetes using Helm Charts.

  1. Use the following three-line command to create a three-node ShardingSphere-Proxy cluster within a Kubernetes cluster with the default configuration and serve it through the Service:

    $ helm repo add shardingsphere https://apache.github.io/shardingsphere-on-cloud
    $ helm repo update
    $ helm install shardingsphere-proxy shardingsphere/apache-shardingsphere-proxy-charts -n shardingsphere

[Image: Wu Weijie, CC BY-SA 4.0]

  2. The application can access the ShardingSphere-Proxy cluster through the svc domain:

    $ kubectl run mysql-client --image=mysql:5.7.36 \
        --image-pull-policy=IfNotPresent -- sleep 300
    $ kubectl exec -i -t mysql-client -- mysql \
        -h shardingsphere-proxy-apache-shardingsphere-proxy.shardingsphere.svc.cluster.local \
        -P3307 -uroot -proot

[Image: Wu Weijie, CC BY-SA 4.0]

It's as easy as that. You're running ShardingSphere on the cloud, and that's just the beginning. For more advanced features, refer to the official ShardingSphere-on-Cloud documentation.

This article is adapted from A Distributed Database Load Balancing Architecture Based on ShardingSphere: Demo and User Case and is republished with permission.

Create a distributed database cluster with Kubernetes in two easy steps.



Preserving the open web through Drupal

By Dries | Mon, 04/17/2023

Just because I share content online doesn't mean I want to share control over it.

My website is a perfect example of what I mean. I take photos nearly everywhere I go: To date, I have more than 10,000 photos uploaded to my Drupal site. Using something like Instagram might be easier, but my photos are precious to me, which is why I feel so strongly about preserving the open web.

There are many reasons proprietary platforms don't meet my requirements for sharing. First, I like to own my data. If you think back to early social media sites like MySpace, they infamously lost massive amounts of user data. Artists lost their music. People lost their photos. This sort of thing still happens on Facebook and other social media sites.

[ Related read: How to switch from Twitter to Mastodon ]

Second, I don't like how proprietary platforms limit my creative freedom. Websites built on proprietary platforms often use templates, which makes them all look the same. Similar trends are happening in the design world — minimalist design has led to the death of detail. From doorbells to templated websites, unique design elements have been eliminated in favor of safe neutrals in an attempt to please everyone. This trend strips the personality right out from under websites, which used to be highly personal. Remember when people's early internet homepages actually felt like their digital homes?

Finally, I don't like how proprietary platforms treat my friends and family. People get tracked every time you upload a photo to a social media site. They're more like social monetization sites that make you scroll for hours, only to target you with ads. The problem got so bad that even Apple added tracking protection to WebKit in its Safari browser. Now, many social sites have their own in-app browsers as a workaround, so they can still track you.

I want the open web to win. That's why the ongoing enhancements to Drupal are such good news.

A world where good software wins

A few decades ago, the web was about getting information. Today, it's ingrained in every aspect of daily life. People can pay bills, work, socialize, get healthcare—and still get information. As more and more activities take place on the web, there's an even greater responsibility to ensure that the open web is inclusive of every person and accounts for everyone's safety. When people are excluded from accessing online experiences, they're cut out of rewarding careers, independent lifestyles, and the social interactions and friendships that bring people together. That's why good software and open source projects like Drupal are so important to protecting and growing the open web.

Good software is open, flexible, pro-privacy, secure, and doesn't lock you in. Good software lets you control your own code and your own data. It lets you prioritize what's important to you, whether that's accessibility, privacy, or something else. You're fully in control of your own destiny.

Good software also cares about end users. To that end, the Drupal community has been working on making software that solves people's problems and brings even more users into the community. This will be our priority now and in the future. We want to build a world where good software wins.

Opening up Drupal to more users with improved composability

Many people and organizations around the world want to build ambitious web experiences that are secure, scalable, and mindful of privacy—differentiated experiences that lead with accessibility. Drupal is for them, in part because of its composability.

Composability is one of the hottest trends in the technology market right now. Composability starts with software that's modular and made up of components. With more than 40,000 modules, Drupal meets that requirement. However, there's so much more to composability than an architecture for developers. Composability is a new way of doing business.

It's about providing low-code or no-code tools that enable business users to participate in digital experience building. Even non-technical users can do this from Drupal's user interface. Layout Builder, a visual design tool, helps content editors and site builders easily and quickly create visual layouts for displaying various content types. Composability is also about decoupling the front end from the back end and allowing users to push content and data to different touchpoints on the web. Drupal has been investing in headless and decoupled architectures for almost a decade and now ships with headless capabilities out of the box.

Finally, because composability offers limitless possibilities to combine modules, it can easily get complicated. You need a good way to search for modules and update sites to manage dependencies and versions. Drupal uses a tool called Project Browser to make it easier for site builders to browse innovative modules built by the community. Distributions and recipes allow users to bundle modules and configurations together for reusable, prepackaged business solutions.


If you haven't checked out Drupal in a while, I recommend you see it for yourself. The latest version, Drupal 10, recently shipped with even more innovations including:

  • Modernized front-end experience (Olivero theme)
  • Modernized back-end experience (Claro theme)
  • An improved content editing experience (CKEditor 5)
  • Improved developer experience built on Symfony 6.2 with support for PHP 8.2
  • A new theme generator
  • And more

Building a better web for the future

The launch of Drupal 10 comes at a good time. There's so much turmoil happening within the top social networking sites. If nothing else, it's a good reminder that preserving and promoting an open web is more important than ever before. Open source, the IndieWeb, the Fediverse, decentralized social media platforms, and even RSS are all seeing newfound appreciation and adoption as people realize the drawbacks of communicating and collaborating over proprietary platforms.

An open web means opportunity for all. And open source gives everyone the freedom to understand how their software works, to collectively improve it, and to build the web better for future generations. The main focus of Drupal 10 was to bring even more site builders to Drupal. In that way, Drupal will help extend the open web's reach and protect its long-term well-being for years to come.

Drupal updates create opportunities for everyone to participate in the open web.



Edit your photos with open source artificial intelligence

By Don Watkins | Sat, 04/15/2023

I've been interested in photography ever since I co-opted my father's Kodak 620 camera as a young boy. I used it to take pictures of the flora and fauna of our neighborhood. My love of photography led me to an Instamatic camera in high school, and eventually to digital cameras as they entered the marketplace in the late 1990s. Early digital cameras provided portability and the ability to quickly capture and easily share images on the internet. But they lacked the quality and complexity of the best of film photography. Of course digital cameras have improved a lot since then. But I have years of digital photographs that just look a little, well, little on modern devices.

Until recently, my go-to tool for upscaling digital images has been GIMP. A couple of years ago, I tried to use GIMP to upscale a thumbnail image of my father that was taken in the mid-1940s. It worked, but the photo lacked the detail, depth, and clarity that I wanted.

That's all changed since I learned about Upscayl, a free and open source program that uses open source artificial intelligence to upscale low-resolution images.

Upscayl

Upscayl works on Linux, Windows, and macOS.

It's easy to install on Linux whether your system uses RPM or DEB packages, and its website contains a universal Linux AppImage too.

For macOS and Windows, you can download installers from the project's website. Upscayl is released under the AGPL license.

Get started with Upscayl

Once installed, you can begin upscaling your images. The GUI software is very easy to use, and it makes your old images look like they were taken yesterday, with resolutions that far exceed the originals. In addition, you can batch process entire folders or photo albums and upscale all of the images at once.

[Image: Don Watkins, CC BY-SA 4.0]


Launch the software and click the Select Image button. Find the image or folder of images you want to upscale.

Once the image is loaded, select the type of upscaling you want to try. The default is Real-ESRGAN, and that's a good place to start. There are six options to choose from, including a selection for digital art.

Next, select the output directory where you want your upscaled images to be saved.

And finally, click the Upscayl button to begin the upscaling process. The speed of conversion depends on your GPU and the image output choice you make.

Here's a test image, with the low-resolution image on the left and the Upscayl version on the right:

[Image: low-resolution original (left) and Upscayl output (right). Derived from Jurica Koletić, Unsplash License]

Time to try Upscayl for your images

Upscayl is one of my favorite upscaling applications. It does depend heavily on your GPU, so it may not work on an old computer or one with an especially weak graphics card. But there's no harm in giving it a try. So download it and try it. I think you'll be impressed with the results.

Upscayl is a free and open source program that uses open source artificial intelligence to upscale low-resolution images.


A distributed database load-balancing architecture with ShardingSphere

By wuweijie | Fri, 04/14/2023

Apache ShardingSphere is a distributed database ecosystem that transforms any database into a distributed database and enhances it with data sharding, elastic scaling, encryption, and other capabilities. In this article, I demonstrate how to build a distributed database load-balancing architecture based on ShardingSphere and the impact of introducing load balancing.

The architecture

A ShardingSphere distributed database load-balancing architecture consists of two products: ShardingSphere-JDBC and ShardingSphere-Proxy, which can be deployed independently or in a hybrid architecture. The following is the hybrid deployment architecture:

[Image: hybrid deployment architecture (Wu Weijie, CC BY-SA 4.0)]

ShardingSphere-JDBC load-balancing solution

ShardingSphere-JDBC is a lightweight Java framework with additional services in the JDBC layer. ShardingSphere-JDBC adds computational operations before the application performs database operations. The application process still connects directly to the database through the database driver.

As a result, users don't have to worry about load balancing with ShardingSphere-JDBC. Instead, they can focus on how their application is load balanced.

ShardingSphere-Proxy load-balancing solution

ShardingSphere-Proxy is a transparent database proxy that provides services to clients over the database protocol. Here's ShardingSphere-Proxy as a standalone deployed process with load balancing on top of it:

[Image: ShardingSphere-Proxy deployed as a standalone process with load balancing on top (Wu Weijie, CC BY-SA 4.0)]

Load balancing solution essentials

The key point of ShardingSphere-Proxy cluster load balancing is that the database protocol itself is designed to be stateful (connection authentication status, transaction status, Prepared Statement, and so on).

If the load balancer on top of ShardingSphere-Proxy cannot understand the database protocol, your only option is four-tier (transport-level) load balancing in front of the ShardingSphere-Proxy cluster. In this case, a specific proxy instance maintains the state of the database connection between the client and ShardingSphere-Proxy.

Because the proxy instance maintains the connection state, four-tier load balancing can only achieve connection-level load balancing. Multiple requests on the same database connection cannot be distributed across multiple proxy instances, so request-level load balancing is not possible.

This article does not cover the details of four- and seven-tier load balancing.

Recommendations for the application layer

Theoretically, there is no functional difference between a client connecting directly to a single ShardingSphere-Proxy or to a ShardingSphere-Proxy cluster through a load-balancing entry point. However, there are some differences in the technical implementation and configuration of the different load balancers.

For example, a direct connection to ShardingSphere-Proxy places no limit on how long a database connection session can be held, but some Elastic Load Balancing (ELB) products have a maximum session hold time of 60 minutes at Layer 4. If an idle database connection is closed by a load-balancing timeout, but the client is unaware of the passive TCP connection closure, the application may report an error.

Therefore, in addition to considerations at the load balancing level, you might consider measures for the client to avoid the impact of introducing load balancing.

On-demand connection creation

If a connection is created once and used continuously, it will sit idle most of the time when it only serves a timed job with a one-hour interval and a short execution time. When the client is unaware of changes in the connection state, that long idle time increases the uncertainty of the connection state. For scenarios with long execution intervals, consider creating connections on demand and releasing them after use.

Connection pooling

General database connection pools have the ability to maintain valid connections, reject failed connections, and so on. Managing database connections through connection pools can reduce the cost of maintaining connections yourself.

Enable TCP KeepAlive

Clients generally support TCP KeepAlive configuration:

  • MySQL Connector/J supports autoReconnect or tcpKeepAlive, which are not enabled by default.
  • The PostgreSQL JDBC Driver supports tcpKeepAlive, which is not enabled by default.

Nevertheless, there are some limitations to how TCP KeepAlive can be enabled:

  • The client does not necessarily support the configuration of TCP KeepAlive or automatic reconnection.
  • The client does not intend to make any code or configuration adjustments.
  • TCP KeepAlive is dependent on the operating system implementation and configuration.

User case

Recently, a ShardingSphere community member provided feedback that their ShardingSphere-Proxy cluster was providing services to the public with upper-layer load balancing. In the process, they found problems with the connection stability between their application and ShardingSphere-Proxy.

Problem description

Assume the user's production environment uses a three-node ShardingSphere-Proxy cluster serving applications through a cloud vendor's ELB.

[Image: Wu Weijie, CC BY-SA 4.0]

One of the applications is a resident process that executes timed jobs, which are executed hourly and have database operations in the job logic. The user feedback is that each time a timed job is triggered, an error is reported in the application log:

send of 115 bytes failed with errno=104 Connection reset by peer

Checking the ShardingSphere-Proxy logs, there are no abnormal messages.

The issue only occurs with timed jobs that execute hourly. All other applications access ShardingSphere-Proxy normally. As the job logic has a retry mechanism, the job executes successfully after each retry without impacting the original business.

Problem analysis

The reason why the application shows an error is clear—the client is sending data to a closed TCP connection. The troubleshooting goal is to identify exactly why the TCP connection was closed.

Given the three factors listed below, I recommended performing a network packet capture on both the application side and the ShardingSphere-Proxy side within a few minutes before and after the point at which the problem occurs:

  • The problem will recur on an hourly basis.
  • The issue is network related.
  • The issue does not affect the user's real-time operations.

Packet capture phenomenon 1

ShardingSphere-Proxy receives a TCP connection establishment request from the client every 15 seconds. The client, however, sends an RST to the proxy immediately after completing the three-way handshake. The client sends the RST without any response after receiving the Server Greeting, or even before the proxy has sent the Server Greeting.

[Image: Wu Weijie, CC BY-SA 4.0]

However, no traffic matching the above behavior exists in the application-side packet capture results.

By consulting the community member's ELB documentation, I found that the above network interaction is how that ELB implements the four-layer health check mechanism. Therefore, this phenomenon is not relevant to the problem in this case.

[Image: Wu Weijie, CC BY-SA 4.0]

Packet capture phenomenon 2

The MySQL connection is established between the client and the ShardingSphere-Proxy, and the client sends an RST to the proxy during the TCP connection disconnection phase.

[Image: Wu Weijie, CC BY-SA 4.0]

The above packet capture results reveal that the client first initiated the COM_QUIT command to ShardingSphere-Proxy. The client disconnected the MySQL connection, possibly (but not only) for one of the following reasons:

  • The application finished using the MySQL connection and closed the database connection normally.
  • The application's database connection to ShardingSphere-Proxy is managed by a connection pool, which performs a release operation for idle connections that have timed out or have exceeded their maximum lifetime. As the connection is actively closed on the application side, it does not theoretically affect other business operations unless there is a problem with the application's logic.

After several rounds of packet analysis, no RSTs had been sent to the client by the ShardingSphere-Proxy in the minutes before and after the problem surfaced.

Based on the available information, it's possible that the connection between the client and ShardingSphere-Proxy was disconnected earlier, but the packet capture time was limited and did not capture the moment of disconnection.

Because the ShardingSphere-Proxy itself does not have the logic to actively disconnect the client, the problem is being investigated at both the client and ELB levels.

Client application and ELB configuration check

The user feedback included the following additional information:

  • The application's timed jobs execute hourly, the application does not use a database connection pool, and a database connection is manually maintained and provided for ongoing use by the timed jobs.
  • The ELB is configured with Layer-4 session persistence and a session idle timeout of 40 minutes.

Considering the frequency of execution of timed jobs, I recommend that users modify the ELB session idle timeout to be greater than the execution interval of timed jobs. After the user changed the ELB timeout to 66 minutes, the connection reset problem no longer occurred.

If the user had continued packet capturing during troubleshooting, it's likely they would have found ELB traffic that disconnects the TCP connection at the 40th minute of each hour.

Problem conclusion

The client reported a Connection reset by peer error.

Root cause:

The ELB idle timeout was less than the timed task execution interval. The client was idle for longer than the ELB session hold timeout, resulting in the connection between the client and ShardingSphere-Proxy being disconnected by the ELB timeout.

The client sent data to a TCP connection that had been closed by the ELB, resulting in the error Connection reset by peer.

Timeout simulation experiment

I decided to conduct a simple experiment to verify the client's performance after a load-balancing session timeout. I performed a packet capture during the experiment to analyze network traffic and observe the behavior of load-balancing.

Build a load-balanced ShardingSphere-Proxy clustered environment

Theoretically, this article could cover any four-tier load-balancing implementation. I selected Nginx.

I set the TCP session idle timeout to one minute, as seen below:

user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    upstream shardingsphere {
        hash $remote_addr consistent;
        server proxy0:3307;
        server proxy1:3307;
    }

    server {
        listen 3306;
        proxy_timeout 1m;
        proxy_pass shardingsphere;
    }
}

Construct a Docker compose file

Here's a Docker compose file:

version: "3.9" services: nginx: image: nginx:1.22.0 ports: - 3306:3306 volumes: - /path/to/nginx.conf:/etc/nginx/nginx.conf proxy0: image: apache/shardingsphere-proxy:5.3.0 hostname: proxy0 ports: - 3307 proxy1: image: apache/shardingsphere-proxy:5.3.0 hostname: proxy1 ports: - 3307 Startup environment

Start the containers:

$ docker compose up -d
[+] Running 4/4
 ⠿ Network lb_default     Created    0.0s
 ⠿ Container lb-proxy1-1  Started    0.5s
 ⠿ Container lb-proxy0-1  Started    0.6s
 ⠿ Container lb-nginx-1   Started

Simulation of client-side same-connection-based timed tasks

First, construct a client-side deferred SQL execution. Here, the ShardingSphere-Proxy is accessed through Java and MySQL Connector/J.

The logic:

  1. Establish a connection to the ShardingSphere-Proxy and execute a query to the proxy.
  2. Wait 55 seconds and then execute another query to the proxy.
  3. Wait 65 seconds and then execute another query to the proxy.
public static void main(String[] args) {
    try (Connection connection = DriverManager.getConnection("jdbc:mysql://127.0.0.1:3306?useSSL=false", "root", "root");
            Statement statement = connection.createStatement()) {
        log.info(getProxyVersion(statement));
        TimeUnit.SECONDS.sleep(55);
        log.info(getProxyVersion(statement));
        TimeUnit.SECONDS.sleep(65);
        log.info(getProxyVersion(statement));
    } catch (Exception e) {
        log.error(e.getMessage(), e);
    }
}

private static String getProxyVersion(Statement statement) throws SQLException {
    try (ResultSet resultSet = statement.executeQuery("select version()")) {
        if (resultSet.next()) {
            return resultSet.getString(1);
        }
    }
    throw new UnsupportedOperationException();
}

Expected and client-side run results:

  1. A client connects to the ShardingSphere-Proxy, and the first query is successful.
  2. The client's second query is successful.
  3. The client's third query results in an error due to a broken TCP connection because the Nginx idle timeout is set to one minute.

The execution results are as expected. Due to differences between the programming language and the database driver, the error messages behave differently, but the underlying cause is the same: Both TCP connections have been disconnected.

The logs are shown below:

15:29:12.734 [main] INFO icu.wwj.hello.jdbc.ConnectToLBProxy - 5.7.22-ShardingSphere-Proxy 5.1.1
15:30:07.745 [main] INFO icu.wwj.hello.jdbc.ConnectToLBProxy - 5.7.22-ShardingSphere-Proxy 5.1.1
15:31:12.764 [main] ERROR icu.wwj.hello.jdbc.ConnectToLBProxy - Communications link failure

The last packet successfully received from the server was 65,016 milliseconds ago. The last packet sent successfully to the server was 65,024 milliseconds ago.
        at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
        at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
        at com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1201)
        at icu.wwj.hello.jdbc.ConnectToLBProxy.getProxyVersion(ConnectToLBProxy.java:28)
        at icu.wwj.hello.jdbc.ConnectToLBProxy.main(ConnectToLBProxy.java:21)
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure

The last packet successfully received from the server was 65,016 milliseconds ago. The last packet sent successfully to the server was 65,024 milliseconds ago.
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
        at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
        at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
        at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
        at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
        at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
        at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
        at com.mysql.cj.protocol.a.NativeProtocol.readMessage(NativeProtocol.java:581)
        at com.mysql.cj.protocol.a.NativeProtocol.checkErrorMessage(NativeProtocol.java:761)
        at com.mysql.cj.protocol.a.NativeProtocol.sendCommand(NativeProtocol.java:700)
        at com.mysql.cj.protocol.a.NativeProtocol.sendQueryPacket(NativeProtocol.java:1051)
        at com.mysql.cj.protocol.a.NativeProtocol.sendQueryString(NativeProtocol.java:997)
        at com.mysql.cj.NativeSession.execSQL(NativeSession.java:663)
        at com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1169)
        ... 2 common frames omitted
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
        at com.mysql.cj.protocol.FullReadInputStream.readFully(FullReadInputStream.java:67)
        at com.mysql.cj.protocol.a.SimplePacketReader.readHeaderLocal(SimplePacketReader.java:81)
        at com.mysql.cj.protocol.a.SimplePacketReader.readHeader(SimplePacketReader.java:63)
        at com.mysql.cj.protocol.a.SimplePacketReader.readHeader(SimplePacketReader.java:45)
        at com.mysql.cj.protocol.a.TimeTrackingPacketReader.readHeader(TimeTrackingPacketReader.java:52)
        at com.mysql.cj.protocol.a.TimeTrackingPacketReader.readHeader(TimeTrackingPacketReader.java:41)
        at com.mysql.cj.protocol.a.MultiPacketReader.readHeader(MultiPacketReader.java:54)
        at com.mysql.cj.protocol.a.MultiPacketReader.readHeader(MultiPacketReader.java:44)
        at com.mysql.cj.protocol.a.NativeProtocol.readMessage(NativeProtocol.java:575)
        ... 8 common frames omitted

Packet capture results analysis

The packet capture results show that after the connection idle timeout, Nginx disconnects over TCP from both the client and the proxy. The client, however, is not aware of this, so when it sends a command over the stale connection, Nginx returns an RST.

After the Nginx connection idle timeout, the TCP disconnection process with the proxy completes normally. The proxy is unaware when the client sends subsequent requests using the disconnected connection.

Analyze the following packet capture results:

  • Numbers 1–44 are the interaction between the client and the ShardingSphere-Proxy to establish a MySQL connection.
  • Numbers 45–50 are the first query performed by the client.
  • Numbers 55–60 are the second query executed by the client 55 seconds after the first query is executed.
  • Numbers 73–77 are the TCP connection disconnection processes initiated by Nginx to both the client and ShardingSphere-Proxy after the session times out.
  • Numbers 78–79 are the third query executed 65 seconds after the client executes the second query, including the Connection Reset.

[Image: packet capture results (Wu Weijie, CC BY-SA 4.0)]

Wrap up

Troubleshooting disconnection issues involves examining both the ShardingSphere-Proxy settings and the configurations enforced by the cloud service provider's ELB. It's useful to capture packets to understand when particular events occur, especially RST messages, relative to idle times and timeout settings.

The above implementation and troubleshooting scenario is based on a specific ShardingSphere-Proxy deployment. For a discussion of cloud-based options, see my followup article. ShardingSphere on Cloud offers additional management options and configurations for a variety of cloud service provider environments.

This article is adapted from A Distributed Database Load Balancing Architecture Based on ShardingSphere: Demo and User Case and is republished with permission.

Load balance your distributed database the right way.



5 Raspberry Pi projects to do with this open source data tool

By Anais | Fri, 04/14/2023

Tiny computers are practically out-sized by their own potential. There's seemingly nothing they can't do, regardless of your industry or interest. For instance, did you know you could use a Raspberry Pi or Arduino to help you keep plants alive or to assist you in making tasty beer and barbecue?

Over the years, my team at the open source data platform InfluxDB has realized that professional and novice developers can combine a Pi or Arduino with InfluxDB for some unique do-it-yourself projects.

This article explores five exciting things you can do with a Raspberry Pi or Arduino and InfluxDB, whether you're a seasoned developer or a beginner. Hopefully, these ideas inspire you (and maybe offer some laughs) ahead of your next tiny computer and InfluxDB project.

1. Weather and environment monitoring

The developer relations (DevRel) team at InfluxData created the Plant Buddy application, which monitors temperature, humidity, soil moisture, and light to help users' plants stay alive and thrive. The project showcases how to use InfluxDB as a storage backend for a Python Flask server, retrieve IoT data, and visualize that data with Python Dash. My team used an Arduino control board for the project, but treating a Raspberry Pi as a microcontroller is also simple. You could even extend the project to create a tool that monitors any other aspect of weather and the environment.

2. BBQ monitoring

Several of my colleagues at InfluxData are grilling and smoking hobbyists. But even the most basic BBQ enthusiast knows that the key to a great rack of ribs or brisket is cooking at a low and steady temperature. A handful of our developers used Arduino to monitor their BBQ pits. One utilized the monitoring and alerting API built into InfluxDB and assigned statuses based on calculating the difference in temperature between five-minute averages. Anything below 0.02 degrees every five minutes signaled that the meat was stalling. This change produced a "warn" status, triggering an alert notification sent directly to the developer's phone. He could then wrap the meat and contain the heat, ultimately resulting in deliciously cooked meat. Once again, you could use the Arduino, Seeduino, or Raspberry Pi to monitor your BBQ pit and make you a true smoke master.

3. Aquarium monitoring

Another Influxer set up a new tropical fish tank and monitored the temperature and filter for alerts against any irregularities. He used a temperature sensor and a flowmeter to gather time series data and write it to InfluxDB. The whole project used InfluxDB, Telegraf, Grafana, and a Raspberry Pi to automate and visualize the metrics collected from the aquarium.

4. Raspberry Pi monitoring

Templates are prepackaged InfluxDB configurations. The Raspberry Pi template enables you to monitor your Raspberry Pi Linux system to collect metrics, including:

  • Diskio input plugin
  • Mem input plugin
  • Net input plugin
  • Processes input plugin
  • Swap input plugin
  • System input plugin

These metrics quickly and easily assure users that their Raspberry Pi device is operating as expected.

5. Home brewing monitoring

Another Influxer used Raspberry Pi to monitor the fermentation process of his home-brewed beer to ensure the highest quality.

[Image: Anais Dotis-Georgiou, CC BY-SA 4.0]


He pointed a webcam at a digital thermometer to read the temperature of his beer, then deployed optical character recognition (OCR) to read the values. Next, he used a Raspberry Pi with the open source collection agent Telegraf to send the data to InfluxDB. He also configured alerts that informed him when he needed to tend to the fermentation process and adjust the temperature of his beer-in-the-making. On a related note, I also used Telegraf to make forecasts about those temperature recordings.

Pi made more versatile with InfluxDB

Users can create exciting projects with an Arduino or Raspberry Pi and InfluxDB for use across various industries such as IoT, home automation, and data science.

These examples showcase the creativity developers display when given the right tools and an "open sandbox" to play in. With so many possibilities, the only thing limiting the fun open source tools you can create with time series data and Raspberry Pi is your own imagination.

Grow plants while having a barbecue with your homebrew beer. Do it all thanks to open hardware and InfluxDB.



5 reasons virtual machines still matter

Thu, 04/13/2023 - 15:00
5 reasons virtual machines still matter alansmithee Thu, 04/13/2023 - 03:00

Virtualization used to be a big deal. For a while, it was the primary way to run services in a "sandbox" environment. IT departments used virtual machines by the hundreds. And then containers happened, doing much of what a virtual machine could do with a fraction of the resources required. While container technology made virtual machines seem cumbersome, it didn't make them entirely redundant. In fact, virtualization is as useful today as ever, and here are five reasons why.

1. Distro hopping

"Distro hopping" is the term often used to describe the inability (willfully or otherwise) to choose a single distribution. Some people just love to trying a different Linux distribution every time one is released. And why not? Linux distributions are little works of art, a labor of love created by teams of passionate people from all over the world. It's fun to see what people put together.

Part of the experience of a fresh distro is the graphical install process, the very first login, and the big desktop reveal. How fast is the install? What desktop does it use? What's the wallpaper look like? How easy was it to understand and navigate? Most importantly, could this be the one even your friends still using Windows or macOS could install and love?

You can't replicate that in a container. A container is, by design, a partial image of an operating system that assumes it's already been installed. That's a big advantage of containers for the busy sysadmin, but if you're after the desktop user experience, then a virtual machine is what you want.

2. Development

Programming is hard to get right, and it's even harder to get right when you develop an application for more than just one platform. Aside from Java, few programming languages are able to target all the platforms out there. An application that launches and runs fine on Linux might render an error on Windows, and may not launch at all on macOS.

Tools like Vagrant and libvirt ensure that you can run a specific version of a specific operating system on demand. You get a quick environment that's easy to replicate across several developers. This is great for testing code, confirming compatibility, and trying out new versions of a library or toolkit.
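
If you script against libvirt directly, its Python bindings make it easy to check what's defined on a host. A minimal sketch, assuming the libvirt-python package is installed and that qemu:///system is the right connection URI for your machine:

import libvirt

# Read-only connection to the local system hypervisor; the URI is an assumption.
conn = libvirt.openReadOnly("qemu:///system")

for domain in conn.listAllDomains():
    state = "running" if domain.isActive() else "shut off"
    print(f"{domain.name()}: {state}")

conn.close()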

3. Support and documentation

Bug reports can be very specific, and sometimes all it takes is a look at GDB to determine the cause of a problem. Other times, however, a bug report comes in that's not about the code but the process itself. For instance, a user might complain about the layout of an application, or the way an application interacts with some element on the desktop, or how to accomplish a complex configuration. In cases like those, you might need to try to replicate the user's workflow, and sometimes that requires running exactly what the user is running.

I've done this several times in the past when I've needed to describe to a user the exact steps to take, on their distribution, to achieve a goal. General statements weren't enough. I installed a fresh copy of the distribution my users were running, and documented the steps, complete with screenshots. If they couldn't get it to work, then I was confident that the problem wasn't their setup.

4. Architecture

Containers use your operating system's CPU. A virtual machine can use an emulated CPU. If there's software you need to run that wasn't compiled for the CPU in your machine, then you have to run it on an emulated CPU, and that means a virtual machine.

5. Some other OS

Containers are Linux. When you run a container, you're running Linux in a container, regardless of whether you're running that container on Windows or Mac.

To run Windows, whether it's for support, legacy services, or development, you have to virtualize it. Apple continues to enforce, to put it politely, a "complex" legal requirement around virtualizing macOS, but when it is permitted it happens in a virtual machine. Or maybe you're on Windows or macOS but want to run a Linux distribution with a desktop as a way to get comfortable with a new OS. Virtual machines are a pragmatic and easy way to have a spare computer without actually having a spare computer.

Linux virtualization eBook

Virtual machines are an easy way to gain access to a software-defined computer for everyday tasks. And there are a lot of options for how to interact with your virtual machines, including GNOME Boxes, Vagrant, VirtualBox, or even QEMU directly. Whether you're new to virtualization or you've used it in the past, download our complimentary eBook for a tour of all the latest options, specialized configurations, and ideas on how you might use a fleet of virtual machines!

Containers are a vital technology for modern infrastructure, but virtual machines still have their place.



Run Git on a mainframe

Thu, 04/13/2023 - 15:00
Run Git on a mainframe pleia2 Thu, 04/13/2023 - 03:00

One of the fascinating things I keep encountering in my journey to learn everything I can about the mainframe world is how my expertise in Linux distributed systems and open source tooling carries over into this realm. I recently discovered zigi, an independently developed open source (GPLv3+) Git interface for IBM z/OS ISPF (Interactive System Productivity Facility).

I had been aware of zigi for some time, but it wasn't until I joined a recent z/OS Open Tools guild call that I could soak in the demo that zigi contributor Lionel B. Dyck gave. That led me to a call with zigi founder Henri Kuiper, who explained that zigi was an answer to a specific pain point of his. That sounds familiar! I could definitely appreciate the story of an open source project born from frustration.

I need to explain ISPF for you to have a good understanding of what zigi provides.

ISPF

Since the 1980s, ISPF has been a common interface for interacting with IBM mainframes. Mainframe professionals still use it today with the modern versions of IBM z/OS and z/VM. The text-based interface, accessed with a 3270 terminal, features a series of menus, panels, and even an editor; those proficient in the interface are incredibly fast with it.

Image by: Elizabeth K. Joseph, CC BY-SA 4.0

Users like Henri discovered while using ISPF that their organizations were rapidly adopting technologies familiar to the new generation of technologists. These tools, like Git, were sometimes difficult to integrate into the ISPF interface.

Enter zigi.

How zigi helps

The integration of zigi allows for Git commands to be built into the ISPF interface. That means Git command navigation is natural for ISPF users, who can simplify their working tech stack without adding on yet another tool with yet another interface to learn.

Also, note that z/OS works a bit differently from what Linux- or Windows-focused administrators and developers are used to. Instead of having a filesystem (such as EXT4, XFS, FAT, and so on) with a file and directory hierarchy, z/OS uses the concept of datasets. Git only understands files, so zigi must do some work here. It provides a sort of translation so that the remote Git repositories you may ultimately write to are still files, but they appear as a series of datasets when used inside the z/OS environment. Zigi handles this seamlessly for the user, which is an important distinction and a key part of what zigi does.

I'm excited about what this means for developers working with ISPF, but it's also great for systems folks in the organization looking to integrate with their mainframe counterparts. With today's tooling, you can bring mainframe development into your CI system. That all starts with making sure you have access to the revision control system your mainframe developers work with. So hold on tight, and get ready for some green screens.

(I'm joking, it's not all green, and the zigi home screen is quite delightful!)

Image by: Elizabeth K. Joseph, CC BY-SA 4.0

Use zigi

To get started, install a Git client for z/OS (for example, from Rocket Software or the z/OS Open Tools team). Then pull in the zginstall.rex installation file from the zigi Git repository. That's it!

For more detail, visit the official zigi documentation.

Next, create a repository or add a remote repository that's already managed by zigi from somewhere like GitLab or GitHub. These actions begin with the create and remote commands, respectively. The linked zigi documentation walks you through the rest.

If you're not sure whether a repository is zigi-managed, look for a populated .zigi folder at the top level. Being a zigi-managed repository is important because of how zigi works internally to manage the translation between the files, folders, and datasets.

A loaded repository looks similar to this screenshot of zigi's own repository loaded up in zigi (how's that for inception?):

Image by: Elizabeth K. Joseph, CC BY-SA 4.0

Want to start exploring the repository? No problem. Say you want to see what's under ZIGI.EXEC in this repository. Use the interface to navigate to and select the desired partitioned dataset. In the screenshot above, that is IBMUSER.ZIGI317.ZIGI.EXEC. You're taken to a screen that looks similar to this:

Image by: Elizabeth K. Joseph, CC BY-SA 4.0

Now you can get to work. It's valuable to look at what actions zigi supports on your repository from within the interface. Here is a command list:

Image by: Elizabeth K. Joseph, CC BY-SA 4.0


For anyone who has used Git before, a lot of this should look very familiar, even if the UI differs from what you expect.

To seasoned ISPF users, this screen is familiar from the other direction. You may be learning Git, but at least you're used to how the options are presented in the UI.

Wrap up

As you can see, zigi already implements many of the basic, and not so basic, commands you need to work on your repository. And with zigi being an actively maintained project with several contributors, that support is growing.

What I ultimately love most about zigi is how it shows the ubiquity of Git these days. In the realm of mainframes, I still encounter many proprietary revision control systems, but that pool is shrinking. As organizations move to consolidate their codebases and even bring different operating systems into their CI pool, tools like zigi help teams make that transition and support a streamlined development process for everyone.

The zigi project is always looking for new contributors, including those who can bring unique insight and talent to the effort, so be sure to check out zigi.rocks to learn more.

The zigi application helps translate files to datasets and a command-line interface to ISPF.



7 open source modules to make your website accessible

Wed, 04/12/2023 - 15:00
7 open source modules to make your website accessible neerajskydiver Wed, 04/12/2023 - 03:00

As website accessibility continues to be a growing concern, website owners and developers need to ensure that their websites comply with the Americans with Disabilities Act (ADA). Drupal, a popular open source content management system (CMS), offers various tools and modules to ensure your website is accessible to all users, regardless of their abilities. This article discusses the importance of website accessibility, the basic requirements of ADA compliance, and how Drupal can help you achieve compliance.

Why website accessibility is important

Website accessibility is important for several reasons. First, it ensures that people with disabilities can access and use your website. This includes people with visual, auditory, physical, and cognitive disabilities. By making your website accessible, you are not only complying with the law but also providing a better experience for all users.

In addition, website accessibility can improve your website's search engine optimization (SEO) and increase your website's usability. Search engines prioritize websites that are accessible, and users are more likely to stay on your website longer and engage with your content if it is easy to use.

Basic requirements for ADA compliance

The ADA requires all websites and digital content to be accessible to people with disabilities. Some of the basic requirements for ADA compliance include the following:

  • Providing alternative text descriptions for all images and non-text content.
  • Ensuring that all videos have captions and transcripts.
  • Using color contrast and other design elements to make your website more readable (see the contrast-ratio sketch after this list).
  • Providing alternative ways to access content, such as audio descriptions and keyboard navigation.
  • Ensuring that your website is compatible with assistive technologies, such as screen readers and braille displays.
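
Color contrast in particular is easy to check programmatically. Here is a minimal sketch of the WCAG 2.x contrast-ratio calculation in Python; the two example colors are arbitrary, and for real audits you would lean on a dedicated tool such as the Color Contrast Analyzer module mentioned below:

def relative_luminance(hex_color):
    # WCAG 2.x relative luminance from an "RRGGBB" hex string.
    channels = [int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in channels]
    return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2]

def contrast_ratio(color_a, color_b):
    lighter, darker = sorted((relative_luminance(color_a), relative_luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Dark gray text on white: WCAG AA asks for at least 4.5:1 for normal body text.
print(round(contrast_ratio("333333", "ffffff"), 2))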

How Drupal can help you achieve compliance

Drupal offers various tools and modules to help you achieve website accessibility and ADA compliance. Here are the seven I find most useful:

  1. Accessibility Checker: Scans your website for common accessibility issues and suggests improvements.

  2. Color Contrast Analyzer: Analyzes the color contrast of your website and makes changes to improve readability.

  3. Block ARIA Landmark Roles: Enhances WAI-ARIA use in your site's markup. This module allows you to assign ARIA landmark roles and/or ARIA labels directly to every block in your site's layout through the block configuration form.

  4. Civic Accessibility Toolbar: Provides accessibility tools to help users with disabilities navigate and interact with websites.

  5. htmLawed: Ensures compliance with web standards and admin policies by limiting and purifying HTML.

  6. Text Resize: Enhances web accessibility by displaying an adjustable text size changer or a zoom function on the page.

  7. Automatic Alternative Text: Leverages the Microsoft Azure Cognitive Services API to create an image caption when there's no Alternative Text (alt text).

Wrap up

Ensuring website accessibility and ADA compliance is important for legal reasons and for providing a better user experience for all users. From the Accessibility Checker to the Color Contrast Analyzer, Drupal offers various ways to ensure your website is ADA-compliant. Using Drupal's tools and modules, you can make your website accessible to everyone, regardless of their abilities, and offer a better user experience.

Use these Drupal modules to make your website accessible to everyone.


Know your inner quantum with open leadership

Wed, 04/12/2023 - 15:00
Know your inner quantum with open leadership Ashish Lotangane Wed, 04/12/2023 - 03:00

This isn't an article about quantum computing. I was watching Ant-Man and The Wasp: Quantumania and this line of dialogue got me thinking:

"If there is one thing life has taught me, it's that there's always room to grow."

It made me think about quantum concepts. I've always believed in continuous improvement and trusted that there is always room to grow. I believe you must have a passion for personal growth, contribute to your own well-being, and look for opportunities that are helpful to your organization.

The word "quantum" is mesmerizing, strange, and scintillating. It means the "smallest discrete unit of a phenomenon." For me, it's about a mindset. It's the idea that thinking has power and that thinking power has the potential to affect reality. Quantum computing and quantum theories are moving towards becoming reality. If you take the time to learn from it, you can understand your inner quantum.

In this supersonic age of competition, technology is evolving every day. It is going through a revolution, and it is changing the paradigms. There are numerous questions and uncertainties. Leaders are expected to play a bigger role in these dynamics. In addition, when it comes to oneself, there are eternal questions that pop up. Who am I? What is my purpose as a human being? Why do I exist? How can I make a difference?

I do not want to sound philosophical here, nor am I an expert on quantum theories. However, as a transformation leader, I would like to share my thoughts around knowing one's inner quantum self and how it can lead to a better version of ourselves. In addition, I explain how it can contribute to an organization's success, behavior, and culture.

Leadership starts with leading yourself before leading others. Being a good leader requires being a good person and being authentic. Leaders should have a desire to create a better future. A leader should be a person of integrity who can inspire others, show the way forward, take on challenges, and rise to the occasion. Below are key characteristics that can guide your thinking toward the inner quantum phenomenon. They can help your self-consciousness unlock your inner potential for the greater good. It is all connected!

Image by: Ashish Lotangane, CC BY-SA 4.0

Mind and intuition

Your mind is very powerful. Its power is used to find answers to eternal questions. Leaders listen to the problem and, with an open mind, form creative insights. After careful thinking, they provide a solution. Yes, it's a slow and long process, but it is a much-needed one. Moreover, that is how they build intuition. This happens due to deeper mind power. Ultimately, it creates intrinsic energy.

Awareness

The more you know about yourself, the more you become aware of your surroundings. Going deeper into self-knowledge helps to focus more on progress. A single person with clarity of conscience and the willingness to speak up can make a huge difference.

Awareness leads to desire, knowledge, and understanding. A crystal clear awareness with purpose and positive vibes creates an influential aura with empathy. You should surround yourself with people of your community that share the same interests and desires.

Alignment

Contributing to a greater good is a deep and fundamental human need. Alignment is key to keeping up with vision, values, strategy, and change. If you want the right alignment, start by asking "why" questions.

Everyone is uniquely different and diverse. Bringing a community together and having a common goal leads to empowerment, autonomy, and a sense of purpose. It takes one person to change the conscience of an organization. When a leader skillfully brings a voice and a vision, others will follow.

Honesty, kindness, and humility

The social media era is changing everything. It is changing how you react to situations, how you think, and how you interact with others. Being honest, kind, and humble in day-to-day work helps you react to changing dynamics. It creates positivity. To bring more positivity, train yourself to expect positive moments to occur. It sets the tone for engagement, collaboration, and involvement. Always follow this logic: add your pluses (positives) and subtract your minuses (negatives)!

Gratitude

A simple "thank you" brings a smile to people's faces and creates a special moment. The quality of being thankful and showing appreciation embraces the purpose of fulfillment. Practicing gratitude boosts relationships and partnerships. It focuses on positive aspects of day-to-day life.

Studies show that leaders who practice gratitude as their attitude are more effective, influential, and respected. It builds goodwill and a culture of innovation with a high degree of collaboration.

Energy

Turn up your enthusiasm and energy. The more energy you put out, the more you get back. Focus your energy on what you like and what you want to do. Knowing what motivates and demotivates you activates your energy. This is a driving factor for increased speed and agility. If you have positive energy, others react in a good way to that energy.

Learning from quantum concepts can lead to characteristics that future leaders need. It provides everyone with an opportunity to reflect, fine-tune, and handle complex problem-solving. It enlightens conscious and subconscious thinking. Finding new avenues for growth and building on existing strengths is key for leaders. You can bring certain things into reality if you have positive energy.

Quantum concepts can guide open leadership.



5 open source principles that help organizational governance

Tue, 04/11/2023 - 15:00
5 open source principles that help organizational governance johnpicozzi Tue, 04/11/2023 - 03:00

Throughout my career, I have been fortunate to work with many organizations of various sizes on a variety of projects. All of these projects had open source software at their core, and most contributed what they could back to the open source community. I recently worked on a greenfield project using open source software within a large organization. After the MVP phase of the project, the organization's leadership was interested in learning what led to project success and how they could apply it to other teams across the organization. Upon reflection, I saw similarities between our team's way of working and open source communities and development. The following are some insights into how open source principles can help organizations save money, reduce technical debt, and bust internal silos.

1. Better spent budgets

I recently delivered a talk on Headless Omni-Channel Web Platforms at the Florida Drupal Camp. One of the key benefits highlighted in the presentation was how to save money by implementing this web platform. The idea comes from open source software development. Different groups within an organization can use their budget to contribute features or functionality to a core software or platform. They can also team up with other groups to pool dollars for more complex features. When the feature development is done, it's added to the core software or platform and available for all. Using this open source principle can provide mutual benefits to groups within an organization. Allowing the sharing of features and functionality and collectively benefiting from each other's spending can improve the software or platform.

Another aspect of this approach that saves money and allows for continuous improvement is the ability to test and develop a feature once and reuse it repeatedly. We frequently see this when creating a web platform that uses a component-based design system as a starting point. Users of the platform can reuse components or features developed by other users. Often, these have already been tested in numerous ways, such as user experience, accessibility, and even security testing.

This simple idea faces opposition in many organizations as individual groups guard and protect their budgets. Groups don't want to give up their budgets to support the core software or platform. In some cases, differences in priority and opinion add to the siloing in many institutions.

2. Reduce technical debt

Many organizations strive to reduce technical debt. Implementing a comprehensive core software or platform and using open source principles can help reduce technical debt. This happens by allowing development teams to think fully about how a feature impacts not just the group building it but the wider organization. This, plus collaboration with other groups within an organization, can help reduce the need for rebuilding or adding functionality in the future.

Sometimes organizations struggle with this type of collaboration and thinking because of internal competitiveness. Some companies foster a culture where being the first to build a feature or come up with an idea is rewarded. This can lead to groups not working together or sharing ideas, fostering silos within the organization and greatly hindering innovation.

3. Faster time to market

One of the terms I hear frequently is "Faster time to market." Everyone wants to get their thing out quicker and easier. This is often a benefit of a core software or platform, as internal groups can reuse existing, tested, and proven features and functionality instead of building their own from scratch. If your group is starting a project, and it could start from 80% complete instead of 0% complete, would you do it? I'm thinking yes. Now pile on the superhero feeling of adding needed functionality for other users. It's a win-win!

4. Release excitement

Another great open source principle that can help your organization is a release schedule that builds excitement. When your organization implements a core software or platform, users are invested in when updates come out. A release schedule and roadmap can communicate this to them. These two tools can help users to get excited about new features and plan their own roadmaps accordingly. It also helps build appreciation for other teams and pride for the teams building new features. This can unify an organization and allow for an organizational sense of teamwork and accomplishment while providing structure and a plan for the future.

5. A core team and governance

I have found you need two key items to overcome the above-noted obstacles and successfully apply open source principles within your organization: a core team and solid organizational governance. A core team will allow one group to maintain and manage your organization's core software or platform. It will support the solution and ensure new features and functionality are added wisely. This team can help reduce the cost to internal teams and inform groups of roadmap features. The core team needs to be supported by strong organizational governance. This governance will provide groups within the organization with a common direction and the organizational support to succeed. This organizational governance can mimic open source governance and principles in several ways. The most basic and highest-level principle is community and the idea of working together toward a common goal.

Open leadership

Adopting organizational governance based on open source principles can lead your organization to reduce cost, lower technical debt, increase team collaboration, foster innovation, and, above all, propel your organization forward together.




7 tips to make the most of your next tech conference

Tue, 04/11/2023 - 15:00
7 tips to make the most of your next tech conference gkamathe Tue, 04/11/2023 - 03:00

I recently had the opportunity to attend two technical conferences in February 2023, both geared toward open source software. I was a presenter at Config Management Camp in Ghent, Belgium, and an attendee at FOSDEM in Brussels, Belgium. This article highlights my experiences at both conferences and offers some tips on how to make the most of such an opportunity whenever it arises.

Have a purpose

Different people attend conferences for different reasons. Some people are presenters of a certain topic or area of knowledge or interest. Other people are attendees that want to gain knowledge from these talks and to network with other like-minded individuals. There are also attendees that are representing their companies. You most likely fall into one of these categories. Knowing what you wish to gain out of a conference is the first step to a successful conference visit. If you are a presenter, it means being proficient in whatever it is you are presenting. If you are an attendee, you should have a sense of what you want out of the conference.

Know the venue and schedule

FOSDEM is a huge conference with at least six thousand people attending it in a span of two days. Not surprisingly, for a conference catering to such an audience, a number of talks happen at the same time. It is next to impossible to attend all talks that are of interest to you. Usually, such large conferences are hosted at a spacious venue like a university or a conference center. Because the area is so huge, the talks are spread across the venue based on specific topics. The talks have a fixed schedule, so you might have to move quickly from one side of the venue to another. The map of the venue is easily available on the venue's website. It makes sense to arrive at the venue a bit early on the first day and familiarize yourself with it. This helps save time when you're rushing out at the end of one talk to get to the next.

Take notes

It's one thing to focus and enjoy the talk while it's happening live. However, your mind can only retain so much. Sure, folks try to use their phones to the fullest by taking pictures of the slides that are being presented (along with the speaker). This is good if you wish to quickly update on social media about the talk that you are attending. However, it's not very effective for note-taking. Usually, the material on the slides is minimal. But if the speaker explains something in depth on the stage, you might miss out on the explanation. I recommend carrying a notepad and a pen with you at all times. You can even bring your laptop for note-taking. The idea is to make quick one-liner notes about interesting tidbits during the talk so you can revisit them later. You can always ask the speaker questions toward the end.

Network and collaborate

A conference is probably the best place to hang out with like-minded individuals. They are interested in the same topics as you. It's best to make use of this time to understand what work is being done on the topic of interest, see how folks solve interesting problems, how they approach things, and get a pulse of the industry in general. You are at the conference for a limited time, so make sure to get introduced to folks working on things that matter to you. This is a good opportunity to gather information for communicating with them later. You can exchange personal information such as email, Mastodon, LinkedIn, and so on.

Make time for booths and swag

Most technical conferences have booths from different companies or upstream projects wanting to market their products and services. To attract more walk-ins at the booths, a variety of swag items are often kept as an attraction available for free (in most cases). These goodies are usually stickers, cool water bottles, fun gadgets, soft toys, pens, and so on. Be sure to collect them so you have something for your co-workers and friends back home. Visiting booths shouldn't be just about the swag. You should use this opportunity to talk to people from different companies (even if they are competitors) to understand what they have to offer. Who knows, you might get knowledge of future projects!

Relax

Traveling for a conference shouldn't be just about work. It is also about taking a break from your usual busy schedule and relaxing. Chances are you are traveling to a different country or city that you haven't visited before. The conference, talks, and technical discussions are all important. However, they are only part of the whole experience. The other half of the experience is the travel which opens one up to another country, its culture, its people, the food, the language, and a different way of life. Take a step back and enjoy all these experiences and make lifelong memories. I recommend finding some famous landmarks to visit at the place of your stay. You should also try the local cuisine, or you can just chat with the locals. In the end, you will discover another part of yourself that you thought never existed.

Write about your experience

Once you are back from the conference, don't just forget about it and go back to your regular schedule as if nothing happened. Use this opportunity to write about your experiences, and share which talks you found the best and why. What are the key takeaways from the conference and the travel? You should document what you learned. You should reach out to the people you met at the conference. You can also follow social media posts on things that you might have missed out on.

Conclusion

Conferences are one of the perks of the tech industry. I suggest everyone go to one at some point during their career. I hope this article helped shed some light on how to make the most of your next technical conference.

Conferences are one of the perks of the tech industry. Here's how to make the most of the ones you attend.



Remove the background from an image with this Linux command

Mon, 04/10/2023 - 15:00
Remove the background from an image with this Linux command Don Watkins Mon, 04/10/2023 - 03:00

You have a great picture of yourself and want to use it for your social media profile, but the background is distracting. Another picture has the perfect background for your profile photo. How do you combine the two? Some smartphone apps do this kind of photo manipulation, but they're too expensive or riddled with adware. And they aren't open source. Rembg is up to the challenge!

Rembg is written in Python, so install Python 3 on your computer. Most Linux distributions include Python 3 by default. You can check your version with this simple command:

$ python3 --version

Rembg requires at least Python 3.7 and no greater than Python 3.11. In my case, I have Python 3.10.6 installed.

Install Rembg on Linux

I created a directory called PythonCoding on my Linux laptop and then made a Python virtual environment:

$ python3 -m venv /home/don/PythonCoding

Next, I installed rembg using pip:

$ python3 -m pip install rembg

Combine images

Time to work some magic. First, I chose a picture taken at All Things Open in 2019.

Image by: Don Watkins, CC BY-SA 4.0

I renamed the image to a shorter filename for convenience, then ran the following rembg command:

$ rembg i dgw_ato.jpeg dgw_noback.jpg

The first time you run rembg, it downloads an open source pattern recognition model. This can be over 100 MB and rembg saves it in your user directory as ~/.u2net/u2net.onnx. The model is the U-2-Net and uses the Apache 2.0 license. For more information about the pattern recognition models (including how to train your own), read the Rembg documentation.

It created my new photo without the background in about ten seconds. I have a Ryzen 7 with 16 GB of RAM. Your experience may vary depending on your hardware.

Image by: Don Watkins, CC BY-SA 4.0


I have used GIMP to remove backgrounds in the past, but rembg does it quicker and more completely than I have experienced with GIMP.

That's all there is to removing a background. What about adding a new one?

Add a new background

Next, I want to add a new background to the picture. There are different ways to do that. You can, for instance, combine images with ImageMagick, but getting the frame size right can be complex. The easiest way is to use GIMP or Krita.

I used GIMP. First, open the newly created image (dgw_noback.jpg in my case). Now go to the File menu and select Open as layers. Choose a different image for the background. This image opens as an overlay above the existing photo.

I wanted to move the new background below my portrait. On the right of the GIMP window are two thumbnails, one for each image layer. The background layer is on top. I dragged the background layer beneath my portrait image, and here's the result:

Image by: Don Watkins, CC BY-SA 4.0

That's a much nicer setting for my profile picture!

Try Rembg

Rembg has three subcommands you can review in the --help menu:

$ rembg --help

They are:

  • rembg i for files
  • rembg p for folders
  • rembg s for HTTP server
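
Because Rembg is also a Python library, you can skip the CLI and call it from your own scripts. A minimal sketch, mirroring the command above and assuming the input file is in the current directory:

from rembg import remove

# Read the original image, strip the background, and save the result as a PNG
# (PNG preserves the transparent background).
with open("dgw_ato.jpeg", "rb") as source:
    original = source.read()

with open("dgw_noback.png", "wb") as target:
    target.write(remove(original))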

Rembg is released with an MIT license. Try it the next time you need a background removed from an image.

The power of Python makes image editing easy on Linux.



A search engine for Creative Commons

Sat, 04/08/2023 - 15:00
A search engine for Creative Commons Don Watkins Sat, 04/08/2023 - 03:00

Are you looking for content that is openly licensed that you can reuse? Then you might be interested in Openverse. Openverse is an innovative tool that searches over 300 million pictures from an aggregation of different databases. It goes beyond just searching for an image by giving users access to tags created by machine learning models and one-click attribution. With so many visuals to explore, users can find the perfect image to make their project more engaging. The content comes from a variety of sources, including the Smithsonian, Cleveland Museum of Art, NASA, and the New York Public Library.

In 2019, the CC Search tool provided by the Creative Commons site was adopted by the WordPress project. Openverse is the new incarnation of CC Search.

Currently, Openverse indexes only images and audio, with video search available from external sources. Plans are in place to add additional types of open-access works, such as texts and 3D models. They have one common goal: Grant access to the estimated 2.5 billion Creative Commons-licensed and public domain works available online. All the code utilized is open source.

Please be aware that Openverse does not guarantee that the visuals have been correctly provided with a Creative Commons license or that the attribution and any other related licensing information collected are precise and complete. To be safe, please double-check the copyright status and attribution information before reusing the material. To find out more, please read the terms of use in Openverse.

Openverse search

Using Openverse is easy. Enter your search term in the Search for Content field and press Enter. I did a simple search for "Niagara Falls" and received over 10,000 results for images and two results for audio. On the far right of the display is a checkbox to filter for content that can be used commercially, and another for content that allows modification and adaptation.

In addition, another set of checkboxes allows you to specify which Creative Commons license you want to use or reuse, including CC0 (public domain), CC-BY, CC-BY-SA, all the way to CC-BY-NC-ND.
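
If you'd rather script your searches, Openverse also offers a public REST API. The endpoint and parameters in this sketch are assumptions based on the v1 API and the requests library, so check the current Openverse API documentation before relying on them:

import requests

# Assumed v1 image-search endpoint; verify against the Openverse API docs.
response = requests.get(
    "https://api.openverse.org/v1/images/",
    params={"q": "Niagara Falls", "license_type": "commercial"},
    timeout=10,
)
response.raise_for_status()

for result in response.json().get("results", []):
    print(result.get("title"), "|", result.get("license"), "|", result.get("url"))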

Credit where credit is due

When using openly licensed content, it's important to make sure that you provide proper attribution and comply with the license terms that have been stipulated by the original creator of the content. For more information about Creative Commons licenses, consult the Creative Commons website.

Openverse is an open source project, which means you can host your own copy or contribute to the project. There's a contributor guide for folks who want to get involved, and the project welcomes proposals for new features and functionality.

Find images and audio with open licenses.

Image by: Mennonite Church USA Archives. Modified by Opensource.com. CC BY-SA 4.0


5 best practices for PatternFly, an open source design system

Fri, 04/07/2023 - 15:00
5 best practices for PatternFly, an open source design system abigaeljamie Fri, 04/07/2023 - 03:00

Have you ever admired the facets of a gemstone? The angles and slants are a thing of beauty. You can see that a multi-faceted gemstone shines brighter than a flat one. You may also see this kind of beauty when analyzing a multi-faceted design system. A design system is a collection of guidelines, standards, and resources for creating consistent and unified user interfaces (UI). Like the facets of a diamond, an open source design system rich with diverse contributions and community engagement ultimately leads to better product experiences.

The PatternFly project is an open source design system for Red Hat products. But open source doesn't end with PatternFly's code. Behind PatternFly is a team of people who create designs completely in the open. From designers and developers to researchers and writers, we work together to operate as an open source community.

Our secret? We don't have one — we work in the open, remember? However, we use these five best practices. I'll share them here so that you too can power your own design system with open source.

1. Contribute collectively

We have a core PatternFly design team to design, maintain, and evolve the design system. But we encourage and welcome contributions from everyone. If you have a passion for collaboration and a knack for user experience (UX), PatternFly wants to hear from you.

2. Build community

Nothing created in a silo makes its way to PatternFly. We believe design is better in the open. This is why we include the community in all updates, changes, and additions. We collect feedback on contributions from people across design and development so that everyone has a say in what gets implemented. We also seek input and collaboration from people across multiple design disciplines. This is done to break free from any bias or assumption. This kind of open design makes our design system stronger. It also strengthens our blossoming community of people who engage with or contribute to PatternFly (we lovingly refer to them as Flyers).

3. Loop in everyone

If you find that brainstorming ideas with others results in solutions better than any one person would have dreamed of, then you already think like a Flyer. We have regular design meetings where contributors present their ideas and discuss design approaches in a group setting. This enables us to keep our ideas collaborative and consider designs from all angles. Additionally, we host monthly community meetings so that we can connect with Flyers from across the globe and share the latest updates. You can catch all of our past meeting recordings on our PatternFly YouTube channel.

4. Listen to users

As a community, we aim to have all PatternFly contributions lead to functional and beautiful product experiences across different contexts. To make that a reality, we hold ourselves accountable to break out of our own bubbles and engage with users. We work with UX researchers to test updates, changes, and additions with users — such as visual themes and interactions — to ensure that we're creating designs, resources, and experiences that solve for everyone, not just people like us.

5. Create connections

PatternFly is the thread of consistency through products across Red Hat's portfolio. Everyone has the creative freedom to build what best serves their users. But we work as a team to connect product groups through the design system for a more unified user experience. PatternFly resources are easy to access and open to all. This helps us create connections and squash silos.

Come design in the open with us

Whether you're a team of 1 or 100, or whether your design system is open source or not — there's always room for a little collaboration and community in everything we do. Tell us how things turn out for you by connecting with the PatternFly community. We can't wait to hear from you.

PatternFly is a design system with open code and an open community.

Image by: Photo by UX Store on Unsplash


Make a web-safe color guide with Bash

Thu, 04/06/2023 - 15:00
Make a web-safe color guide with Bash Jim Hall Thu, 04/06/2023 - 03:00

When computer displays had a limited color palette, web designers often used a set of web-safe colors to create websites. While modern websites displaying on newer devices can display many more colors than the original web-safe color palette, I sometimes like to refer to the web-safe colors when I create web pages. This way I know my pages look good anywhere.

You can find web-safe color palettes on the web, but I wanted to have my own copy for easy reference. And you can make one too, using the for loop in Bash.

Bash for loop

The syntax of a for loop in Bash looks like this:

for variable in set ; do statements ; done

As an example, say you want to print all numbers from 1 to 3. You can write a quick for loop on the Bash command line to do that for you:

$ for n in 1 2 3 ; do echo $n ; done
1
2
3

The semicolons are a standard Bash statement separator. They let you write multiple commands on a single line. If you were to include this for loop in a Bash script file, you might instead replace the semicolons with line breaks and write out the for loop like this:

for n in 1 2 3
do
  echo $n
done

I like to include the do on the same line as the for so it's easier for me to read:

for n in 1 2 3 ; do
  echo $n
done

More than one for loop at a time

You can put one loop inside another. That can help you to iterate over several variables, to do more than one thing at a time. Let's say you wanted to print out all combinations of the letters A, B, and C with the numbers 1, 2, and 3. You can do that with two for loops in Bash, like this:

#!/bin/bash

for number in 1 2 3 ; do
  for letter in A B C ; do
    echo $letter$number
  done
done

If you put these lines in a Bash script file called for.bash and run it, you see nine lines showing the combinations of all the letters paired with each of the numbers:

$ bash for.bash
A1
B1
C1
A2
B2
C2
A3
B3
C3

Looping through the web-safe colors

The web-safe colors are all colors from hexadecimal color #000 (black, where the red, green, and blue values are all zero) to #fff (white, where the red, green, and blue colors are all at their full intensities), stepping through each hexadecimal value as 0, 3, 6, 9, c, and f.

You can generate a list of all combinations of the web-safe colors using three for loops in Bash, where the loops iterate over the red, green, and blue values.

#!/bin/bash

for r in 0 3 6 9 c f ; do
  for g in 0 3 6 9 c f ; do
    for b in 0 3 6 9 c f ; do
      echo "#$r$g$b"
    done
  done
done

If you save this in a new Bash script called websafe.bash and run it, you see an iteration of all the web safe colors as hexadecimal values:

$ bash websafe.bash | head
#000
#003
#006
#009
#00c
#00f
#030
#033
#036
#039

To make an HTML page that you can use as a reference for web-safe colors, you need to make each entry a separate HTML element. Put each color in a <div> element and set its background to the web-safe color. To make the hexadecimal value easier to read, put it inside a separate <code> element. Update the Bash script to look like this:

#!/bin/bash

for r in 0 3 6 9 c f ; do
  for g in 0 3 6 9 c f ; do
    for b in 0 3 6 9 c f ; do
      echo "<div style='background-color: #$r$g$b'><code>#$r$g$b</code></div>"
    done
  done
done

When you run the new Bash script and save the results to an HTML file, you can view the output in a web browser to all the web-safe colors:

$ bash websafe.bash > websafe.html

Image by: Jim Hall, CC BY-SA 4.0


The web page isn't very nice to look at. The black text on a dark background is impossible to read. I like to apply some HTML styling to ensure the hexadecimal values are displayed with white text on a black background inside the color rectangle. To make the page look really nice, I also use HTML grid styles to arrange the boxes with six per row and some space between each box.

To add this extra styling, you need to include the other HTML elements before and after the for loops. The HTML code at the top defines the styles and the HTML code at the bottom closes all the open HTML tags:

#!/bin/bash

cat<<EOF
<!DOCTYPE html>
<html>
<head>
<title>Web-safe colors</title>
<style>
div { padding-bottom: 1em; }
code { background-color: black; color: white; }
@media only screen and (min-width:600px) {
  body {
    display: grid;
    grid-template-columns: repeat(6,1fr);
    column-gap: 1em;
    row-gap: 1em;
  }
  div { padding-bottom: 3em; }
}
</style>
</head>
<body>
EOF

for r in 0 3 6 9 c f ; do
  for g in 0 3 6 9 c f ; do
    for b in 0 3 6 9 c f ; do
      echo "<div style='background-color: #$r$g$b'><code>#$r$g$b</code></div>"
    done
  done
done

cat<<EOF
</body>
</html>
EOF

This finished Bash script generates a web-safe color guide in HTML. Whenever you need to refer to the web-safe colors, run the script and save the results to an HTML page. Now you can see a representation of the web-safe colors in your browser as an easy-reference guide for your next web project:

$ bash websafe.bash > websafe.html

Image by: Jim Hall, CC BY-SA 4.0

Use the for loop in Bash to create a handy color palette for the web.

Image by: John Morgan on Flickr, CC BY 2.0


How to lead through change with open leadership

Thu, 04/06/2023 - 15:00
How to lead through change with open leadership Ashish Lotangane Thu, 04/06/2023 - 03:00

Change is hard. It often brings discomfort, anxiety, and confusion. Even as an Agile enthusiast, I sometimes feel I'm not welcoming change the way I should.

Change is often hard because the predecessor of change is chaos. Being in chaos is a natural part of the change process and an integral part of evolution. If chaos is handled poorly, it may result in inefficiencies, stress, demotivation, loss of direction, and poor performance. However, it also presents an opportunity to rethink, reorganize, refresh, reboot, experiment, and invent.

Open leadership is critical here. The Open Organization defines open leadership as a mindset and set of behaviors that anyone can learn and practice. Open leaders think and act in service to another person, group, team, or enterprise attempting to accomplish something together.

Open leaders acknowledge change, lead it with a generative-lean-agile mindset, and welcome it with intuition, focus, and enthusiasm.

Optimize to strengthen

Open leadership helps assess and understand the need for and the impact of change. It provides trust, transparency, and alignment with a vision. Open leaders simplify things by optimizing and prioritizing workflows.

In the path to open leadership, keep in mind that it takes time to develop a vision, alignment, and roadmap. Open leaders are optimistic and positive people. They understand their strengths and weaknesses. They make pragmatic decisions, listen to opposing points of view, and facilitate actions based on a set of values, processes, and culture. With team structure and governance, open leaders optimize processes to strengthen the vision.

Engage to leverage

Open leaders utilize feedback with constant adjustments in highly collaborative environments. Open leaders with clarity of conscience and willingness to speak up can make a difference. They believe in experimentation and early adaptation. They know very well that ideas spark innovation and further ignite potential. They understand that innovation is a product of creativity and an engine of change that results from feedback and failures. Transformation, revolution, realignment, and evolution are simply outcomes of this culture.

During change, open leaders invest in employee training and learning, communicate effectively, and provide everyone with opportunities and resources to unlock their potential and thrive. They build trust and demonstrate a high degree of personal integrity. They mold a group of individuals into a loyal and dedicated team.

Empower to excel

Contributing to the greater good is a deep and fundamental human need. Open leaders provide a vision for this where others do not. They bring the power of open culture and values by investing in skill building, taking responsibility, and expressing appreciation for the efforts of others.

The empathy of open leaders plays a critical role in encouraging others to embrace change. They remove obstacles and build community to provide a common understanding and safe environment for all. By decentralizing decision-making, open leaders give more authority to their employees. That empowerment provides the autonomy to excel further.

Give to receive

Whenever I say that one should celebrate not only success but also failures, I see eyebrows raised. I strongly believe leaders should celebrate failures, as long as they take note of the lessons, validate them, and implement a plan of action to address them. Failures, if celebrated rightly, lead to more wins.

Open leaders embrace the culture and characteristics of people, groups, and organizations. They allow people and teams to be themselves, and they understand that the action of giving and receiving gratefully has a powerful impact on partnership. It leads to sharing knowledge and caring about outcomes.

Finally, open leadership is infectious: Open leaders do not create followers, they create more leaders.

[ Learn how open leaders drive organizational agility. Download the eBook. ]

Anyone can learn and use the qualities of open leadership to help their teams through times of transition.



Open source community analysis with actionable insights

Wed, 04/05/2023 - 15:00
Open source community analysis with actionable insights cdolfi Wed, 04/05/2023 - 03:00

Organizations are increasingly adopting open source software development models and open source aspects of organizational culture. As a result, interest in how open source communities succeed is reaching an all-time high.

Until recent years, measuring the success of open source communities was haphazard and anecdotal. Ask someone what makes one community more successful than another, and you will likely get observations such as, "The software is great, so the community is too," or "The people in this community just mesh well." The problem with these evaluations is not that they are necessarily wrong, but that they don't provide information that others can use to reproduce successful results. What works for one community is not necessarily going to work for another.

Research universities, businesses, and other organizations interested in determining what makes open source projects successful have begun to collaborate on finding ways to measure aspects of community in a quantifiable, data-driven way. One of the more prominent efforts is CHAOSS, a Linux Foundation project focused on creating metrics, metrics models, and software to better understand open source community health on a global scale. Unhealthy projects hurt both their communities and the organizations relying on those projects, so identifying measures of robustness isn't just an interesting project. It's critical to the open source ecosystem.

CHAOSS is a great tool for looking at a pressing set of questions. First, how should community health be defined? Second, as metrics begin to take shape, how can we transition from reacting to one-off requests for data-based information about a given community to creating an entire process pipeline, literally and theoretically, for this work? The development of Project Aspen is the culmination of this pipeline, which will ultimately bring community data analysis to everyone.

Collecting community data

In 2017, Harish Pillay created Prospector with the aim of presenting information from core data sources in a graphical dashboard. This resonated with CHAOSS, which had a goal to better understand the health of open source communities. Prospector was donated to CHAOSS in 2017. Project Aspen builds upon that work.

Aspen is backed by a database generated from the Augur Project, a CHAOSS-based project that collects, organizes, and validates the completeness of open source software trace data. With this database, we can store all types of data points around the Git-based repositories from which we collect data, such as pull requests, reviews, and contributors. The data is already collected and cleaned, which, from a data science perspective, is where the most significant time drains occur. The continued data collection allows us to act agilely when questions arise. Over time, we will grow our pipeline to collect data from many other avenues in addition to Git-based repositories, such as Stack Overflow and Reddit.
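To make this concrete, here is a minimal sketch of how issue data from an Augur-style PostgreSQL database might be pulled into a pandas DataFrame for analysis. The connection string, table name, and column names are hypothetical placeholders, not the actual Augur schema; consult the Augur documentation for real queries.

# Minimal sketch: load issue records from an Augur-style PostgreSQL database
# into a pandas DataFrame. Table and column names here are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; substitute your own credentials and host.
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/augur")

query = """
    SELECT repo_id, issue_id, created_at, closed_at
    FROM issues            -- hypothetical table name
    WHERE created_at >= NOW() - INTERVAL '1 year'
"""

issues = pd.read_sql(query, engine, parse_dates=["created_at", "closed_at"])
print(issues.head())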

As Augur regularly collects data on our selected repositories, the data is updated within a week and cleaned. With all the data collection and most preprocessing already completed, we are much better equipped to answer the analysis questions we receive and generate our own questions too. No matter where the questions come from, the same analysis process is necessary.

For every visualization or analysis, community leaders need to consider these questions:

  • What perspective are you looking to gain or give?
  • What question can you directly answer from the data available to you?
  • What assumptions are you making, and what biases might you hold?
  • Who can you work with to get feedback and a different perspective?

Everyone's individual experiences and expertise shape the lens through which they look at a problem. Some people have experience in code review, while others' expertise lies in community management. How can we start comparing community aspects apples to apples instead of apples to oranges? Quantifying what people in different roles in open source look at when examining a project community can address this problem.

Community metrics empower all members to communicate in a common domain and share their unique expertise. Different perspectives lead to further insights, and Project Aspen uses data to make those insights more accessible to the entire community through data visualizations.

Assumptions and analysis

Analysis is a tool for narrative building, not an oracle. Data analysis can help take the ambiguity and bias out of inferences we make, but interpreting data is not simple. A bar chart showing an increase in commits over time is not, by itself, a positive indicator of community health. Nor is a stable or decreasing number always a negative sign. What any chart gives you is more information and areas to explore.

For instance, you could build from a commits-over-time visualization, creating a graph that plots the "depth" of a commit, perhaps defined as the number of line changes. Or you could dive into the specific work of your community to see what these trends actually represent.
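As a rough sketch of that idea, the snippet below estimates per-commit "depth" from a local clone using git log --numstat, under the assumption that depth means lines added plus lines removed and that monthly totals are a useful rollup. Both choices are illustrative, not a definition Project Aspen prescribes.

# Minimal sketch: aggregate commit "depth" (lines added + removed) by month
# from a local git clone. Run inside the repository you want to inspect.
import subprocess
from collections import defaultdict

log = subprocess.run(
    ["git", "log", "--numstat", "--date=short", "--pretty=format:@%H %ad"],
    capture_output=True, text=True, check=True,
).stdout

depth_by_month = defaultdict(int)
current_month = None

for line in log.splitlines():
    if line.startswith("@"):            # commit header: hash and author date
        _, date = line[1:].split(" ", 1)
        current_month = date[:7]        # keep YYYY-MM
    elif line.strip():                  # numstat line: added, removed, path
        added, removed, _ = line.split("\t", 2)
        if added.isdigit() and removed.isdigit():   # skip binary files ("-")
            depth_by_month[current_month] += int(added) + int(removed)

for month in sorted(depth_by_month):
    print(month, depth_by_month[month])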

Comparing an issues-over-time graph (Figure 1) to an issues staleness graph (Figure 2) is a great illustration of why perspective matters. These visualizations reflect the same data but reveal completely different insights. From the issue staleness graph, we can see not only how many issues are open, but how many have been open for various time intervals.

This figure shows that over many months, there's relative consistency in how many issues are opened and closed:

Figure 1: Issues opened and closed over time (Cali Dolfi, CC BY-SA 4.0)

On the other hand, this figure highlights the growing number of issues that have been open for over 30 days:

Figure 2: Issue staleness, showing how long open issues have remained open (Cali Dolfi, CC BY-SA 4.0)

The same data populates each graph, but a fuller picture can only come from seeing both. By adding the perspective of the growth in issue staleness, communities can clearly see that there is a growing backlog of issues and take steps to understand what it means for their community. At that point, they will be well-equipped to devise a strategy and prioritize actions based on both good data and thoughtful analysis.
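A staleness view like Figure 2 can be derived from the same issue data. The sketch below buckets currently open issues by how long they have been open, reusing the hypothetical issues DataFrame from the earlier sketch; the 7/30/90-day boundaries are illustrative and not necessarily what 8Knot uses.

# Minimal sketch: bucket open issues by age to approximate a staleness view.
# Assumes `issues` has datetime columns `created_at` and `closed_at`.
import pandas as pd

open_issues = issues[issues["closed_at"].isna()].copy()

now = pd.Timestamp.now(tz="UTC")
created = open_issues["created_at"]
if created.dt.tz is None:               # normalize naive timestamps to UTC
    created = created.dt.tz_localize("UTC")

age_days = (now - created).dt.days
buckets = pd.cut(
    age_days,
    bins=[0, 7, 30, 90, float("inf")],
    labels=["< 7 days", "7-30 days", "30-90 days", "> 90 days"],
)
print(buckets.value_counts().sort_index())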

Using data wisely

Including multiple points of view also provides much-needed insight and helps guard against false positives and gamification. Economists have a saying: "When a measure becomes a target, it ceases to be a good measure." In other words, measures used to reward performance create an incentive to manipulate measurement. As people learn which measures bring attention, money, or power, open source communities run the risk of encouraging actions taken just to play the system. Using multiple perspectives to define success will keep your metrics meaningful, so they have genuine value in maintaining your community.

To that end, Project Aspen is an exciting tool for building your own knowledge and making better decisions about communities. Whether you want to understand where your community is most vulnerable or the seasonality of activity within the community, having quality data to inform your analysis is essential. To see some of the work being done around community data analysis, please check out our Git repositories or the demo 8Knot app instance.

This article was originally published with Red Hat Research Quarterly and has been republished with the author's permission.

Project Aspen plans to enable quantitative open source community health analysis for all.

Image by: Opensource.com


How I learned the hard way to keep my website updated

Wed, 04/05/2023 - 15:00
How I learned the hard way to keep my website updated dboth Wed, 04/05/2023 - 03:00

A few days ago, I received an email from a reader of one of my books. Among other things, he said that he was having trouble getting to one of the websites I'd referenced in the book. I responded that I would check it out. Usually, something like this is due to a misprinted URL in the referring article or book, or it could be that I'd deleted or changed a page on my website.

That was not the case this time. When I clicked on the link to my website, I was faced with—horror of horrors—an online casino.

I thought this would turn out to be a simple case of a DNS man-in-the-middle attack or something similar. Certainly, nothing would be wrong on my own server.

Finding the problem

I use Google Domains as my registrar. Before doing anything else, I checked to ensure that my IP addresses were correct. They were.
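For readers who want to run the same sanity check from a script, here is a small sketch that confirms a domain still resolves to the server you expect. The domain and expected address below are placeholders, not the author's actual values.

# Minimal sketch: confirm that a domain resolves to the expected server.
import socket

DOMAIN = "example.com"          # placeholder: replace with your own domain
EXPECTED_IP = "203.0.113.10"    # placeholder: the address your server should have

resolved = sorted({addr[4][0] for addr in socket.getaddrinfo(DOMAIN, 80)})
print(f"{DOMAIN} resolves to: {', '.join(resolved)}")

if EXPECTED_IP not in resolved:
    print("WARNING: DNS does not point at the expected server.")
else:
    print("DNS record looks correct; the problem is likely elsewhere.")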

I logged into a remote Linux host that I have access to, and performed a traceroute with MTR (Matt's TraceRoute). That indicated that the route to my host was correct.

This did not look good.

Next, I looked at my httpd.conf and verified that it was correct. I did find a couple of unrelated configuration issues and fixed those, but they didn't affect the problem at hand. I then isolated my network from the internet and tried my website again, using my internal DNS, which works for that. I still got the invasive website. That was proof positive that the problem was an infection of my own server.

None of that took long. I was just working under the assumption that the problem was elsewhere rather than on my own server. Silly me!

I finally looked at my server's WordPress installation. I was hoping that the database hadn't been infected. I could have recovered from anything by wiping it all out and restoring from backups, but I wanted to avoid that if possible. Unfortunately, the html directory of my website had some noticeable "extra" files and one new directory. The html/wp-admin/admin.php file had also been replaced.

I was fortunate to have multiple other websites that weren't infected, so it was easy to compare file dates and sizes. I also keep complete daily backups of my websites in order to recover from problems such as this.
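That comparison can also be automated. The sketch below compares a live WordPress document root against a known-good backup by hashing every file, flagging anything extra or modified. The paths are placeholders; point them at your own html directory and backup copy.

# Minimal sketch: compare a live site directory against a known-good backup
# by hashing every file. Paths below are placeholders.
import hashlib
from pathlib import Path

LIVE = Path("/var/www/html")                 # placeholder: live document root
BACKUP = Path("/backups/html-known-good")    # placeholder: known-good backup

def digests(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

live, backup = digests(LIVE), digests(BACKUP)

for path in sorted(set(live) - set(backup)):
    print(f"EXTRA (not in backup): {path}")
for path in sorted(set(live) & set(backup)):
    if live[path] != backup[path]:
        print(f"MODIFIED: {path}")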

Fixing the problem

The fix, in this case, was surprisingly easy. WordPress is quite easy to install, back up, move, reinstall, and restore. I started by deleting the obvious extra files and directory. Then I copied the known-good files from my backups over the infected ones. I could have simply restored everything from the last known-good backup, and that would have worked as well. I compared the good backup with the recovered website, and everything looked good.

The database for the website was not affected in any way, which I verified with a manual review of a data dump.

The real problem

After analyzing the problem, I realized that I was the root cause. My failure to ensure that WordPress was properly updated for this website allowed this to happen. I use separate instances of WordPress for each of my websites, so the others were not affected because they were being updated automatically.

A series of issues led to this failure of mine.

  1. The affected website had been set up with a different email address that I'd stopped using a few months ago. This prevented me from getting the usual notices that upgrades were available.

  2. I'd also failed to configure that site for automatic updates from WordPress.

  3. And I didn't bother to check to see whether the site was being updated.

When it was attacked, the site was at least one full release level behind what was available. The sites that were kept up to date were not affected by this attack.
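One way to catch this kind of drift is to periodically compare each installation's core version against the latest release published by WordPress.org. The sketch below reads the running version from wp-includes/version.php and checks it against the public version-check API; the install path is a placeholder for your own document root.

# Minimal sketch: warn when a local WordPress install falls behind the latest
# core release. The install path is a placeholder.
import json
import re
import urllib.request
from pathlib import Path

WP_ROOT = Path("/var/www/html")   # placeholder: your WordPress document root

# The running version is declared in wp-includes/version.php as $wp_version.
version_php = (WP_ROOT / "wp-includes" / "version.php").read_text()
local = re.search(r"\$wp_version\s*=\s*'([^']+)'", version_php).group(1)

with urllib.request.urlopen("https://api.wordpress.org/core/version-check/1.7/") as resp:
    latest = json.load(resp)["offers"][0]["version"]

if local != latest:
    print(f"Out of date: running {local}, latest is {latest}")
else:
    print(f"Up to date: {local}")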

What I learned

Having written many books and articles in which I discuss the necessity to keep systems updated with the latest versions of operating system and application software, I'm frankly embarrassed by this. However, it has been a good learning experience for me and a reminder that I must not become complacent. I almost didn't write this article! I didn't want to admit to being negligent with one of my own systems. And yet, I felt compelled to write about it in the hope that you learn from my experience.

So, as I have learned from painful experience, it is critical to keep our systems updated. It's one of the most vital steps in the continuing battle to prevent the computers under our care from being infected. The specific details of the infection I experienced are less important than the fact that there are always attacks taking place against our systems. Complacency is one of the attack vectors that crackers can count on to aid their efforts.

My mistake was a good learning experience for me and a reminder that I must not become complacent.

Don Watkins | April 5, 2023

Thanks David for a good reminder to keep Wordpress updated. I don't host my own site but just yesterday I did a security check on the site.

pdecker | April 5, 2023

If I knew the site was a Wordpress site that would be the first thing I would suspect. Wordpress is so commonly used for websites it's a huge target for hackers to attack. Updates are extra important for Wordpress sites.
