
How I use the Git for-each-ref command for DevOps

By Evan "Hippy" Slatis | Mon, 04/04/2022

For most of today's developers, using Git is akin to breathing, in that you can't live without it. Along with version control, Git's use has even expanded in recent years into the area of GitOps, or managing and versioning configurations through Git. What a lot of users don't realize or think about is that Git tracks not only the file changes for each commit but also a lot of metadata around commits and branches. Your DevOps processes can leverage this data to automate IT operations using software development best practices, such as CI/CD.


In my case, I use an automated process (DevOps) whereby a new branch is created every time I promote an image into a downstream CI/CD environment (namespace) in Kubernetes (here is a shameless plug of my Opensource.com article describing the process). This allows me to modify the deployment descriptors for a particular deployment in a downstream CI/CD environment independent of the other environments and enables me to version those changes (GitOps).

I will discuss a typical scenario where a breaking bug is discovered in QA, and no one is sure which build introduced it. I can't, and don't want to, rely on image metadata to find the branch in Git that holds the proper deployment descriptors, for several reasons, especially considering I may need to search one local repository versus multiple remote images. So how can I easily leverage the information in a Git repository to find what I am looking for?

Use the for-each-ref command

This scenario is where the for-each-ref command is of some real use. It allows me to search all my Git repository's branches filtered by the naming convention I use (a good reason to enforce naming conventions when creating branches) and returns the most recently modified branches in descending sort order. For example:

$ git clone git@github.com:elcicd/Test-CICD1.git
$ cd Test-CICD1
$ git for-each-ref --format='%(refname:short) (%(committerdate))' \
                   --sort='-committerdate' \
                   'refs/remotes/**/deployment-qa-*'
origin/deployment-qa-c6e94a5 (Wed May 12 19:40:46 2021 -0500)
origin/deployment-qa-b70b438 (Fri Apr 23 15:42:30 2021 -0500)
origin/deployment-qa-347fc1d (Thu Apr 15 17:11:25 2021 -0500)
origin/deployment-qa-c1df9dd (Wed Apr 7 11:10:32 2021 -0500)
origin/deployment-qa-260f8f1 (Tue Apr 6 15:50:05 2021 -0500)

The commands above clone a repository I often use to test Kubernetes deployments. I then use git for-each-ref to sort the branches by the date of the last commit, restrict the search to the branches that match the deployment branch naming convention for the QA environment, and list the most recently modified branches first. The five branches shown roughly (i.e., not necessarily, but close enough) correspond to the last five versions of the component/microservice I want to redeploy.
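If you would rather have the command itself cap the output instead of eyeballing the top of the list, for-each-ref also accepts a --count option. Here is the same query limited to five results (a sketch; adjust the pattern to your own naming convention):

$ git for-each-ref --count=5 \
                   --format='%(refname:short) (%(committerdate))' \
                   --sort='-committerdate' \
                   'refs/remotes/**/deployment-qa-*'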

deployment-qa-* is based on the naming convention used for deployment branches, which, as the branch names above suggest, encodes the target environment and the source commit hash of the image being deployed:

deployment-<environment>-<source commit hash>

The information returned can be used by developers or QA personnel when running the CI/CD redeployment pipeline to decide what version to roll back/forward to in a Kubernetes namespace and thus eventually return to a known good state. This process narrows down when and what introduced the breaking bug in the contrived scenario.

While the naming convention and scenario above are particular to my needs and automated CI/CD processes, there are other, more generally useful ways to use for-each-ref. Many organizations have branch naming conventions similar to the following:

<version>-<feature|bugfix>-<ID>

The ID value refers to the ID describing the feature or bug in a project management system like Rally or Jira; e.g.:

v1.23-feature-12345

This ID allows users to easily and quickly get some added visibility into the greater development history of the repository and project (using refs/remotes/**/v1.23-feature-*, for example), depending on the development process and branch naming convention policies. The process works on tags, too, so listing out the latest pre-prod, prod, or other specific versions can be done almost as easily (keep in mind that not all tags are pulled by default).
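For example, here is a sketch of pulling up the latest activity for a hypothetical v1.23 release using the convention above; the release number and patterns are illustrative, and you may need to fetch tags first:

$ git fetch --tags
$ git for-each-ref --count=5 \
                   --format='%(refname:short) (%(creatordate))' \
                   --sort='-creatordate' \
                   'refs/tags/v1.23-*' 'refs/remotes/**/v1.23-feature-*'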

Wrap up

These are only a few particular and narrow examples of using the for-each-ref command. From authors to commit messages, the official documentation provides insight into the many details that can be searched, filtered, and reported on.

Search and filter branches and tags for useful information and practical DevOps.


The open source way with artist Jasmine Becket-Griffith of Strangeling.com

By Seth Kenlon | Sun, 04/03/2022

If you're a fan of fantasy art, or if you've been to the shops in Disney World, or maybe you used to hang around Hot Topic as a teen, then you probably know the work of artist Jasmine Becket-Griffith. Her paintings of mythical characters are in a style that has defined a subgenre in modern fantasy fandom and, for me, recalls the art of Wendy Pini's Elfquest combined with the somber mood of Edward Gorey. Jasmine's paintings proved long ago that scary could be cute, and that "cute" could be stunningly magical. And now she's proving that genre-defining art can be open.


Recently, Jasmine Becket-Griffith has placed hundreds (625 to be exact) of high-resolution scans of her paintings in the public domain (or Creative Commons, as needed) so that they're free for everyone to use, re-use, and repurpose. As a longtime fan of fantasy art in general and an admirer of her work, I wouldn't have believed it myself if the announcement hadn't come from her official website, Strangeling.com. I contacted her for more information and found that she was no stranger to open source or the idealism of free culture.

Q: Your illustrations are instantly recognizable and very popular. Why did you choose to make them free for the public to download and use?

Jasmine Becket-Griffith: I had many reasons for gifting my artworks to the public, the primary being exactly that—the public wants them. The public has always had such a positive response and immediate connection to my work, and at some point over the years I believe a "tipping point" was reached where the demand was impossible to meet through traditional channels.

I feel the only (and inevitable) way I can keep up and supply my content to meet the public demand is to remove all barriers between myself as the artist and the public as the consumer. By forfeiting the traditional protections of intellectual property/copyright law and using Creative Commons Open Commercial License (CC0), I've removed the threat of third-party governance of my images, even by me. By offering the works for free or a suggested donation, I have leveled the playing field for fans or entrepreneurs alike to all have open access to the content.

By not involving middlemen, such as licensors and agents, I've granted freedom to any individual or company to be creative or profit without bureaucracy, fees, or predatory egos.

By forfeiting my IP rights on the images, I've also finally relinquished myself of the impossible copyright quagmire of being a modest individual person expected to somehow have the legal resources of a massive brand to litigate against piracy. There aren't enough hours in the day to even file the DMCA takedown notices the way the current system is set up. The only option I had, in this particular instance, to cease becoming a victim was to decriminalize the crime by forfeiting my ownership rights.

By building a simple click-and-download retrieval system, I've created an easy interface for all people, all ages, worldwide, to immediately have access to the high-resolution scans of 625 of my original acrylic paintings for any use.

Fans in Australia can print their own posters at home instead of requiring unnecessary international shipping and the resources involved. Wal-Mart's graphic design department can make t-shirts featuring immediately recognizable, branded, and delightful characters made from my careful high-resolution scans. They can choose to send me a few dollars if they want to. Students and educators can now, free of charge and restriction, download the files to zoom in and see brushstrokes (and cat hairs and fingerprints) in high resolution to help learn about some of the innovative ways I have painted with acrylic paints over the decades.

Crafters and entrepreneurs are encouraged to use the Strangeling images and characters as content to repurpose in their own product lines at art fairs, farmer's markets, or Etsy shops.

Aspiring comic artists, video game designers, TikTok-ers, existing animation giants—all are welcome to utilize the Strangeling Public Domain Project as a shared resource for any purpose that inspires them creatively or commercially and with my explicit blessing.

To help me continue to spend my time creating new content and painting new paintings, it's my conviction that enough of the public will see the value in what I am doing and that many will choose to support the Strangeling Public Domain project by donating to the Tip Jar or pledging $1 or more for staggering additional content up at my Patreon.

Image by: (Jasmine Becket-Griffith, CC0)

Q: From the sheer number of drawings released, you're obviously a prolific artist. What inspires and drives you?

JBG: Oh, that is the question. I'm a compulsive painter; it dominates my time to the exclusion of most else. There's something in me that feels that it is important that I continually try to translate visually the things and ideas from this world that I find mysterious and magical.

It takes many forms, and inspiration comes from everywhere. By removing a lot of the commercial pressures involved with creating, I am now inspired to paint more pieces both for myself and for the public.

Q: "Compulsive painter" must be an understatement. You've released over six-hundred works!

JBG: In the initial stages of project development, I basically picked a good sampling of previously released images that would fit on a thumb drive.

All of the 625 pieces in the initial launch of the Strangeling Public Domain Project are my original acrylic paintings, painted in acrylic paints and water on either canvas or wood panel. It actually represents only a portion of the paintings I have done since I started Strangeling in 1997.

Image by: (Jasmine Becket-Griffith, CC0)

Q: Do you primarily paint digitally, with media, or a mix of both?

JBG: I never paint digitally—I see digital painting as being a completely different sort of physical activity than painting with paint and wood. It is mostly the activity of creating the painting that interests me as an artist. The way my mind works, I see it just as different as playing a scuba-diving video game on your couch as opposed to being underwater in the ocean and having wet hair. If you spent a day doing one as opposed to the other, your day would look completely different. The paintings I do are with acrylic paints (mostly I use Golden brand fluid acrylics), a little tap water, a piece of wood or MDF panel, or canvas. I paint with my fingers and with cheap vegan-friendly brushes.

Q: Do you use any open source software in your art?

JBG: Not to actually paint, but I rely a tremendous amount on open source content for research materials, museum databases for historic painting references, and other channels that have a similar concept driving them.

In a way, I see the Strangeling Public Domain Project as an attempt at democratizing fine art and commercial image licensing as a sort of "Open Source Art Project."

Q: How important is shared culture for artists?

JBG: Very. Artists are basically translators or conduits; it's our responsibility to take our observations and repackage them into something consumable by the public. This way, the message is translated.

As painters, musicians, or other content providers, we must use our skills to decorate these messages aesthetically or viscerally, to create a shared emotional bond. This is how we share and create new culture.

Q: How important is art for society?

JBG: Maybe the most. After all, what else are we struggling for? I don't know if there is a better definition of what society and culture actually are if it's not the traditional call and response of content creator and content consumer—between artist and viewer, between musician and listener. Society needs artists as decoders to translate and write our culture, just as we need our computer coders to construct the framework upon which we build our digital society.

Image by: (Jasmine Becket-Griffith, CC0)

Q: Why did you decide to become a professional artist?

JBG: I think in the end, I had no choice. It was the natural evolution. I began selling my artwork door-to-door at age five and started Strangeling when I was seventeen. I was going to spend all of my time painting anyway—I might as well make a living out of it and try to do something that engages the rest of the world while I'm at it.

Q: Now that the Strangeling Public Domain Project has officially launched, what's next?

JBG: One of the delightful things about the project is that it has opened me up to spend time on projects that require a lot of my personal attention and let me explore new territory.

I'll be doing more new paintings with the Walt Disney Company, LucasFilm, and Pixar—copyrighted, of course by Disney, but painted by me for their WonderGround Galleries, Disneyland, and Disney World theme park locations. Disney has the unique position of having me create licensed works featuring the official Disney characters with their copyright, but painted by me—Jasmine Becket-Griffith—a very commercial but undeniably American cultural keystone.

I've been collaborating with painter David Van Gough for our co-branded Death & the Maiden™ collection. Being involved with jointly owning a brand new intellectual property has been cathartic in a way that could only have been predicated by having my back catalog of previous work being made Public Domain.

I sit down with artist Jasmine Becket-Griffith to discuss open source, free culture, and why she placed hundreds of high-resolution scans of her paintings in the public domain.


Monitor your databases with this open source tool

By Dave Stokes | Fri, 04/01/2022

I have been using databases for a lot longer than I care to admit, and a lot of that time has been spent looking at the entrails of servers, trying to determine exactly what they were doing. Thank goodness for the engineers behind the MySQL Performance Schema and Information Schema and their efforts to provide solid information. And then came the Sys schema with handy prepackaged peeks at the server. Before the advent of those schemas, there was no easy way to get granular information about a database instance.

But peering at tabulated displays of information from a single point in time does not allow for trend-spotting or a quick glance to ascertain a server's status. Being able to spot a trend on a graph or have alerts sent when a threshold is reached is vital. My friends in the PostgreSQL and MongoDB worlds had the same problem. The good news is that there is an open source solution for all three databases that is easy to install and use.


I am a fairly new employee at Percona but have been around MySQL for a long time. One of my first goals was to learn how to set up and use Percona Monitoring and Management (PMM). I know a whole slew of folks who use it happily, but I had only worked with it for a few moments at a trade show. My natural trepidation at installing anything that provided graphical information linked to a database was established well before open source databases were around. This hesitation is rooted in attempts to configure other monitoring software for proprietary databases.

The TL;DR is that PMM is simple to install and use. The documentation is well written, detailed, and handy. The software itself is easy to obtain and install. The overall experience is a ten out of ten.

Test case

I like to try new stuff on old laptops, so I installed a fresh copy of Ubuntu 20.04 LTS. Next, I read the PMM documentation.

I should say I read the documentation twice while muttering it can't be that simple. At the end of this article you will find the commands I entered to install the PMM server and the agents (MySQL, PostgreSQL, and Mongo). It's a cookbook showing how to recreate this test case.


Prerequisite

The prerequisite for installing PMM is Docker, which proves to be the most intensive part of the entire installation. Thankfully the package management software for Ubuntu (apt) makes this very simple. 

Installing the Percona Monitoring and Management server

Installing the PMM server is simple for anyone familiar with the APT package manager. Pay attention to the output that provides the URLs for connecting to the PMM server. In my case, the addresses were https://127.0.0.1:443/, https://192.168.1.209:443/, and https://172.17.0.1:443/.

You will have to log in to the PMM dashboard (default account and password are admin and admin).

Image by: (David Stokes, CC BY-SA 4.0)

After a successful login, the PMM displays a dashboard.

Image by: (David Stokes, CC BY-SA 4.0)

At this time, PMM is only monitoring the underlying system and the database it uses for gathering statistics, which is why the Monitored Nodes and Monitored DB counts in the image above both display 1.

The database clients

The clients monitor the database instances, and again clients are simple to install with the package manager. The next step is registering the client with the server:

$ sudo pmm-admin config --server-insecure-tls --server-url=https://admin:admin@127.0.0.1

The final steps are configuring the database server. In the example below, Percona's MySQL is installed, and an account is created for gathering statistics. Finally, the client collects information from the database instance:

$ sudo pmm-admin add mysql --username=pmm --password=pass --query-source=perfschema

For PostgreSQL and MongoDB, replace mysql in the add command with the respective database type you want to monitor.
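For example, the equivalent commands look roughly like this (the pmm account and password are the same placeholders used above, and each database type has its own prerequisites, so treat these as a sketch and consult the PMM documentation):

$ sudo pmm-admin add postgresql --username=pmm --password=pass
$ sudo pmm-admin add mongodb --username=pmm --password=pass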

Image by: (David Stokes, CC BY-SA 4.0)

You will notice that the Monitored Node count increased to 2 above.
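You can also confirm from the command line which services the local agent is monitoring; assuming the default setup, a quick check looks like this:

$ sudo pmm-admin list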

PMM gives you access to many different views of what the server is doing. You can see the overall health of the server itself.

Image by: (David Stokes, CC BY-SA 4.0)

The general MySQL dashboard displays the overall health of the system.

Image by: (David Stokes, CC BY-SA 4.0)

And you can easily study the load on the system.

Image by: (David Stokes, CC BY-SA 4.0)

Or study individual queries.

Wrap up

Percona Monitoring and Management is an open source tool to monitor your MySQL, MongoDB, or PostgreSQL instances. It is easy to install and provides great insight into your servers.

Please send the author any feedback or questions you have on this subject.

Cookbook

The following are the commands and responses needed to install PMM and Percona's MySQL on a fresh installation of Ubuntu 20.04 LTS.

1. Install Docker

$ sudo update-manager

$ sudo apt-get install ca-certificates curl gnupg lsb-release

$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

$ sudo apt-get update

2. Install PMM Server

$ curl -fsSL https://www.percona.com/get/pmm | sudo /bin/bash

Gathering/downloading required components, this may take a moment

Checking docker installation - installed.

Starting PMM server...
Created PMM Data Volume: pmm-data
Created PMM Server: pmm-server
    Use the following command if you ever need to update your container by hand:
    docker run -d -p 443:443 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:2

PMM Server has been successfully setup on this system!

You can access your new server using the one of following web addresses:
    https://127.0.0.1:443/
    https://192.168.1.209:443/
    https://172.17.0.1:443/

The default username is 'admin' and the password is 'admin' :)
Note: Some browsers may not trust the default SSL certificate when you first open one of the urls above.
If this is the case, Chrome users may want to type 'thisisunsafe' to bypass the warning.

Enjoy Percona Monitoring and Management!

3. Add the Percona repo

$ wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb

$ sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb

$ sudo apt-get update

Get:1 http://repo.percona.com/percona/apt focal InRelease [15.8 kB]
Hit:2 http://us.archive.ubuntu.com/ubuntu focal InRelease                                                                     
Hit:3 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease                                
Hit:4 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease                              
Hit:5 http://security.ubuntu.com/ubuntu focal-security InRelease                                 
Get:6 http://repo.percona.com/prel/apt focal InRelease [9,779 B]                                 
Hit:7 https://download.docker.com/linux/ubuntu focal InRelease                                   
Get:8 http://repo.percona.com/percona/apt focal/main Sources [4,509 B]
Get:9 http://repo.percona.com/percona/apt focal/main amd64 Packages [18.1 kB]
Get:10 http://repo.percona.com/percona/apt focal/main i386 Packages [414 B]
Get:11 http://repo.percona.com/prel/apt focal/main i386 Packages [750 B]
Get:12 http://repo.percona.com/prel/apt focal/main amd64 Packages [851 B]
Fetched 50.2 kB in 2s (22.7 kB/s)
Reading package lists... Done

4. Install the agents

$ sudo apt-get install pmm2-client

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  pmm2-client
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 78.0 MB of archives.
After this operation, 195 MB of additional disk space will be used.
Get:1 http://repo.percona.com/percona/apt focal/main amd64 pmm2-client amd64 2.26.0-6.focal [78.0 MB]
Fetched 78.0 MB in 9s (9,078 kB/s)                                                                                            
Selecting previously unselected package pmm2-client.
(Reading database ... 144323 files and directories currently installed.)
Preparing to unpack .../pmm2-client_2.26.0-6.focal_amd64.deb ...
Adding system user `pmm-agent' (UID 127) ...
Adding new group `pmm-agent' (GID 134) ...
Adding new user `pmm-agent' (UID 127) with group `pmm-agent' ...
Creating home directory `/usr/local/percona' ...
Unpacking pmm2-client (2.26.0-6.focal) ...
Setting up pmm2-client (2.26.0-6.focal) ...
Created symlink /etc/systemd/system/multi-user.target.wants/pmm-agent.service → /lib/systemd/system/pmm-agent.service.

$ sudo pmm-admin config --server-insecure-tls --server-url=https://admin:admin@127.0.0.1:443

Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.

5. Install Percona MySQL

$ wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb

--2022-03-08 12:57:38--  https://repo.percona.com/apt/percona-release_latest.generic_all.deb
Resolving repo.percona.com (repo.percona.com)... 149.56.23.204
Connecting to repo.percona.com (repo.percona.com)|149.56.23.204|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11804 (12K) [application/x-debian-package]
Saving to: ‘percona-release_latest.generic_all.deb’

percona-release_latest.generic_ 100%[======================================================>]  11.53K  --.-KB/s    in 0s

2022-03-08 12:57:38 (96.1 MB/s) - ‘percona-release_latest.generic_all.deb’ saved [11804/11804]

$ sudo dpkg -i percona-release_latest.generic_all.deb

(Reading database ... 144372 files and directories currently installed.)
Preparing to unpack percona-release_latest.generic_all.deb ...
Unpacking percona-release (1.0-27.generic) over (1.0-27.generic) ...
Setting up percona-release (1.0-27.generic) ...
* Enabling the Percona Original repository
<*> All done!
==> Please run "apt-get update" to apply changes
* Enabling the Percona Release repository
<*> All done!
==> Please run "apt-get update" to apply changes
The percona-release package now contains a percona-release script that can enable additional repositories for our newer products. For example, to enable the Percona Server 8.0 repository use:
  percona-release setup ps80
Note: To avoid conflicts with older product versions, the percona-release setup command may disable our original repository for some products. For more information, please visit:
  https://www.percona.com/doc/percona-repo-config/percona-release.html

$ sudo percona-release setup pdps-8.0

* Disabling all Percona Repositories
* Enabling the Percona Distribution for MySQL 8.0 - PS repository
Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
Get:2 http://repo.percona.com/pdps-8.0/apt focal InRelease [9,806 B]
Hit:3 http://us.archive.ubuntu.com/ubuntu focal InRelease
Hit:4 http://repo.percona.com/prel/apt focal InRelease
Hit:5 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:6 https://download.docker.com/linux/ubuntu focal InRelease
Hit:7 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease
Get:8 http://repo.percona.com/pdps-8.0/apt focal/main Sources [6,609 B]
Get:9 http://repo.percona.com/pdps-8.0/apt focal/main amd64 Packages [67.4 kB]
Fetched 83.8 kB in 1s (63.1 kB/s)
Reading package lists... Done

$ sudo apt install percona-server-server

$ mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'"
$ mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'"
$ mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'"

$ sudo apt install percona-mysql-shell

$ mysqlsh root@localhost
(in SQL mode)
CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, SUPER, REPLICATION CLIENT, RELOAD, BACKUP_ADMIN ON *.* TO 'pmm'@'localhost';

6. Start the client

$ sudo pmm-admin add mysql --username=pmm --password=pass --query-source=perfschema

MySQL Service added.
Service ID : /service_id/d774faf2-fd3c-4758-9db9-3e1edb65b292
Service name: test2-mysql

Table statistics collection enabled (the limit is 1000, the actual table count is 341).



Make a cup of coffee with Git

By Moshe Zadka | Fri, 04/01/2022

Git can do everything—except make your coffee. But what if it could?

Like most people, I already have a dedicated coffee brewing device listening to HTCPCP requests. All that is left is to hook Git up to it.

The first step is to write the client code, using httpx:

>>> import httpx
>>> result = httpx.request("BREW", "http://localhost:1111/")
>>> result.text
'start'

Ah, nothing nicer than a coffee pot starting to brew. You need to do a few more steps to make this available to git.

A proper way to do it would be to put this in a package and use pipx to manage it. For now, install httpx into your user environment:

$ pip install --user httpx

Then put this code in a script:

#!/usr/bin/env python
# This script should be in ~/.bin/git-coffee
# Remember to chmod +x ~/.bin/git-coffee
import httpx
result = httpx.request("BREW", "http://10.0.1.22:1111/")
result.raise_for_status()
print(result.text)
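Before wiring this into Git, you can sanity-check the endpoint itself with curl, which is happy to send the nonstandard BREW method; the host and port here mirror the script above and are assumptions about your own setup:

$ curl -X BREW http://10.0.1.22:1111/
start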

Make sure that ~/.bin is in your path:

$ (echo $PATH | grep -q ~/.bin) || echo "Make sure to add ~/.bin to your path!"

Finally, run your new git subcommand and enjoy your morning coffee:

$ git coffee
start

The finer things in life

Python, Git, and coffee are a good combination for any open source programmer or user. I leave the exercise of implementing a coffee brewing terminal to you (maybe you have a spare Raspberry Pi looking for a purpose?). If you don't have a coffee machine configured for HTTP requests, then at the very least, you've learned how easy it is to use Python and the httpx module to make HTTP requests. So go get yourself a coffee. You've earned it!

I created my own Git command to brew my morning coffee.


How I customize my Linux window decorations

By David Both | Thu, 03/31/2022

One thing I especially like about Linux is the amazing and vast array of choices in almost everything. Don't like one application for something? There are usually several more you can choose from. Don't like how the desktop works? Pick one of many other desktops. Don't like the window decorations on your desktop? There are many others you can download and try.

What if you don't like one little thing about your choice of window decorations—and all other sets of decorations are not even close?

One of the advantages of open source is that I can change anything I want. So I did.

I use the Alienware-Bluish theme on my Xfce desktop. I like its futuristic look, the cyan and gray colors that match my dark primary color schemes—and sometimes my moods. It has a nice 3D relief in the corners, and the corners and edges are wide enough to grab easily, even with my Hi-DPI resolution. Figure 1 shows the original Alienware-Bluish decorations with the gradient-black-324.0 color scheme I prefer.

Figure 1. An active window (with the focus) using the original Alienware-Bluish decorations (David Both, CC BY-SA 4.0)

Two things bother me about this window. First, the intensity of the window name in the title bar for active windows is just too dull for me. The inactive windows have a bright white title that attracts my eye more than the dull cyan color of the active title.

Second, I like dark wallpapers, as you can see in Figure 1. Because the bottom edge of the window does not have a cyan highlight, it can be difficult to determine where the bottom of the windows are located, especially when there are a lot of overlapping windows open.

Pretty minor annoyances, I know, but they just bothered me. And that is one of the coolest things about open source: I can modify anything I want, even for trivial reasons. It just takes a bit of knowledge, which I am sharing with you.

Where are the decoration files?

The first thing I needed to do was locate the files for the decorations I am using, Alienware-Bluish. I know where to look because of the many decorations I have downloaded over the years.

All of the decorative themes I download are located in the /usr/share/themes/ directory so all users will have access to them. Each theme is located in a subdirectory, so the Alienware-Bluish theme is located in the /usr/share/themes/Alienware-Bluish/xfwm4/ directory. The name xfwm4 stands for Xfce window manager, version 4.

If you install your themes in your home directory, they will be located in the ~/.local/share/themes/Alienware-Bluish/xfwm4 directory. Themes stored in your home directory are not available to other users on your computer.

Preparation

I don't like to work on the original files for anything important like a theme, so I copied the /usr/share/themes/Alienware-Bluish directory and its contents to a new directory, /usr/share/themes/Alienware-Bluish-2, and changed the ownership of the copied files to my own non-root account so that I can edit them. This gives me a safe place to work without inadvertently damaging the original beyond repair.

Besides, I want to keep the original so I can continue to use it.
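A minimal command-line sketch of that preparation step, assuming you want the copy owned by your own account, looks like this:

$ sudo cp -r /usr/share/themes/Alienware-Bluish /usr/share/themes/Alienware-Bluish-2
$ sudo chown -R $USER:$USER /usr/share/themes/Alienware-Bluish-2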

Getting started

View the files in the /usr/share/themes/Alienware-Bluish-2/xfwm4 directory using Thunar or another file manager that lets you view image thumbnails, then zoom in to enlarge the images so you can see them better. Each *.xpm (X11 Pixmap) file is an image of a small window frame section, as you can see in Figure 2.

Figure 2. The files that make up the various segments of a window (David Both, CC BY-SA 4.0)

Notice that the different components each have an active and an inactive version. In the case of this theme, they are mostly the same. Because I now own these copied files, I can edit them.

Look especially at the bottom-active.xpm and bottom-inactive.xpm files. These are the two files that define the look of the bottom of the window. These two images are only one pixel wide, so they are essentially invisible in Figure 2. The window manager uses as many instances as necessary to create the bottom edge of the window.

Themes for other desktops may use different file formats.

Making the changes

First, I changed the title color. The themerc file is a plain ASCII text file that defines several aspects of the title bar. Here is its content for this theme:

full_width_title=true
title_alignment=center
button_spacing=2
button_offset=30
button_layout=S|HMC
active_text_color=#699eb4
inactive_text_color=#ffffff
title_vertical_offset_active=5
title_vertical_offset_inactive=5

The hex numbers in the text color entries define the colors for active and inactive title text. To change the active title text, I need to determine what value to use in this field. Fortunately, there is a tool that can help. The KcolorChooser can be used to select a color from the color palette, or the Pick Screen Color button can be used to choose a color already displayed on the screen.

I used this color picker to locate the cyan highlight in the side of the window, but I found it just a little too bright for the bottom. I wanted it a bit less bright, so I used the tools on the KcolorChooser to adjust the color and intensity to my preference. You can see the result in Figure 3.

Figure 3. Using the KcolorChooser to select a specific color (David Both, CC BY-SA 4.0)

The KcolorChooser can be installed if you don't have it already. On Fedora and other Red Hat-based distros, you can use the following command:

dnf -y install kcolorchooser

If you don't already have the KDE desktop or any of its tools installed, this command will install a large number of KDE libraries and other dependencies. It was already installed on my workstation because I have the KDE Plasma desktop installed.

After deciding which color I wanted, I obtained the hex digits for that color from the HTML text box. I then typed those into the themerc file so the active_text_color line looks like this:

active_text_color=#00f1f1

The next part, changing the bottom-active.xpm image file, is a little more complicated. I used GIMP to modify the bottom-active.xpm file, but you can use any graphics editor you are comfortable with. One catch: the image is so small that it needs to be enlarged by a huge amount to be a reasonable size for editing. I found that 8,000% worked well on my display. You can see this in Figure 4. This image is 6 pixels high by 1 pixel wide and consists of black and shades of dark gray.

Figure 4. The bottom-active.xpm file shown at 8,000% magnification in GIMP (David Both, CC BY-SA 4.0)

I used the KcolorChooser to find a shade of cyan a little darker than that on the side and top edges of the window. After some playing around with it, I settled on the shade #10b0ae, which I then copied into the text field of the GIMP colors dialog. I had to add this dialog to the dock area at the upper right of the GIMP window by selecting Menu Bar Tools > Dockable Dialogs > Colors. Alternatively, I could have used the color picker, the eye-dropper icon, in the GIMP Colors dialog to simply pick the color from the sample display area of the KcolorChooser.

At any rate, I now had the color I liked in the GIMP color dialog. I used the Rectangle Select tool to select the 3 pixels highlighted in Figure 5 and the Bucket Fill tool to fill the selected area with the new color. Figure 5 shows the final color.

Figure 5. The modified bottom-active.xpm file with the addition of cyan (David Both, CC BY-SA 4.0)

Exporting the revised file

GIMP converted the .xpm file into a data format it could use, but it can't save the data directly into a .xpm file. Instead, I used the export function to save the file. This was not a big deal, but a bit unexpected the first time.

During the export, I was presented with a dialog asking for an Alpha Threshold value. I don't know enough about GIMP or manipulating graphics files to know what that is, so I left it alone and clicked on the Export button.

Testing

The changes I made to this theme are easy to test. I simply used the Window Manager to select the Alienware-Bluish-2 theme. This loads the new theme instantly, so I can see the results right away.

Had I not liked the results, I could have made additional changes and tested again. At this point, however, I would have had to change back to the original Alienware-Bluish theme (or any other theme) and then back to the Alienware-Bluish-2 theme to verify the change. The revised files are not loaded until the theme is re-read.
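If you prefer the command line, the same theme switch can be scripted with xfconf-query; treat this as a sketch and verify the channel and property names on your own system:

$ xfconf-query -c xfwm4 -p /general/theme -s "Alienware-Bluish-2"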

Figure 6 shows the revised theme using the cyan highlights in the bottom window edge. I think it looks much better.

Figure 6. A window showing the altered bottom edge (David Both, CC BY-SA 4.0)

Final thoughts

I had no idea how to fix minor problems and annoyances with window decorations until I started this little project. It did take some time and research to figure out how to do this. I learned there is an xpm graphics format, and I learned a little more about working in GIMP, including how to export into that file format. I also discovered this was a fairly easy change to make.

I still don't feel I have the skill or creative vision for graphics to design a completely new window decoration theme. But now I can easily make minor changes to themes someone else has created.

I can make minor modifications to the Alienware-Bluish theme on my Xfce desktop to suit my aesthetic.


5 things open source developers should know about cloud services providers

By Seth Kenlon | Wed, 03/30/2022

"The cloud" refers to both the collective computing power of an interconnected array of servers and the software layer enabling those computers to work together to create dynamically defined infrastructure. Because many consider the cloud the new frontier of computing, it's dominated the software industry for the past several years. Still, your individual level of involvement with it probably depends on your career and how much you acknowledge that you're using the cloud in your computing.

If you're a programmer, you might be looking to move your development onto the cloud, either for work or for fun, but it doesn't take long to realize that choosing a cloud provider can be an overwhelming prospect, especially for an open source enthusiast. I've written about the importance of an open cloud in the past. Luckily, there are very direct ways you, as a developer, regardless of your experience, can help ensure that the cloud fosters and strengthens open source.

Here are five things developers should know about cloud providers and what the cloud means for open source.

The cloud provider doesn't have to define your platform

To develop software on the cloud, you have two choices. You can build your own miniature cloud, or you can buy time on somebody else's cloud.

Building your own is fun. Given enough contributors to your cluster, it can also be effective. But, if you need your software to grow without practical limits, it's probably not realistic to run your own cloud. Buying into a cloud doesn't have to mean you lose control of your computing. A cloud provider essentially is a vendor between you and virtual infrastructure. You need computing power, and cloud providers are eager to sell it to you.

Just like when you buy a new laptop off the shelf, however, nobody's going to force you to use the closed source bloatware that happens to come along with it. When you rent space on the cloud, you can run as many Linux containers as you want, but the interface you use to create and deploy those containers, and the infrastructure those containers connect to, may not be open source. You can think of your cloud interface as the OS and your containers as your choice of Apache httpd, Postfix, Dovecot, and so on.

To run an open source interface, choose to run an open source console, such as OpenShift (based on the upstream OKD project). If the cloud provider you end up on doesn't directly offer an open source console, look at a service like Red Hat OpenShift Service on AWS (ROSA) that puts your choice of platform first.

The cloud is just somebody else's computer, so trust your provider

If you work with, on, or around computers, even tangentially, you're probably dealing with the cloud already. You probably have at least some understanding that when an application is running inside of a browser, it's essentially running on somebody else's computer (that is, a company's array of servers).

There are plenty of reasons to think strongly about whose hardware houses your personal, organizational, and customer data. However, as a developer, there's also reason to consider the toolchain you build your workflow on. Just because you sign up with a cloud provider doesn't mean you can be forced into a specific toolchain. You should never feel hesitant to migrate from a service because you're afraid of having to rebuild your own development environment. Choose a provider that gives you the flexibility to build your environment, your CI/CD pipeline, and your release model in a way that's sustainable for you.

Developing on the cloud still means developing on your computer

If you haven't developed anything on the cloud yet, it may seem foreign to you, but developing on the cloud isn't all that different than developing on your computer. If anything, it enforces really good development practices that you may have been meaning to institute for years.

Whether it's on the cloud or just inches away from your keyboard, you have a development environment to consider. You have libraries you need to track, manage, and update. You have an IDE that helps you with syntax, consistency, variable names, functions and methods, and so on. A good cloud provider lets you use the tools you want to use, whether it's a text editor, a container-friendly IDE, or a cloud-aware IDE.

Open standards still matter

Don’t let the compute nodes fool you. Just because bits are being crunched offsite doesn't mean you have to commit your data to a black box. The work of OpenStack is ensuring that the very foundation of the cloud can be open, which brings cloud development and management closer to your desktop than ever. The work of the Open Container Initiative has enabled applications like Podman and LXC to keep containers open (and daemonless and rootless). Open standards and open specifications empower you as a developer to choose the best solution for your work.

When choosing a cloud provider, don't settle for anything less.

We can build an open cloud

The cloud already powers much of the internet, but it has even greater potential the more open it becomes. Supporting open cloud providers using open source technology is important, but it's just as important to help build it. The cloud, just like our personal computers, the internet, and even our day-to-day communities, is only as open as we choose to make it.

Develop using open source and release open source, on the cloud, on the desktop, and everywhere.



How Aqua Security is approaching DevSecOps in 2022

By Gaurav Kamathe | Wed, 03/30/2022

I recently took the opportunity to discuss open source and security challenges with Itay Shakury of Aqua Security. What follows is a fascinating discussion about current issues, the future, and specific cloud-native tools that address the concerns of today's Chief Information Security Officers (CISOs).

Itay, could you please introduce yourself to our readers?

Itay Shakury, Director of Open Source at Aqua Security. I have nearly 20 years of experience in tech, spent across engineering, software architecture, IT, product management, consulting, and more. In recent years, my career path has led me to cloud-native technologies and open source software.


Tell us about Aqua Security. What problems is it trying to address?

Aqua is pioneering cloud security with its integrated cloud-native application protection platform (CNAPP) that provides prevention, detection, and response automation across the entire application lifecycle. Our suite of solutions enables organizations to secure the supply chain, cloud infrastructure, and running workloads. Aqua's family of open source projects is an accessible entry-point that allows anyone to get started with cloud-native security immediately and at no cost while at the same time driving innovation for our commercial offerings.

As Director of Open Source at Aqua Security, what are your major responsibilities?

My primary responsibility is developing and executing on open source strategy. The strategy includes refining the OSS projects' roadmap, identifying community initiatives for engagement, and making open source viable for commercial use. As an engineering manager, I am leading Aqua's open source teams. Our OSS group is globally distributed and remote-first. This group of talented open source engineers is turning our OSS vision into reality, and I'm fortunate enough to have been part of it.

What challenges do companies face in securing Kubernetes? How should they approach this problem?

One challenge is addressing security across the complete application lifecycle. In the past few years, more and more responsibilities have been put in developers' hands, especially with Kubernetes and cloud-native technologies. We are seeing this across different fields like quality, operations, support, and security. This "shift left" approach is introducing security controls early (or "left") in the development lifecycle, which obviously is a welcome change, but it leaves the organization with the challenge of bridging these newly added controls with preexisting production security (or "right" side).


Aqua Security has a variety of popular open source projects. Can you tell us about them?

We have a portfolio of tools and solutions across three domains: security scanning, Kubernetes security, and runtime security.

For security scanning, our open source project Trivy is leading the way. Trivy scans container images and code repositories for known vulnerabilities in packages and libraries. In addition to that, Trivy scans Infrastructure as Code files for misconfigurations and common security issues. Trivy is very well received in the industry and has a robust and supportive community of contributors, which makes it so successful. We recently celebrated a milestone of crossing 10,000 GitHub stars!
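For readers who have not tried it, a basic Trivy scan of a container image is a single command; the image name here is only an example:

$ trivy image alpine:3.15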

In Kubernetes security, Aqua's Starboard assesses your Kubernetes clusters' security posture. It is powered by our other project, kube-bench, which is already a staple of Kubernetes security. Since Starboard is a Kubernetes operator, it will continuously and automatically detect changes to the cluster and application state and maintain an up-to-date report of your security posture.

Runtime security is about detecting and preventing suspicious behavior during production. Our project Tracee achieves that by leveraging a cutting-edge technology, eBPF, and is leading the way for how that technology can be applied in this use case.

The use of eBPF technology is growing in security applications and tooling (such as Tracee). Has it reached a point where it can go mainstream?

eBPF has been around for a while and has seen real-world usage in some of the biggest technology companies in the world. The technology is solid (especially its recent editions), but it's still not so accessible for developers who are programming with it, nor for users who are adopting it. One of the biggest challenges currently is with building and distributing eBPF-powered applications. Unlike "normal" applications, which the vendor would build and then ship the resulting artifact to users, eBPF-based applications are much more sensitive to environmental nuances and therefore are commonly shipped as source code that the user needs to compile on-site. We have been working with the community and industry colleagues to solve these challenges upstream so that eBPF can be more widely available and accessible. This actually resulted in another open source project we released called "btfhub."

Supply chain security is currently one of the topmost items for CISOs worldwide. What other security issues do you think need our collective focus and attention?

Supply chain is definitely getting a lot of attention. At Aqua, we identified the security gaps that many organizations face, and we acquired a company specializing in supply chain security–Argon Security. Aqua and Argon are working together to address these challenges, and I'm sure that our open source family will soon benefit from it.

Most supply chain solutions rely on implementing tools and practices early in the software development lifecycle. This is part of the movement to "shift left," moving security from production to the developers. I think this movement is great, but stitching together the different tools that the organization adopts across the "left" and "right" side of the house is still a challenge, and this is usually next on a CISO's desk.

Security is a growing field, with many wanting to make it a career. What are the top skills/traits that you prioritize while hiring?

Curiosity is something that I think helps people in engineering but especially in InfoSec. Being intrinsically curious and having the drive to investigate and understand how things work is very helpful for a security engineer.

In open source specifically, we are looking for engineers with an additional layer of skills on top of the core technological proficiency. In particular, we value softer skills that contribute to our approach that the open source engineers not only write the code but also plan the product roadmap, speak about it, promote it, and build a community around it.

What does Itay enjoy doing in his free time?

Technology is a big part of my life, and I'm also drawn to it in my free time. But besides that, spending time with my wife and son, hikes, and good food. I also never miss my morning yoga routine.

I'd like to thank Itay for taking the time to discuss the security concerns we all face in today's cloud-native, containerized world. He has provided some great insights and shows just how many solutions open source software provides.

I sit down with Aqua Security's Director of Open Source to discuss cloud trends, Kubernetes security, hiring for InfoSec jobs, and everything in between.


Virtual Kubernetes clusters: A new model for multitenancy

By Lukas Gentele | Tue, 03/29/2022

If you speak to people running Kubernetes in production, one of the complaints you'll often hear is how difficult multitenancy is. Organizations use two main models to share Kubernetes clusters with multiple tenants, but both present issues. The models are:

  • Namespace-based multitenancy
  • Cluster-based multitenancy

The first common multitenancy model is based on namespace isolation, where individual tenants (a team developing a microservice, for example) are limited to using one or more namespaces in the cluster. While this model can work for some teams, it has flaws. First, restricting team members to accessing resources only in namespaces means they can't administer global objects in the cluster, such as custom resource definitions (CRDs). This is a big problem for teams working with CRDs as part of their applications or in a dependency (for example, building on top of Kubeflow or Argo Pipelines).


Second, a much bigger long-term maintenance issue is the need to constantly add exceptions to the namespace isolation rules. For example, when using network policies to lock down individual namespaces, admins likely find that some teams eventually need to run multiple microservices that communicate with each other. The cluster administrators somehow need to add exceptions for these cases, track them, and manage all these special cases. Of course, the complexity grows as time passes and more teams start to onboard to Kubernetes.

The other standard multitenancy model, using isolation at the cluster level, is even more problematic. In this scenario, each team gets its own cluster, or possibly even multiple clusters (dev, test, UAT, staging, etc.). The immediate problem with using cluster isolation is ending up with many clusters to manage, which can be a massive headache. And all of those clusters need expensive cloud computing resources, even if no one is actively using them, such as at night or over the weekend. As Holly Cummins points out in her KubeCon 2021 keynote, this explosion of clusters has a dangerous impact on the environment.

Until recently, cluster administrators had to choose between these two unsatisfying models, picking the one that better fits their use case and budget. However, there is a relatively new concept in Kubernetes called virtual clusters, which is a better fit for many use cases.

What are virtual clusters?

A virtual cluster is a shared Kubernetes cluster that appears to the tenant as a dedicated cluster. In 2020, our team at Loft Labs released vcluster, an open source implementation of virtual Kubernetes clusters.

With vcluster, engineers can provision virtual clusters on top of shared Kubernetes clusters. These virtual clusters run inside the underlying cluster's regular namespaces. So, an admin could spin up virtual clusters and hand them out to tenants, or—if an organization already uses namespace-based multitenancy, but users are restricted to a single namespace—tenant users could spin up these virtual clusters themselves inside their namespace.

This combines the best of both multitenancy approaches described above: tenants are restricted to a single namespace, and no exceptions are needed because they have full control inside the virtual cluster but only very restricted access outside it.

Like a cluster admin, the user has full control inside a virtual cluster. This allows them to do anything within the virtual cluster without impacting other tenants on the underlying shared host cluster. Behind the scenes, vcluster accomplishes this by running a Kubernetes API server and some other components in a pod within the namespace on the host cluster. The user sends requests to that virtual cluster API server inside their namespace instead of the underlying cluster's API server. The cluster state of the virtual cluster is also entirely separate from the underlying cluster. Resources like Deployments or Ingresses created inside the virtual cluster exist only in the virtual cluster's data store and are not persisted in the underlying cluster's etcd.
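
To make that concrete, here is a rough sketch of the workflow with the vcluster CLI. The names are hypothetical, and the exact behavior of vcluster connect varies by release (it either switches your kubectl context or writes a local kubeconfig file to use):

$ vcluster create dev-1 -n team-a    # provision a virtual cluster inside the host namespace team-a
$ vcluster connect dev-1 -n team-a   # point kubectl at the virtual API server
$ kubectl get namespaces             # inside the vcluster, the tenant sees a full admin view

# Back on the host cluster, the entire virtual cluster is just pods in one namespace:
$ kubectl get pods -n team-a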

This architecture offers significant benefits over the namespace isolation and cluster isolation models:

  1. Since the user is an administrator in their virtual cluster, they can manage cluster-wide objects like CRDs, which overcomes that big limitation of namespace isolation.
  2. Since users communicate with their own API servers, their traffic is more isolated than in a normal shared cluster. This also provides a form of federation across API servers, which can help with scaling API requests in high-traffic clusters.
  3. Virtual clusters are very fast to provision and tear down again, so users can benefit from using truly ephemeral environments and potentially spin up many of them if needed.

[ Learn what it takes to develop cloud-native applications using modern tools. Download the eBook Kubernetes-native microservices with Quarkus and MicroProfile. ] 

How to use virtual clusters

There are many use cases for virtual clusters, but here are a few that we've seen most vcluster users adopt.

Development environments

Provisioning and managing dev environments is currently the most popular use case for vcluster. Developers writing services that run in Kubernetes clusters need somewhere to run their applications while they're in development. While it's possible to use tools like Docker Compose to orchestrate containers for dev environments, developers who code against a real Kubernetes cluster get an experience much closer to how their services will run in production.

Another option for local development is using a tool like Minikube or Docker Desktop to provision Kubernetes clusters, but that has some downsides. Developers must own and maintain that local cluster stack, which is a burden and a huge time sink. Also, those local clusters may need a lot of computing power, which is difficult on local dev machines. We all know how hot laptops can get during development, and it may not be a good idea to add Kubernetes to the mix.

Running virtual clusters as dev environments in a shared dev cluster addresses those concerns. In addition, as mentioned above, vclusters are quick to provision and delete. Admins can remove a vcluster just by deleting the underlying host namespace with a single kubectl command, or by running the vcluster delete command provided with the command-line interface tool. The speed of infrastructure and tooling in dev workflows is critical because improving cycle times for developers can increase their productivity and happiness.
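
For example, tearing down a dev environment is a one-liner either way (the names below are hypothetical):

$ vcluster delete dev-1 -n team-a    # remove the virtual cluster with the CLI
$ kubectl delete namespace team-a    # or simply delete the host namespace that contains it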

CI/CD pipelines

Continuous integration/continuous delivery (CI/CD) is another strong use case for virtual clusters. Typically, pipelines provision systems under test (SUTs) to run test suites against. Often, teams want those to be fresh systems with no accumulated cruft that may interfere with testing. Teams running long pipelines with many tests may be provisioning and destroying SUTs multiple times in a test run. If you've spent much time provisioning clusters, you have probably noticed that spinning up a Kubernetes cluster is often a time-consuming operation. Even in the most sophisticated public clouds, it can take more than 20 minutes.

Virtual clusters are fast and easy to provision with vcluster. When running the vcluster create command to provision a new virtual cluster, all that's involved behind the scenes is running a Helm chart and installing a few pods. It's an operation that usually takes just a few seconds. Anyone who runs long test suites knows that any time shaved off the process can make a huge difference in how quickly the QA team and engineers receive feedback.

In addition, organizations could use vcluster's speed to improve any other processes where lots of clusters are provisioned, like creating environments for workshops or customer training.

Testing different Kubernetes versions

As mentioned earlier, vcluster runs a Kubernetes API server in the underlying host namespace. It uses the K3s (Lightweight Kubernetes) API server by default, but you can also use k0s, the Amazon EKS Distro, or the regular upstream Kubernetes API server. When you provision a vcluster, you can specify the version of Kubernetes to run it with, which opens up many possibilities. You could:

  • Run a newer Kubernetes version in the virtual cluster to get a look at how an app will behave against the newer API server.
  • Run multiple virtual clusters with different versions of Kubernetes to test an operator against a set of different Kubernetes distros and versions while developing or during end-to-end testing (see the sketch after this list).
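
As a sketch of that workflow, newer vcluster releases let you pick the distribution and Kubernetes version at creation time. The flag names below are assumptions and may differ by release, so verify them with vcluster create --help:

# Flag names are assumptions; check: vcluster create --help
$ vcluster create k8s-newer-test -n team-a --kubernetes-version 1.24   # try a newer API server version
$ vcluster create k0s-test -n team-a --distro k0s                      # try a different distribution
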
Learn more

There may not be a perfect solution for Kubernetes multitenancy, but virtual clusters address many issues with current tenancy models. Vcluster's speed and ease of use make it a great candidate for many scenarios where you would prefer to use a shared cluster but also wish to give users the flexibility to administer their own clusters. There are many use cases for vcluster beyond the ones described in this article.

To learn more, head to vcluster.com, or if you'd like to dive right into the code, download it from the GitHub repo. The Loft Labs team maintains vcluster, and we love getting ideas about it. We have added many features based on user feedback. Please feel free to open issues or PRs. If you'd like to chat with us first about your ideas or have any questions while exploring vcluster, we also have a vcluster channel on Slack.

Try vcluster, an open source implementation that tackles certain aspects of typical namespace- and cluster-based isolation models.

Kubernetes Containers

5 key insights for open source project sustainability in 2022

Tue, 03/29/2022 - 15:00
5 key insights for open source project sustainability in 2022 Sean P. Goggins Tue, 03/29/2022 - 03:00

Many technology firms are turning to open source tools to accelerate innovation and growth. As these firms work to influence open source projects, governance practices sometimes shift from coordination among a small group of developers and firms to management by large communities of contributors and organizations, often with competing priorities.

Sustainable projects require sustainable communities. Adapting to a larger, more competitive open source landscape requires organizations to invest in community building. This demands a view of source-code availability that's inextricably connected to the social engagements of contributors and organizations in open source projects. Many organizations now consider open source community engagement as both a social and a technical—or "sociotechnical"—investment.

The CHAOSS project seeks to improve the transparency and actionability of open source projects and community health. CHAOSS has identified and defined metrics that meaningfully assess open source community health.

Some CHAOSS metrics provide indicators of a wide range of social factors now essential for understanding the shape of sustainable open source communities. Social metrics require more sophisticated collection and interpretation strategies, such as machine learning, as well as techniques like surveys and the specification of proven practices for attracting and retaining new contributors. Analyzing the social dimension of open source projects focuses on understanding the dynamics of human relations and communities.

The insights into open source community health below result from approximately 36 interviews with corporate open source participants. The names of interview subjects quoted here have been withheld to protect their privacy.

Community building

Corporate open source participants must recognize that community building is central to sustaining open source projects. Answers to such questions as "Is the community welcoming?" and "Whose voices are heard?" are important considerations when building and joining communities:

One thing we definitely look for before we're going to engage [in an open source project] is what's the vibe here? And that's often a very social thing. Do they seem to enforce the code of conduct? Are there people in this community that seem abusive … in various ways?

Community activity

Most project contributors recognize the growing significance of community as part of successful open source projects. They want active communities that continue to advance the project's goals in ways that support all members and reflect the variety of opinions found in community work. They want members who are pleasant to see and work with regularly. Frequent and positive engagement is becoming so important that a lack of activity or a proliferation of toxic interactions is a common reason people exit a community. In fact, it can even be more important than any technical support the community provides:

I worked with the [project] board for a while, and it is such an active one. Unfortunately, there's just a lot of derogatory terms used. The thing about open source, it's all usually text-based, [and] that can be harmful. We'll pull people away from contributing if they don't feel comfortable. I've seen people leave different communities based on how things were handled. I think that the [worst] I've seen is basically just derogatory terms, not necessarily based on race, religion, or gender, just somebody angrily lashing out based on the code they don't want to see or do want to see.

Diversity and inclusion

Equally important is how the community addresses diversity, equity, and inclusion (DEI). Potential contributors ask themselves questions such as, "How will I be treated?" and "How will I treat others?" People understand attention to DEI (or lack thereof) as a critical part of project risk and sustainability. Failure of a community to center DEI in the sociotechnical work of a project influences contributors' decision to join or leave an open source community, regardless of what a community may provide technically:

The moment you see any bullying, like racism, bigotry, or anything that looks like excluding others from the conversation—if that gets called out and dealt with, I mean, it's not great that it happened, but either the person took the feedback and improved their behavior, or they were told to leave. [Either is a good outcome.] But if you see that stuff's happening and no one's doing anything about it, and everybody is just sitting on their hands and waiting for somebody else to do something about it, that's problematic. I don't have time for that. That would cause me to leave.

Community culture

The importance of nurturing a sense of community in open source connects closely to contributor expectations of a welcoming environment for people from diverse backgrounds. In response to these considerations, open source projects must constantly reflect on how to signal that the community is healthy, build trust among community members, and lower barriers to community participation in the interest of their own sustainability.

The way an open source community responds to issues is a very strong indicator as to their balance between [being] receptive to feedback and not receptive to direct contamination. I think that a successful open source project is a balance between the confidence that your code can be scrutinized by others and the humility that random people you don't know can improve your code. That balance between confidence and humility is reflected in the way people respond to issues. So that's what I look for.

Healthy competition

The pressures to align corporate interests with project interests can result in oversteering. A business trying to exert a degree of control that undermines a sense of community is harmful to a project and its maintainers. Creating a healthy community requires a balance of corporate control through paid contributions alongside thoughtful community building.

[One large open source company, for example,] has resources assigned to the project that are being driven by their own internal teams, and their priorities are very much not in our control. Collaborating in that situation is not as attractive as finding a way to do our own thing.

Building sustainability means building community

A technical open source asset's significance and quality depend on a mutually respectful social system, and that is a new reality for most corporate open source participants. Effective corporate engagement with open source projects demands attention to a set of paradoxes. To be competitive, a firm needs to contribute to a set of open source projects they don't control, and competitors are working side by side in these projects. To create a communal environment where a project becomes and remains sustainable, participants must set aside their competitive instincts and foster trust. A rising tide of both social and technical concerns floats all boats.

Open source software's role in creating value for technology firms continues to grow because sharing the costs of creating and sustaining core infrastructure is not only attractive but arguably a requirement for doing business. Sustaining these critical technology assets demands such a high number of talented contributors that forming and nurturing communities around open source software is vital.

Healthy open source communities engender a sense of purpose and belonging for their contributors, so that people continue to want to participate or join. Such communities are made of real and diverse people, with their own interests, concerns, and lives. Open source contributors—whether individual or corporate—must build real communities with participants who are interested in each other as well as the project. That requires a thoughtful, attentive, and often retrospective focus on how we build and manage our open source communities.

We would like to thank Red Hat for its generous support of this work.

[ Explore why companies are choosing open source: Red Hat's State of Enterprise Open Source Report ]

New research unveils how corporate open source participation has brought renewed attention to community health.

Image by:

Opensource.com

Community management

Matt Germonprez is the Mutual of Omaha Associate Professor of Information Systems in the College of Information Science & Technology at the University of Nebraska at Omaha. He uses qualitative field studies to research corporate engagement with open communities and the dynamics of design in these engagements. His lines of research have been funded by numerous organizations including the National Science Foundation, the Alfred P. Sloan Foundation, and Mozilla. Matt is the co-founder of the Association for Information Systems SIGOPEN and the Linux Foundation Community Health Analytics OSS Project (CHAOSS). He has had work accepted at ISR, MISQ, JAIS, JIT, ISJ, I&O, CSCW, OpenSym, Group, HICSS, and ACM Interactions. Matt is an active open source community member, having presented design and development work at LinuxCon, the Open Source Summit North America, the Linux Foundation Open Compliance Summit, the Linux Foundation Collaboration Summit, and the Open Source Leadership Summit.

Elizabeth Barron is the Community Manager for CHAOSS and a longtime open source contributor and advocate with over 20 years of experience at companies like GitHub, Pivotal/VMware Tanzu, and Sourceforge. She is also an author, public speaker, event organizer, and award-winning nature photographer. She lives in Cincinnati, Ohio.

Kevin Lumbard is a doctoral candidate at the University of Nebraska at Omaha and a CHAOSS project maintainer. His research explores corporate engagement with open source and the design of open source critical digital infrastructure.

Brian Proffitt is Manager, Community Insights within Red Hat's Open Source Program Office, focusing on content generation, community metrics, and special projects. Brian's experience with community management includes knowledge of community onboarding, community health, and business alignment. Prior to joining Red Hat in 2014, he was a technology journalist with a focus on Linux and open source, and the author of 22 consumer technology books.

Scheduling tasks with the Linux cron command

Mon, 03/28/2022 - 15:00
Scheduling tasks with the Linux cron command Don Watkins Mon, 03/28/2022 - 03:00

Early in my Linux journey, I came to appreciate the numerous command-line utilities of the operating system and the way they streamlined regular tasks. For example, backing up applications on my Windows server frequently required expensive add-on software packages. By contrast, the tar command makes backing up Linux relatively easy, and it's powerful and reliable too.

When backing up our school district email system, however, I faced a different challenge. Backups couldn't occur during the workday or early evening because people were using the system. The backup had to occur after midnight, and it needed to be reliable. I was used to the Windows Task Scheduler, but what was I going to use on Linux? That's when I learned about cron.

Scheduling tasks on Linux with cron

Cron is a daemon used to execute scheduled commands automatically. Learning how to use cron required some reading and experimenting, but soon I was using cron to shut down our email server, back up the data in a compressed tar file, then restart the email service at 3AM.

The commands for cron jobs are stored in crontab files. The system-wide crontab lives at /etc/crontab, while each user has a personal crontab that the crontab command manages. Display the contents of your crontab file with $ crontab -l.

Edit the crontab file with $ crontab -e.

Some systems default to the Vi editor for cron editing. You can override this setting using environment variables:

$ EDITOR=nano crontab -e

This allows you to use the nano editor to edit your personal crontab (if you don't have one yet, one is created automatically for you).

Each crontab entry starts with five time-and-date fields, each denoted by an asterisk (meaning "any value") until you replace it with an integer. The fields are, in order: minute, hour, day of the month, month of the year, and finally, day of the week.

Comments are preceded by a hash. Cron ignores comments, so they're a great way to leave yourself notes about what a command does and why it's important.
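
Here is a small sketch of that field layout as it appears in a crontab, using a hypothetical backup script as the command:

# minute  hour   day-of-month  month   day-of-week   command
# (0-59)  (0-23) (1-31)        (1-12)  (0-6, Sun=0)
# Run a backup script at 2:30 AM every Friday:
30 2 * * 5 /usr/local/bin/backup.sh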

A sample cron job

Suppose you want to scan your home directory for viruses and malware with clamscan every week on Monday at 10AM. You also want to back up your home directory every week on Tuesday at 9AM. Using cron and crontab files ensures that your system maintenance occurs every week whether you remember to run those utilities or not.

Edit your crontab file to include the following, using your own username instead of "don" (my user name):

# Scan my home directory for viruses
0 10 * * 1 clamscan -ir /home/don
# Backup my home directory
0 9 * * 2 tar -zcf /var/backups/home.tgz /home/don

If you're using the nano editor, save your work with Ctrl+O to write the file out and Ctrl+X to exit the editor. After editing the file, use crontab -l to list the contents of your cron file to ensure that it has been properly saved.

You can create crontab entries for any recurring job your system requires, taking full advantage of the cron daemon.

Scheduling from the Linux command line

It's no secret that the hardest part of cron is coming up with the right values for those leading asterisks. There are websites, like crontab.guru, that dynamically translate cron time into human-readable translations, and Opensource.com has a cron cheat sheet you can download to help you keep it straight.

Additionally, most modern cron systems feature shortcuts to common values, including:

  • @hourly : Run once an hour (0 * * * *)
  • @daily : Run once a day (0 0 * * *)
  • @weekly : Run once a week (0 0 * * 0)
  • @monthly : Run once a month (0 0 1 * *)
  • @reboot : Run once after reboot

There are also alternatives to cron, including anacron, which runs periodic jobs even if the computer was off at the scheduled time, and the at command for one-off jobs.
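
For instance, a one-off job with at might look like this sketch, which reuses the backup command from earlier (the time specification is just an example):

$ echo "tar -zcf /var/backups/home.tgz /home/don" | at now + 1 hour
$ atq    # list jobs still waiting to run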

Cron is a useful task scheduling system, and it's as easy to use as editing text. Give it a try!

Try this way to conquer challenging scheduling problems right from the Linux command line.

Linux

Simplify Java persistence implementation with Kotlin on Quarkus

Mon, 03/28/2022 - 15:00
Simplify Java persistence implementation with Kotlin on Quarkus Daniel Oh Mon, 03/28/2022 - 03:00

For decades, developers have struggled with optimizing persistence layer implementation in terms of storing business data, retrieving relevant data quickly, and, most importantly, simplifying data transaction logic regardless of programming language.

Fortunately, this challenge led the Java ecosystem to standardize persistence through the Java Persistence API (JPA). Hibernate Object Relational Mapper (ORM) with Panache, for instance, is the standard framework for JPA implementation in the Java ecosystem.

Kotlin is a programming language that runs on the Java Virtual Machine (JVM) and interoperates with Java, so business applications can mix multiple languages, including in the persistence layer. Still, Java developers face a hurdle in catching up with Kotlin's syntax and the way the JPA APIs are used from Kotlin.

This article will explain how Quarkus makes it easier for developers to implement Kotlin applications through the Quarkus Hibernate ORM Panache Kotlin extension.

Create a new Hibernate ORM Kotlin project using Quarkus CLI

First, create a new Maven project using the Quarkus command-line tool (CLI). The following command creates the project and adds the Hibernate ORM Panache Kotlin and PostgreSQL JDBC extensions:

$ quarkus create app hibernate-kotlin-example \
  -x jdbc-postgresql,hibernate-orm-panache-kotlin

The output should look like this:

...
[SUCCESS] ✅ quarkus project has been successfully generated in:
--> /Users/danieloh/Downloads/demo/hibernate-kotlin-example
...

Create a new entity and repository

Hibernate ORM with Panache enables developers to handle entities with the active record pattern, with the following benefits:

  • Auto-generation of IDs
  • No need for getters/setters
  • Super useful static methods for access such as listAll() and find()
  • No need for custom queries for basic operations (e.g., Person.find("name", "daniel"))

Kotlin doesn't support the Hibernate ORM with Panache in the same way Java does. Instead, Quarkus allows developers to bring these capabilities into Kotlin using the companion object, as illustrated below:

@Entity(name = "Person")
class Person : PanacheEntity() {
    lateinit var name: String
}

Here is a simple example of how developers can implement PanacheRepository to access the entity:

@ApplicationScoped
class PersonRepository : PanacheRepository<Person> {
    fun findByName(name: String) = find("name", name).firstResult()
}

Super simple! Now I'll show you how to implement resources that expose the data through RESTful APIs.

Create a resource to handle RESTful APIs

Quarkus fully supports contexts and dependency injection (CDI), which allows developers to inject PersonRepository to access the data (e.g., findByName(name)) in the resources. For example:

@Inject
lateinit var personRepository: PersonRepository

@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/{name}")
fun greeting(@PathParam("name") name: String): String {
    return "Hello ${personRepository.findByName(name)?.name}"
}

Run and test the application

As always, run your Quarkus application using Developer Mode:

$ cd hibernate-kotlin-example
$ quarkus dev

The output should look like this:

...
INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. \
Live Coding activated.
INFO [io.quarkus] (Quarkus Main Thread) \
Installed features: [agroal, cdi, hibernate-orm, \
hibernate-orm-panache-kotlin, jdbc-postgresql, \
kotlin, narayana-jta, resteasy, smallrye-context-propagation, vertx]

--
Tests paused
Press [r] to resume testing, [o] Toggle test output, \
[:] for the terminal, [h] for more options>

Quarkus Dev Services stands up a relevant database container automatically when a container runtime (e.g., Docker or Podman) is available and you add a database extension. In this example, I already added the jdbc-postgresql extension, so a PostgreSQL container starts automatically when Quarkus dev mode begins. Find the solution in my GitHub repository.
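
If you want to confirm that Dev Services started the database, a quick check like the following works, assuming Docker is your container runtime (the exact image and container names vary by Quarkus version):

$ docker ps | grep -i postgres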

Access the RESTful API (/hello) to retrieve the data by the name parameter. Execute the following curl command line in your local terminal:

$ curl localhost:8080/hello/Daniel

The output should look like this:

Hello Daniel

Conclusion

This is a basic explanation of how Quarkus enables developers to simplify JPA implementation using Kotlin programming APIs for reactive Java applications. Developers also get a better developer experience, with features such as Dev Services and live coding, while they keep developing in Kotlin on Quarkus. For more information about Quarkus, visit the Quarkus web page.

This article demonstrates how Quarkus enables developers to simplify JPA implementation using Kotlin programming APIs for reactive Java applications.

Image by: Mapbox Uncharted ERG, CC-BY 3.0 US

Java Kubernetes Cloud

New book highlights open source tools and tips for personal cybersecurity

Sun, 03/27/2022 - 15:00
New book highlights open source tools and tips for personal cybersecurity Joshua Allen Holm Sun, 03/27/2022 - 03:00

The internet can be a dangerous place. Not a week goes by without a cyber attack taking place. Go H*ck Yourself: A Simple Introduction to Cyber Attacks and Defense by Bryson Payne shows you how many basic cyber attacks work, so you can learn to defend against them. Payne teaches how to perform a variety of hacks to show that they are easy to do. 

The book's eleven chapters begin with straightforward concepts, like using a browser's inspect tool to make a password field display the password, and gaining administrative access to a Windows or Mac computer using installation media. The third chapter explains how to use VirtualBox to create Kali Linux and Microsoft Windows virtual machines that will be used for the exercises in the following chapters.

Chapters four through ten dig deep into various hacks that can be used against you. Chapter four demonstrates what "Googling yourself" can reveal by entering your name into Google. A hacker could use this information against you. Chapter four also provides tips for writing advanced search queries to help you dig deeper with your searches. Chapter five deals with social engineering and teaches how to create phishing websites. It also shows you how emails are used for potential phishing attempts. Chapter six explains malware and viruses. It also shows how to write a simple virus and take control of a Windows computer. Chapter seven goes into stealing and cracking passwords. Chapter eight details web hacking, including Cross-Site Scripting attacks and SQL Injection attacks. Chapter nine touches on hacking Android mobile devices. Chapter ten concerns hacking automobiles. Each chapter is full of detailed, well-explained exercises that teach you what malicious hackers are trying to do and how to stop them from succeeding in their goals.

The book's final chapter, chapter 11, provides an overview of "ten things you can do right now to protect yourself online." These ten things reiterate the lessons taught throughout the book and are accepted "best practices" advice. Payne points out to the reader that they are a target. They need to be aware of social engineering attempts, and they should follow the advice presented throughout the book to protect themselves best. Chapters one through ten are fascinating and informative, but require some preexisting technical knowledge to get the most out of them. "Go H*ck Yourself" is beginner friendly, but not "I have no idea how a computer works at all" friendly. Chapter 11 is the chapter you can refer to for anyone in your life that needs help, even the relative who repeatedly asks to see if maybe this time that request for money from a seemingly friendly stranger is not a scam.

The book also has two appendixes. The first explains how to create a Windows 10 installation disc or flash drive. The second provides tips for troubleshooting VirtualBox. These two appendixes should help anyone with a modest background in computers figure out how to set up everything so they can perform the exercises in the various chapters.

In a world that has to contend with constant cyber threats, "Go H*ck Yourself" is a necessary read for anyone who spends any time on the internet. Payne's lessons will provide you with the tools needed to defend yourself and your friends and family from malicious hackers who are up to no good. Because the book is a technical book with hands-on exercises, it is not an ideal read for those who most need the lessons the book imparts. Anyone with a basic grasp of tech know-how should pick up this book, read it, apply its lessons in their own life, and share the knowledge they learned.

In a world that has to contend with constant cyber threats, this new book by author Bryson Payne is a necessary read for any open source technologist who spends time on the internet.

Security and privacy

Balancing transparency as an open source community manager

Fri, 03/25/2022 - 15:00
Balancing transparency as an open source community manager Rich Bowen Fri, 03/25/2022 - 03:00

Several weeks ago, my friend and colleague Kashyap Chamarthy posted an essay titled "What makes an effective open-source 'community gardener?'" By community gardener, he means what most of us traditionally call a community manager. I like his choice of terminology, though, as I've written before about how difficult it is even to define what a community manager does, let alone the right thing to call it.

The "gardener" metaphor is good because a community needs nurturing, weeding, watering, light, and so on. However, the implication that it can become overgrown with weeds without a gardener isn't particularly charitable to the community members. Community organizers, liaisons, and leaders all suffer from different problems, too, because the community does a lot of these functions on its own.

Names are hard.

Openness and secrets

Kashyap says in his article, "Don't let your insider advantage seep through into your public communication." Full-time community managers often have access to information before the upstream public community. It's just part of their position inside a sponsoring company. This situation is especially true if a particular company leads, controls, or overwhelmingly dominates projects. This is increasingly the case with large open source projects these days. Often, one is prohibited from sharing insider information with the upstream community for one reason or another, which can be an enormous source of stress.

Imagine being told by a community manager: "I know a thing that would make your life easier or, at least, help you plan your future better, but I'm not allowed to tell you, for reasons that I'm probably also not able or allowed to explain to you."

This is simply the nature of working for a company. Some things are secret.

As a (hopefully!) trusted member of the community, it's a difficult balance to strike and frequently gives the community the impression (perhaps justified) that I know something you don't, and am intentionally withholding that information. That is even more problematic once the information is finally revealed, along with the fact that I've known for some time.

On the flip side, if a message is made public before it is "polished," you end up with situations where you don't have all of the answers to the questions you know will be asked. That can make you look unprofessional, unprepared, and dismissive of the community's concerns.

This is further complicated by the community saying that you should have just had the entire conversation in public to start with, which certainly has merit. But, again, companies have secrets because they have shareholders, intellectual property, lawyers, trade secrets, etc. And there will always be things that are not spoken of "outside."

One of the Red Hat mantras is upstream first, which speaks not only of where to put code (developed in the upstream community first) but where to have conversations (on the public mailing list, forum, chat, etc.). The tension between wanting to do this (and the benefits that derive from that) and the need to keep things embargoed (for reasons of insider trading, security embargoes, and trade secrets) is a constant presence in all companies that deal with open source.

There are many ways to manage; there are fewer ways to garden

A community gardener helps projects and people who have come together for a common purpose flourish. The actions that purpose demands on a day-to-day basis vary depending upon what a community needs. Openness and transparency are required for an open community, although the degree to which one can be fully transparent varies from one company to another. That tension will always be there. Being aware of that tension, and carefully considering it in your external communications, is essential. There are no easy glib answers to what you can and should say, but, rather, always be aware that it's a choice, and make that choice mindfully. No matter what, be open and honest that you cannot communicate everything, and look to cultivate a trusting community with an atmosphere of honesty. Trust that your community accepts that you cannot answer some questions or divulge all the information you have, and you will form healthy and vibrant relationships.

Openness and transparency are required for an open community, although the degree to which one can be fully transparent varies from one company to another.

Community management

A guide to implementing DevSecOps

Thu, 03/24/2022 - 15:01
A guide to implementing DevSecOps Will Kelly Thu, 03/24/2022 - 03:01

DevSecOps adoption offers your enterprise improved security, compliance, and even competitive advantages as it faces new threat vectors, a new world of work, and demanding customers. It's only a matter of time before DevSecOps subsumes DevOps because it offers the same core practices but adds a security focus to each phase of the development lifecycle.

In this new eBook, I take a phased approach to DevSecOps transformation. While the eBook targets readers already familiar with DevOps practices, you can still use it to chart your course from a legacy software development life cycle (SDLC) straight to DevSecOps.

Getting to know DevSecOps

DevSecOps incorporates security in every stage of the cycle while preserving the best qualities of DevOps. It knocks down the silos between your development, security, and operations teams. Benefits of DevSecOps include:

  • Prevention of security incidents before they happen: By integrating DevSecOps within your CI/CD toolchain, you help your teams detect and resolve issues before they occur in production.
  • Faster response to security issues: DevSecOps increases your security focus through continuous assessments while giving you actionable data to make informed decisions about the security posture of apps in development and whether they are ready to enter production.
  • Accelerated feature velocity: DevSecOps teams have the data and tools to mitigate unforeseen risks better.
  • Lower security budget: DevSecOps enables streamlined resources, solutions, and processes, simplifying the development lifecycle.

This eBook breaks down the DevOps and DevSecOps transformation into a framework your enterprise can follow to integrate more security into CI/CD pipelines and the organizational culture.

Embracing the DevOps to DevSecOps transformation

Moving from DevOps to DevSecOps is a fundamental transformation for your entire organization. DevSecOps will change your culture as continuous feedback, team autonomy, and training promote a new way of working for your technical staff.

In fact, you should also account for non-coders such as your sales and marketing teams in your transformation, as DevSecOps provides stakeholders with even more data and reporting than you could offer them with DevOps. For example, a move to DevSecOps enables your salespeople to tell a powerful security and compliance story.

While you may have introduced automation through your DevOps journey, a DevSecOps transformation takes it up a notch. You'll need to bring your culture along with that change. The developers, cybersecurity specialists, and stakeholders will feel the changes from the increased automation that comes from the DevSecOps transformation.

This eBook also walks you through a DevSecOps maturity model that provides another way to chart your organization's journey. Like DevOps, DevSecOps brings a need for collaboration and iteration to continuously improve your tools and processes.

Start your DevSecOps transformation now

Get started on your DevOps to DevSecOps transformation with this new eBook. Face your DevSecOps shift with confidence as your organization's processes mature. In addition to this eBook, Opensource.com has published several informative articles about DevOps and DevSecOps practices that provide additional insights and learning.

Download now: A guide to implementing DevSecOps

This downloadable guide helps you chart a course through your organization's DevOps to DevSecOps transformation.

DevOps Security and privacy

Making the case for openness as the engine of human progress

Thu, 03/24/2022 - 15:00
Making the case for openness as the engine of human progress Brook Manville Thu, 03/24/2022 - 03:00

Opensource.com readers might feel tempted to shrug off Johan Norberg's recent book, Open: The Story of Human Progress, as just one more sermon preached to a choir of devoted believers. But while the sermon offers some familiar themes, this new work deserves serious new attention.

It's an ambitious manifesto, reaching for global scale, arguing that the future progress of the whole world now depends, existentially, on nations' and societies' embrace of open practices. The call is also particularly timely: 1990s-era optimism about burgeoning openness in Western societies is today ceding to a more pessimistic reality. Recent commentators seem to echo George Will, who said September 11, 2001 marked "the end of our holiday from history."

To which Norberg seems to implicitly answer: "And because of that, more than ever, the world now needs more 'open.'"

In this review, I want to offer only a brief outline for Norberg's argument about the importance of openness today. I don't aim to be comprehensive; instead, I wish to summarize only as much of the book as is necessary for explaining the questions I feel it raises—not only for me, but also, I suspect, for anyone interested in the ongoing, global conversation about "open" principles and practices today.

"The power of open"—throughout history and into the future

Norberg's argument is quietly polemical and unfolds slowly. But the winding road he paves allows him to argue repeatedly that adoption of greater "openness" by various communities or states through time—openness to ideas, innovations, and improvements (whether borrowed, traded, or imported through immigration)—best explains why history's "winners" went on to flourish. Similarly, his case discussions counter-argue that when these winners started to retreat from openness (in different ways), their progress slowed or halted.

The second half of the book ("Closed") draws from a range of landmark sociopsychological research, from which Norberg constructs an explanation of why humans, despite a penchant for learning from the good ideas of others, also backslide into competitive and adversarial tribal loyalty. We are, Norberg illustrates, maddeningly capable of both win-win collaboration with others but also zero-sum warfare and mutual destruction, often with those same "others." This central dilemma of human nature, he suggests, must somehow be resolved to establish more openness all around, even though managing openness is more difficult and often imprecise.

His final chapters offer various suggestions for tilting zero-sum advocates away from destructive and closed tribal thinking. Pushing for greater openness all the time, the author suggests, may have short term costs for those who dare, but will ultimately deliver longer-term progress for them—and indeed human civilization more broadly. If today's civilization fails in this, he repeatedly warns, we have only our collective selves to blame for the grinding decline that awaits.

It's always more complicated

This is a thought-provoking book, but embracing its pitch requires several leaps of faith.

The first part of Norberg's thesis—that the lessons of yesterday's civilizations demonstrate that all human progress has come from the energetic pursuit of "open"—launches on soaring wings over historical landscapes inevitably more complicated than Norberg suggests (Ron McFarland notes this, too, in his review of the book). A host of PhD dissertations could test his flurry of propositions about this pivotal point of world history. Each of his case examples might be (and through the years have been) plausibly interpreted with differing explanations.

To highlight just a few:

  • Norberg argues that the flourishing of ancient Greece can be linked to its invention of "open" rationality and science, coupled with inter-city state exchange and debate. But the Greek poleis were by no means all democracies or given to such values, and much of their history was defined by destructive (i.e., zero-sum) wars with one another. And even the Greek jewel of openness—5th C. BCE Athens, in the age of Pericles—was also famous for progressively tightening its citizenship during the same era, and imposing multiple barriers against growing immigration. It was less open than contemporary speeches might suggest.
  • Turning to Rome, Norberg stresses that the power and success of its empire stemmed from ongoing acquisition (via trade and conquest) and then application of the ideas of other peoples from across its domains. But he glosses over the notion that Rome's public-spirited culture and citizenship of Res Publica developed in opposition to its early enemies.
  • Next to Britain: Norberg links the country's 18th-19th C. industrial revolution to the adaptation of Dutch financial innovations and the creativity of immigrant Jews and Huguenots, but he also gives short shrift to the formative dynamic of its centuries-long wars with, and efforts to distinguish the nation from, France and its more statist approach to the economy.
  • And finally to America, which Norberg argues owes much to its early embrace of immigration, freedom of religion, and intercultural exchange, though he tends to downplay the country's very non-open approach to embedding slavery in its founding.

None of these objections should destroy belief in the value of open innovation for creating progress in a given civilization (or nation, or organization). But we should always remember that events are more complicated than our stories often attest.

An equation with two unknowns

So greater openness leads to greater progress, says Norberg. And yet a statement this simple becomes more complicated when readers reflect on precisely how the author defines both "open" and "progress." Students of algebra know the fundamental conundrum of solving an equation with two unknowns. Norberg's book rests upon a similar conundrum.

Let's unpack it. First, how exactly are we to understand progress?

Norberg might reasonably claim that the traditionally perceived "great" civilizations he explores—e.g., classical Greece (6th-4th C. BCE), the ancient Roman empire (31 BCE-476 AD), China's Song dynasty (960-1279 AD), Britain in the age of Industrial Revolution (18th-19th C.)—are self-evident, laudable embodiments of what "progress" is. But in also applauding the open innovation practices of the 13th C. Mongol ruler Genghis Khan, Norberg seems to imply that we should similarly admire the legacies of a leader known for his armies' terrifying conquests and passion for revenge (but in fairness, also for certain accomplishments in statecraft, Eurasian commerce, and religious tolerance). On the other side of the coin, Norberg paints the Catholic Church as a historical force of progress-destroying hierarchy and closure. But he also ignores its role in unifying much of Western civilization and preserving so much of its cultural legacy for subsequent generations.

We should acknowledge, of course, that all the civilizations Norberg features in Open had both inspiring and repugnant features, and even those judged on balance today as "morally bad" may have contributed certain things, born of cross-boundary innovation, to the "greater good" of later humankind. So in celebrating the contributions of "open," then, how should we define and judge the nature of progress per se? Norberg fails to firmly assert exactly what his "pursuit of open" actually aims for. In the simplest terms, it seems to be whatever (supposedly well-understood) value for the world can be seen in this or that legendary civilization of the past.

And Norberg's treatment of "open" is similarly ambiguous. But at least this concept becomes a little clearer as the book unfolds.

Norberg never offers a concise, summary definition of his book's eponymous concept; instead, he plots plenty of dots that readers must connect into a general outline of what "open" means to him. Overall, by my own reading of his story, "open" signifies communities, organizations, or societies that are:

  • inviting of new ideas from others, gathered or created as a result of trade, exchange, cross-boundary collaboration or arrival and integration of newcomers, and 
  • sustained by tolerance of diversity and dissent, inclusiveness, free-flowing debate, large (or at least selected) degrees of individual liberty, and avoidance (if not prohibition against) tribal enmity and destructive rivalry based on group identities or fear-inducing hierarchies of power

For Norberg, an all-important concept captured in a single word sits atop a host of interconnected conditions and attributes.

What gets in the way

Norberg's acute insights into individual and group behaviors make the second part of the book more distinctive than the first.

The author outlines the dilemma of human collaboration, the successes of which can so quickly turn into suspicions of others, and the failures of which can create fear and enmity. He demonstrates how our competitiveness and zeal for affiliation encourages us to continuously divide the world into "us" and "them," even when "they" have good ideas that we freely borrow and benefit from. Norberg similarly explains how our laudable desire to win too often forces us to push for zero-sum victory, when objective assessment regularly shows that win-win partnerships with would-be opponents deliver more value for all. 

In another illuminating section, the author explains why humans tend to falsely romanticize bygone days—because we conveniently minimize past problems and exaggerate new looming challenges, thus steering us away from future opportunities and new sources of potential innovation. Comparable research also shows that, in times of threat or instability, a powerful part of our brain starts to crave the security of control—and then we trade personal liberty and respect for others for the controlling and often abusive protection of powerful hierarchical leaders.

All such impulses make us less trusting of—and less willing to invest or take risks in—"open."

Closing time

Regrettably, Norberg's shrewd analysis of why humans so easily abandon opening up doesn't culminate with many concrete suggestions for restraining or converting our closed-leaning impulses and behaviors. 

The ideas and principles he advances are all reasonable (some research-based, others reflective of the author's personal experiences), but they only hint at any kind of scalable institutional transformation. Here are some highlights:

  • Norberg vaguely asserts that societies or other entities aspiring to enduring openness must build cross-cutting identities to break down tribal enmities; encourage their members' empathy for "others" in "out-groups" through constructive use of literature, art, and mass communications; and expand trade to build ideas of "mutual usefulness" among nations, instead of war (which motivated formation of the European Union after World War II).
  • Norberg similarly attacks the costs of "zero-sum" economics and the thinking underpinning it, describing how and why (over time) zero-sum policies have created more closed societies. But here again, he offers few concrete suggestions for scalable prevention.
  • Norberg critiques the (misguided, in his view) embrace of nostalgia for bygone better times (which were really not better, as he shows), calling instead for new systems that increase people's awareness of future threats (e.g., global warming) and provide incentives for populations to volunteer their best ideas for how to deal with them.

So in the end, while Norberg's historically informed vision for a more open future is bold, his practical suggestions for bringing about that future and translating it to today's world are decidedly less so. Open makes lively reading and is rich with insights about the human foibles and behaviors that so often hinder human progress, even if it disappoints when it leaves so much unexplored. Anyone working through its pages will be forced to think more deeply about what is surely a major—even if not the only—explanation for the progress and success of nations and civilizations.

And it will also push every reader to ponder what can be done today, to promote more openness towards such desirable ends. In the second part of this review, I will explore some of the future-facing questions it raised for me.

Is "open" the future of human progress, as this recent book argues? Maybe. To know for sure, we'll need to clearly define it—and the purposes it serves.

The Open Organization

New Trailblazers Fellowships power open hardware in academia

Wed, 03/23/2022 - 15:00
New Trailblazers Fellowships power open hardware in academia Joshua Pearce Wed, 03/23/2022 - 03:00

Witness the rise of open hardware! Last year, I wrote about how now is the time to start seriously thinking about a career in open source. Next, the UN backed open source, and earlier this year, the U.S. National Science Foundation began major funding for open source platform development. Now the Open Source Hardware Association (OSHWA) has teamed up with the Sloan Foundation to fund Open Hardware Trailblazers Fellowships.

These Fellowships last one year and pay $50,000 or $100,000 to individuals to tackle some of the latest issues involved in integrating open hardware deeply into academia. The program aims to connect a peer cohort of academic leaders pushing open hardware forward and to create a library of resources representing best practices for open source hardware in academia. My own work has shown that academics are willing to embrace open source with open arms. We surveyed American academics and found a supermajority (86.7%) of faculty respondents indicated a willingness to accept open source-endowed professorships.

Open hardware is already going strong in academia, with three journals dedicated to publishing on the topic: HardwareX, the Journal of Open Hardware, and The Journal of Open Engineering. In addition, many other publications like Designs and PLOS One frequently publish open hardware content. The Gathering for Open Science Hardware (GOSH) continues to bring like minds together, and open hardware's use in academia has become an area of study in and of itself. Notably, open hardware dramatically undercuts proprietary offerings on cost: as a general rule of thumb, subtract 90% from the price tag. There is also a lot more money in the system to fund open hardware in academia, as funders have noticed returns on investment (ROI) of 100 to 1,000 percent or more for open source development after only a few months.


Sadly, many academic institutions are mired in the intellectual property dark ages: they are not yet aware of open hardware techniques and do not actively support their adoption. OSHWA wants to change that and hopes its Fellowship program will pave the way for open hardware in academia by setting up a network of advocates.

The Open Hardware Trailblazers Network is designed to:

  1. Recognize existing leaders
  2. Give those leaders tools to expand their work
  3. Encourage the leaders' institutions to recognize and value their work
  4. Identify and accelerate the development and dissemination of information about developing open hardware within the context of universities
  5. Leverage diversity, equity, inclusion, and justice initiatives to broaden the community of open hardware practitioners at universities
  6. Pair leaders with industry mentors to share knowledge when applicable

All Fellows will attend regular virtual meetings with their Fellowship cohort, as well as two in-person meetings with travel costs paid by OSHWA. Fellows will be introduced to mentors or collaborators from industry with relevant expertise. The Fellowship will build a beneficial network in which Fellows can share the work being done, ask questions, and gain feedback from each other.

Example questions that may be answered as part of the Fellowship include, but are not restricted to:

  • What documentation practices help academics share and disseminate their open hardware projects?
  • What makes hardware more replicable in academia, and what is missing from current documentation standards?
  • How do various fields of study approach problems such as licensing around open hardware in their departments, and what are common threads seen at other academic institutions?
  • What is the business case for open hardware in academia, and how has open hardware developed in academia thus far?

That last one is particularly interesting to me, so drop me a line if you would like to collaborate on this in or outside of the Fellowship.

If you are in the U.S. and interested in one of the eight Fellowships, check out OSHWA's Request for Proposals!

Good luck!


How our community uses Zulip for its open source chat tool

Wed, 03/23/2022 - 15:00
Tim Erickson | Wed, 03/23/2022 - 03:00

When Backdrop CMS needed to upgrade our real-time chat platform, we had to balance ease of use with our preference for open source. Those criteria led us to Zulip, an open source chat and collaboration platform with many features we were looking for.

In this article, I'll explain our selection process and how we've implemented and adapted this tool across our organization. Maybe it's the right tool for your organization as well.

How our community outgrew our initial chat solution

Backdrop CMS is a fork of the Drupal project targeting small- to medium-sized businesses, non-profits, educational institutions, and other organizations that need a comprehensive website at a reasonable price.

For the first five or so years of the project, we used Gitter as our real-time chat platform. Gitter served us well during this time and had the following advantages:

  • It's open source.
  • It's easy to use.
  • You can use a GitLab or GitHub account to log in.
  • It is transparent and viewable without an account.

The Backdrop CMS project is now a little over seven years old. As users became more familiar with advanced chat tools like Slack, Gitter seemed increasingly frustrating for some of our most regular participants. Gitter did not provide channels or threads for organizing conversations, and the mobile app was very glitchy. We began to look for alternatives.

As the Backdrop CMS community began to research options, we struggled to balance our preference for open source tools against a competing desire to reduce barriers to entry, especially for nontechnical users. Slack's familiarity gave it some appeal: it was a very popular platform, and most of us were already using it for other projects or jobs, which made the barriers to entry quite low. However, our tight budget and Slack's restrictions on free accounts were serious strikes against it.

Why we chose Zulip

We looked closely at several open source alternatives to Slack and pretty quickly settled on Zulip as our leading contender. We didn't have the budget for a paid hosting plan and were reluctant to take on the overhead of maintaining our own Zulip server, so the availability of free hosting for open source projects pushed us further in this direction. A current look at the Zulip website suggests that sponsorships may be available upon request for "worthy organizations" beyond other open source projects.

The most innovative feature in Zulip is the ability to create topics (or tags) within a stream—the Zulip equivalent of a channel. This makes it possible to view incoming messages in a single chronological stream, as we were used to in Gitter. However, users can also tag each message with a topic and filter them in order to view all the individual messages belonging to one topic independently.

For some of us, these topics were a powerful feature. Others found them confusing and difficult to use. While putting your message into a specific topic is optional, some new users felt pressure to use topics. The interface for finding or selecting topics does take some time to get used to.

 

In the early days of using Zulip, these topics were very informal. Now we find that they are becoming increasingly important as an organizational tool. Still, they remain the most significant frustration for new and even some experienced users.

Both iPhone and Android apps are available, and both seem to be working quite reliably for our community members.

How we've evolved with Zulip

As time goes on, we're seeing more and more of our support conversations move from our public online forum to the Zulip chat channel, which worries us, because information in the chat channel is far less accessible and public for those who are not using Zulip. This problem might be a side effect of how successful Zulip has become for real-time conversations.

To provide some context for the size and level of activity in our Zulip community, we have just over 240 accounts. On a recent Friday, over 13 different people posted a total of 75 messages. Some days are busier than this, and some are slower, but days like this feel pretty typical of late. Depending on the time of day, support questions in Zulip usually get some level of response within an hour, or within five to six hours on a slow day.

Until now, we've tried to keep most of the conversation in a single stream, with a few exceptions. The growth or success of any online community can be hampered by creating too many channels too early, none of which have sufficient activity. We now have a German language stream, an "off-topic" stream (our version of the water cooler), and specialty streams for events and infrastructure. We also have private streams for our leadership and one for security issues. We may be reaching a point where additional streams would be useful.

We know that not everyone in our community likes Zulip, but complaints are few. Of course, we don't know what we don't know—some people might have tried Zulip, grown frustrated, and never come back to tell us about it. On the whole, those of us most active in the community are happy with Zulip and would recommend it to other open source projects.

The Backdrop CMS community's search for a new collaboration and chat platform led to this open source tool.


Get started with reactive programming with Kotlin on Quarkus

Tue, 03/22/2022 - 15:00
Daniel Oh | Tue, 03/22/2022 - 03:00

Moving to the cloud with event-driven architecture raises big concerns for enterprises that use multiple programming languages, such as Java, C#, JavaScript, Scala, and Groovy, to implement business requirements. Because enterprises need to redesign multiple architectures separately for container deployment and put more effort into optimizing production on the cloud, developers often must learn a new programming language that matches the production environment. For example, Java developers may have to switch their skill sets to Node.js to develop lightweight, event-driven front-end applications.

Kotlin addresses these issues and targets developers who deploy business applications written in multiple programming languages on top of the Java Virtual Machine (JVM), supporting both imperative and reactive approaches. However, catching up with Kotlin's new syntax and APIs still takes some effort, especially for Java developers. Luckily, the Quarkus Kotlin extension makes it easier for developers to implement Kotlin applications.

Create a new Kotlin project using Quarkus CLI

As an example, I'll create a new Maven project using the Quarkus command-line interface (CLI). The following command generates the project and adds the Quarkus extensions for RESTEasy Reactive, Jackson, and Kotlin:

$ quarkus create app reactive-kotlin-example -x kotlin,resteasy-reactive-jackson

The output should look like this:

...
[SUCCESS] ✅  quarkus project has been successfully generated in:
--> /Users/danieloh/Downloads/demo/reactive-kotlin-example
...

Next, I'll use Quarkus Dev Mode, which enables live coding and speeds up the inner development loop. It simplifies the workflow from writing code to accessing the endpoint or refreshing a web browser, with no recompile-and-redeploy cycle. Run the following commands:

$ cd reactive-kotlin-example
$ quarkus dev

The output should look like this:

...
INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kotlin, resteasy-reactive, resteasy-reactive-jackson, smallrye-context-propagation, vertx]

--
Tests paused
Press [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>

Make Kotlin behave the Quarkus way

Kotlin provides coroutines to run blocks of code concurrently, similar to threads in Java. A coroutine can be suspended in one thread and then resumed in another. Quarkus enables developers to compose suspending functions.

Open the ReactiveGreetingResource.kt file in the src/main/kotlin/org/acme directory and replace the hello() method with the following code:

@GET
@Produces(MediaType.TEXT_PLAIN)
suspend fun hello() = "Hello RESTEasy Reactive by Kotlin Suspend function"

Note: This resource file is generated automatically when you create a new Kotlin project using the Quarkus CLI.

To check whether the new suspend function works in the Quarkus development environment, access the RESTful API (/hello). Execute the following curl command in your local terminal, or access the endpoint URL using a web browser:

$ curl localhost:8080/hello

The output should look like this:

Hello RESTEasy Reactive by Kotlin Suspend function

Great! It works. Now I'll enable Java's Contexts and Dependency Injection (CDI) capability in the Kotlin application.

Enable CDI injection in the Kotlin application

Kotlin handles reflection, annotations, and property initialization differently than Java does, which can cause issues for developers (e.g., UninitializedPropertyAccessException). Before enabling CDI injection in the code, create a new GreetingService.kt service file in the src/main/kotlin/org/acme directory:

package org.acme

// CDI annotation import: Quarkus 2.x uses the javax namespace; newer Quarkus releases
// may expect jakarta.enterprise.context.ApplicationScoped instead.
import javax.enterprise.context.ApplicationScoped

@ApplicationScoped
class GreetingService {

    fun greeting(name: String): String {
        return "Welcome Kotlin in Quarkus, $name"
    }
}

Go back to the ReactiveGreetingResource.kt file. Add the following code, which uses the @Inject annotation together with @field: Default so that CDI injection plays well with Kotlin's annotation and reflection handling:

@Inject
@field: Default
lateinit var service: GreetingService

@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/{name}")
fun greeting(@PathParam("name") name: String): String {
    return service.greeting(name)
}

To check whether CDI injection works, access the new endpoint (/hello/{name}). Execute the following curl command in the local terminal, or access the endpoint URL using a web browser:

$ curl localhost:8080/hello/Daniel

The output should look like this:

Welcome Kotlin in Quarkus, Daniel

Wrap up

You learned how Quarkus enables developers to keep using Kotlin programming APIs for reactive Java applications. Developers benefit from features such as Dev Services and live coding, and they can also improve performance for cloud deployments by compiling the application to a native executable. In another article, I'll show how to develop data transaction features using Kotlin with Quarkus.
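
If you want to try that native executable, the build is typically triggered with one of the commands below. Treat this as a minimal sketch rather than a full walkthrough: it assumes a local GraalVM or Mandrel installation, and the exact flags can vary by Quarkus version, so confirm them with quarkus build --help or the Quarkus native build guide:

# Native build via the Quarkus CLI (assumes GraalVM or Mandrel is installed locally)
$ quarkus build --native

# Equivalent build via the Maven wrapper generated with the project
$ ./mvnw package -Dnative

Either command should leave a *-runner binary under the target directory, and that fast-starting, low-memory binary is where the cloud deployment gains come from.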


My favorite Linux top command options

Mon, 03/21/2022 - 15:00
Don Watkins | Mon, 03/21/2022 - 03:00

When I am checking out Linux systems (or even troubleshooting computers running other operating systems), I frequently use the top command to check out the system's RAM and CPU utilization. It provides me with information to assess the computer's overall health. I learned about the top command early in my Linux journey and have relied on it to give me a quick overview of what is happening on servers or other Linux systems, including Raspberry Pi. According to its man page, the top program provides a dynamic real-time view of a running system. It can display system summary information as well as a list of processes or threads currently being managed by the Linux kernel.

A quick overview is often all I need to determine what is going on with the system in question. But there is so much more to the top command than meets the eye. Specific features of your top command may vary depending on whose version (procps-ng, Busybox, BSD) you run, so consult the man page for details.

To launch top, type it into your terminal:

$ top

Running processes are displayed below the table heading on the top screen, and system statistics are shown above it.

top - 05:31:09 up 55 min,  3 users,  load average: 0.54, 0.38, 0.46
Tasks: 469 total,   1 running, 468 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.0 us,  0.4 sy,  0.0 ni, 98.6 id,  0.1 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  32116.1 total,  20256.5 free,   6376.3 used,   5483.3 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.  25111.4 avail Mem

 PID USER  PR NI   VIRT    RES   SHR S %CPU %MEM    TIME+ COMMAND                                                
2566 don   20  0  11.9g 701300 78848 S  3.3  2.1  2:03.80 firefox-bin
1606 don   20  0  24.2g  88084  4512 S  2.0  0.3  0:39.59 elisa
1989 don   20  0 894236 201580 23536 S  2.0  0.6  0:46.12 stopgo-java
5483 don   20  0  24.5g 239200 20868 S  1.3  0.7  0:26.54 Isolated Web Co
5726 don   20  0 977252 228012 44472 S  1.3  0.7  0:41.25 pulseaudio

Press the Z key to change the color of the output. I find this makes the output a little easier on the eyes.

Press the 1 key to see statistics for each individual CPU core on the system; press 1 again to toggle back to the combined view.

You can display memory usage graphically by invoking the top command and then pressing the m key.

Useful top options

If you're looking only for the processes started by a specific user, you can get that information with the -u option:

$ top -u 'username'

To filter out idle processes and see only the active ones, use the -i option:

$ top -i

You can set the update interval to an arbitrary value in seconds. The default value is three seconds. Change it to five like this:

$ top -d 5

You can also run top on a timer. For instance, the following command sets the number of iterations to two and then exits:

$ top -n 2

Locate a process with top

Press Shift+L to locate a process by name. This creates a prompt just above the bold table header line. Type in the name of the process you're looking for and then press Enter or Return to see the instances of that process highlighted in the newly sorted process list.

Stopping a process with top

You can stop or "kill" a running process with top, too. First, find the process you want to stop using either Shift+L or pgrep. Next, press the k key and enter the process ID you want to stop. The default value is whatever is at the top of the list, so be sure to enter the PID you want to stop before pressing Enter, or you may stop a process you didn't intend to.
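
If you would rather grab the PID from another terminal instead of searching inside top, pgrep works well for that. Here is a small illustration; firefox is just a placeholder pattern, and the PID shown mirrors the sample top output above, so your numbers will differ:

$ pgrep -l firefox
2566 firefox-bin

You can then switch back to top, press k, and enter that PID (or pass it straight to the kill command).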

Top top

There are many variations of the top command, including htop, atop, btop, and ttop. There are specialized top commands, too, like powertop for power usage and ntop for networks. What's your favorite top?


Upstream first or the path lifting property of a covering space

Mon, 03/21/2022 - 15:00
Aleksandra Fedorova | Mon, 03/21/2022 - 03:00

Don't be scared; when you reach the end of this article, you will understand its title.

Understand spaces

Geometry and Topology is an area of mathematics that deals with spaces, usually topological, and various additional structures you can define on them.

How do you turn a set of elements into a list? "You use the list() function!" say my Pythonic friends. But essentially, you add the idea of an order. You take the set and explain which point in that set is next. Similarly, spaces are sets of elements with these additional concepts:

  • For a topological space, add the idea of a neighborhood. You explain how to tell that two points are close.
  • For a metric space, add the idea of a distance. You explain how to measure the distance between any two points of the set.

There are many types of spaces in mathematics, from the well-known Euclidean space to the lesser-known but no less common differentiable manifold, all the way to exotic structures invented moments ago. And the research in this area circles around finding relationships between different structures and determining whether they define the same basic object.

For example, it is quite easy to prove that a metric space is always a topological space. But it is a bit harder to show that there are topological spaces that cannot be expressed in terms of distance.

However, for the casual weekend topologists out there, I recommend two-dimensional surfaces, for example, the good old sphere, torus, or a Mobius strip. Not as square and flat as the Euclidean plane, but still manageable. And the classification theory of two-dimensional compact manifolds is a lot of fun.

Covering space

Take a spring coil and look at it from the top. You will see a circle.

 

Expressing this fact in mathematical language, you would say that the coil itself is the total space; on its own, it is just a one-dimensional line. The picture is made up of several components:

  • The circle you see is a base space.
  • The line of sight defines the covering map.
  • If you had a laser sight, the points where the laser would cut the coil would be called a fiber.


And altogether, the total space, base space, and covering map define the covering space: a covering of the circle by a line with a discrete fiber.
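
For anyone who wants the textbook formulation behind the coil picture, the standard definition reads as follows (the rest of the article happily continues without this level of formality):

\[
p \colon E \to B \ \text{is a covering map if every point } b \in B \ \text{has an open neighborhood } U \ \text{with } p^{-1}(U) = \bigsqcup_{i \in I} V_i,
\]
\[
\text{where each } V_i \subseteq E \ \text{is open and } p|_{V_i} \colon V_i \to U \ \text{is a homeomorphism.}
\]

Here E is the total space, B is the base space, and the preimage of a point b is the fiber over b (the points where the laser sight cuts the coil).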

Homework: Can a circle cover a line?

The critical quality of the covering space, which makes it different from just any projection, is that the neighborhood of each fiber point in the total space works very much the same as the neighborhood of its projection in the base space. This allows you to reason about specific properties of a total space using the pre-existing knowledge of a base space and vice versa.

And mathematicians just love to move between spaces in this way. Once you hit a certain roadblock with the space you are currently researching, you craft a clever mapping of the space onto familiar grounds. You prove something there and then transfer the result back into the original space, where it can lead to new and exciting breakthroughs.

Path lifting property

What does all this have to do with upstream first? you may ask. (We are still in the early stages of developing a proper mathematical apparatus for this theory. Feel free to add your suggestions and corrections in the comments section below.)

Look at how code is delivered to an enterprise-level Linux distribution, for example, CentOS Stream. There is an open source project and community that develops a specific version of a piece of software, for example, Firefox. We call such a project upstream. Once the upstream project releases a Firefox version, it gets packaged for Fedora. And then someday, the new CentOS Stream version is bootstrapped using the content of the Fedora package, which contains a specific version of Firefox from the upstream project.

When the upstream project releases a critical update of Firefox, the update is packaged and released in Fedora. But it is also packaged and released via CentOS Stream.

The FOSS space

Consider the set of all patches to all upstream projects, Fedora, and CentOS Stream. Obviously*, this is a topological space, where the Git history defines the neighborhood of a patch.

(*) Sometimes mathematicians use the word obviously to hide the fact that they can not really explain the concept in detail.

This space is not easy to grasp, and while you can connect some of the dots with a path, its global properties are yet to be explored.

The covering map

Take any commit in the FOSS space and map it to the commit in CentOS Stream that implements the same functionality or fixes the same issue. For a regular commit in CentOS Stream, the map is trivial. For any upstream change, or a change in Fedora, the map points to its downstream version. Therefore, the FOSS space can be represented as a covering space with CentOS Stream as its base.

Lifting a path

The path lifting property tells you that for every path in the base (a change in CentOS Stream), there should be a lifted path in the total space (a change in Fedora and in the upstream project that maps onto it). In other words, the lifted path represents the same bugfix.
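
For completeness, here is the formal statement being borrowed (with the usual caveat that the FOSS-space reading of it is an informal analogy, not a theorem about Git):

\[
\text{Given a covering map } p \colon E \to B, \ \text{a path } \gamma \colon [0,1] \to B, \ \text{and a point } e_0 \in p^{-1}(\gamma(0)),
\]
\[
\text{there is a unique lift } \tilde{\gamma} \colon [0,1] \to E \ \text{with } p \circ \tilde{\gamma} = \gamma \ \text{and } \tilde{\gamma}(0) = e_0.
\]

In the analogy, the path in the base is the fix landing in CentOS Stream, and the lift is that same fix traced back up through Fedora and the upstream project.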

And the upstream first principle tells you to build that path.

Wrap up

Applying mathematical concepts to FOSS deployments provides new and interesting ways of understanding the interrelated nature of open source software. I hope this has been an insightful and interesting foray into space and the upstream first principle.
