opensource.com


How collaborative commons and open organization principles align

Thu, 05/12/2022 - 15:00
Ron McFarland

I have read Jeremy Rifkin's book The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism, which has a strong connection to open organization principles, particularly community building. Rifkin also writes about the future of green energy generation and energy use in logistics. This is the first of three articles in this series. In this article, I discuss collaborative commons. In the next, I'll look at their impact on energy production and supply. In the final article, I'll examine other parts of the economy, such as logistics.

Rifkin believes that the capitalist economy is slowly passing and that the collaborative commons is growing in importance in the global economy, resulting in an economy that is part capitalist market and part collaborative commons (like Open Organization communities). Within these collaborative commons are the "social impact-focused organizations" that Laura Hilliger and Heather Leson have written about. Rifkin thinks these commons are finding synergies where they can add value to one another while benefiting themselves. At other times, they are deeply adversarial, each attempting to absorb or replace the other.

Rifkin feels that the top-down, centralized capitalist system that has organized the day-to-day commercial, social, and political life of society for more than ten generations has peaked and begun its slow decline. Capitalist systems will remain part of the social order for at least the next half century, but the collaborative commons will ascend and play a major role around most of the globe by 2050.

A changing supply environment

Competition improves productivity and drives down prices and costs to the point where marginal costs are "near zero." After the initial fixed costs (equipment, technology, and other start-up expenses) are covered, producing each additional unit costs almost nothing, which makes the product nearly free to supply. I have always called marginal costs variable costs. I looked both terms up, and they are very similar; the calculations differ only slightly, and the impact is the same.

In products that achieve near zero marginal costs, profits (the lifeblood of capitalism) dry up, at least if you consider profit the only motivation for supplying the product. In a market-exchange economy, profits come from the gap between cost (variable and fixed) and selling price. Without that gap, there is no financial market. Industries like publishing, communications, camera film, and entertainment have already seen that gap disappear.

In industries with very little gap between costs and selling price, the collaborative network (commons, community, association, or cooperative) comes to life. These networks serve their communities for reasons beyond profit, such as offering value and solving local problems. It is never 100% of one and 0% of the other, but in these collaborative networks a greater share of society gives rather than receives and profits. I think the same is true of Laura Hilliger's and Heather Leson's social impact-focused organizations mentioned above.

Zero marginal cost is impacting many for-profit industries, particularly renewable energy, information gathering and computing power, 3D printing, manufacturing, online higher education, and money transfers. Consumers are becoming "prosumers": producing and consuming what they need, and sharing the rest.

Product by product, industry by industry, service by service, up-front costs (fixed initial costs and investment) are still high, but they are coming down so much that individuals, collaborative commons, communities, and cooperatives can invest, not just large corporations or governments.

From this point on, marginal cost reduction is entering physical goods and services, not just the information economy. There will be more giveaway items that draw people toward other items that can be purchased.

As society moves closer to a near zero marginal cost society, capitalism will be less dominant than today. Rifkin says we will move to a society of abundance over scarcity, a society where most things can be freely shared without concern for getting a return on investment for supplying the goods.

Changing economic paradigm

The assumptions about the best way to supply goods and services have to change if the marginal cost drops to near zero. Demand is still there, but the cost of meeting it is near zero, and the supply far exceeds the quantity needed or demanded. The mindset should shift away from profiting from the supply of an item (market benefit) and toward the satisfaction of providing it (social benefit). This is aligned with the social impact-focused organizations that Laura Hilliger and Heather Leson wrote about.

The capitalist model is under siege on two fronts:

  1. Interdisciplinary scholarship: Fields like ecological science, chemistry, biology, engineering, architecture, urban planning, and information technology are adding new concerns to business models, because many of those concerns fall outside the basic equipment and labor cost model. Other environmental factors are coming into play.

  2. New information technology platforms: These are weakening centralized control of major heavy industries. The coming together of the communication internet with the fledgling energy internet (producing, sharing, consuming) and the logistics internet (moving, storing, sharing) in a seamless 21st-century intelligent infrastructure, the Internet of Things (IoT), is giving rise to a new industrial revolution. An economy based on scarcity is slowly giving way to an economy of abundance.

According to Rifkin, the IoT will connect everything with everyone in an integrated global network. People, machines, natural resources, production lines, logistics networks, consumption habits, recycling flows, and waste analysis will all be linked by sensors, cameras, monitors, robots, and software with advanced analytics to make determinations. This will drive many items down to near zero marginal cost. Researchers are looking at this now, such as the Internet of Things European Research Cluster, and open access journals like the "Discover" series publish IoT research from across all relevant fields for researchers, academics, students, and engineers.

Smart cities install structural health sensors, noise pollution sensors, parking space availability sensors, and sensors in garbage cans to optimize waste collection. There will be sensors in vehicles to gather information that reduces travel risks and insurance rates, sensors in forests to determine the chance of fire, sensors in farm soil, sensors on animals to track migration trails, sensors in rivers to measure water quality, sensors on produce to track whereabouts and sniff out spoilage, sensors in humans to monitor bodily functions (heart rate, body temperature, skin coloration), and security systems to reduce crime. Many companies are developing these systems, like General Electric's "Industrial Internet", Cisco's "Internet of Everything", IBM's "Smart Planet", and Siemens' "Sustainable Cities".

All these companies are connecting neighborhoods, cities, regions, and continents in what is called "a global neural network," designed to be open to all, distributed, and collaborative, allowing anyone, anywhere, to tap into Big Data.

Rifkin writes that these systems will marshal resources, production systems, distribution systems, and recycling of waste. Without communication, economic activities cannot be managed. Without energy, information can't be generated and transportation can't be powered. Without logistics, economic activity can't be moved across a supply chain.

The commons existed before capitalist markets or representative government. The contemporary commons are where billions of people engage in the deeply social aspects of life, like charities, religious bodies, arts and cultural groups, educational foundations (schools), amateur sports clubs, producers and consumer cooperatives, credit unions, health-care organizations (hospitals), crowdfunding communities, advocacy groups, and condominium associations.

Notice that these are all community based and embody many open organization community principles. In all these organizations, members are partly owners, managers, workers, and customers (users). There are no silos between them, and their goals are more aligned. The needs of the users carry the most weight, because serving them is the community's greatest purpose.

Up until now, social commons have been considered the third sector, behind markets and governments. But as time goes on, Rifkin thinks they may grow in importance, as the required capital investments come down to a level that local communities can handle.

While capitalist markets are based mainly on self-interest and driven by material gains, the social commons are motivated by collaborative interests and driven by a deep desire to connect with others and share (open-source, innovation, transparency, and community).

Rifkin writes that the IoT is the technical match for the emergence of the collaborative commons, as it is configured to be distributed, peer-to-peer in nature in order to facilitate collaboration, universal access and sharing, inclusion, and the search for synergies. It is moving from sales markets to social networks, from things owned to things utilized, from individual interests to collaborative interests, and from dreams of going from rags to riches to dreams of a sustainable quality life for all.

GDP and social value measurements

The value created by sharing in communities does not show up in GDP, because it is not measured economically. Therefore, new measurements are required that include educational growth, healthcare, infant mortality, life expectancy, environmental stewardship, human rights, democratic participation, volunteerism, leisure time, poverty, and the equitable distribution of wealth.

New kind of incentives

Rifkin thinks that the democratization of innovation and creativity on the emerging collaborative commons is spawning a new kind of incentive, based less on the expectation of financial reward and more on the desire to advance the social well-being of humanity. The collaborative effort will result in expanded human participation and creativity across society and flatten the way we organize institutions (like social impact-focused organizations).

Energy and social impact-focused organizations

I'll talk more about this in the second part of this series, but fossil fuels are found only in certain places, require centralized management to move them, and are very capital intensive, all of which favors top-down command and control. Distributed renewable energies, by contrast, are leading to local empowerment through the development of collaborative commons. These laterally scaled communities will start to break up vertically integrated companies and monopolies.

These distributed renewable energies have to be organized collaboratively and shared peer-to-peer across communities and regions to create sufficient lateral economies of scale to bring their marginal cost to zero for everyone in society.

The beginning of capitalism, central, top-down control, and massive investments

Whether a society was communist, socialist, or capitalist, past industrial revolutions required massive investment in centralized, vertical, top-down structures to advance economic development.

According to Rifkin, in the next industrial revolution, which is now starting, those massive costs come down far enough that local cooperatives can invest in, manage, and control their own economic development. Initial investments can be financed peer to peer by hundreds of millions of individuals, which makes them feasible for everyone, but only for goods whose marginal (variable) costs of generating, storing, and sharing communications and energy are nearly zero. These are smart public infrastructures: laterally integrated networks on the collaborative commons rather than vertically integrated businesses in the capitalist market. They will be social enterprises (open organizations) connected to the IoT, using an open, distributed, and collaborative architecture to create peer-to-peer lateral economies of scale that eliminate virtually all the remaining middlemen. It will be the start of the production and distribution of nearly free goods.

There is a changing supply environment, and the world will have to adjust to it. For one thing, organizational models will have to change. Furthermore, new ways of thinking and incentivizing have to be developed. In the next part of this series, I'll look at energy, education, and other expenses in more detail with regard to near zero marginal cost and the communities they develop. Much of our current energy and other spending is already moving in that direction.

In his book, The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism, Jeremy Rifkin explores the rise of collaborative commons in the global economy.


Get started with Bareos, an open source client-server backup solution

Thu, 05/12/2022 - 15:00
Heike Jurzik

Bareos (Backup Archiving Recovery Open Sourced) is a distributed open source backup solution (licensed under AGPLv3) that preserves, archives, and recovers data from all major operating systems.

Bareos has been around since 2010 and is (mainly) developed by the company Bareos GmbH & Co. KG, based in Cologne, Germany. The vendor not only provides further development as open source software but also offers subscriptions, professional support, development, and consulting. This article introduces Bareos, its services, and basic backup concepts. It also describes where to get ready-built packages and how to join the Bareos community.

Modular design

Bareos consists of several services and applications that communicate securely over the network: the Bareos Director (Dir), one or more Storage Daemons (SD), and File Daemons (FD) installed on the client machines to be backed up. This modular design makes Bareos flexible and scalable: it's up to you whether to install all components on one system or spread them across several hundred computers, even in different locations. The client-server software stores backups on all kinds of physical and virtual storage (HDD/SSD/SDS), on tape libraries, and in the cloud. Bareos includes several plug-ins to support virtual infrastructures, application servers (databases such as PostgreSQL, MySQL, MSSQL, and MariaDB), and LDAP directory services.

Here are the Bareos components, what they do, and how they work together:

[Diagram: the Bareos components and how they work together (Heike Jurzik, CC BY-SA 4.0)]

Bareos Director

This is the core component and the control center of Bareos, which manages the database (i.e., the Catalog), clients, file sets (defining the data in the backups), the plug-ins' configuration, backup jobs and schedules, storage and media pools, before and after jobs (programs to be executed before or after a backup/restore job), etc.

Catalog

The database maintains a record of all backup jobs, saved files, and backup volumes. Bareos uses PostgreSQL as the database backend.

File Daemon

The File Daemon (FD) runs on every client machine or the virtual layer to handle backup and restore operations. After the File Daemon has received the director's instructions, it executes them and then transmits the data to (or from) the Storage Daemon. Bareos offers client packages for various operating systems, including Windows, Linux, macOS, FreeBSD, Solaris, and other Unix-based systems on request.

Storage Daemon

The Storage Daemon (SD) receives data from one or more File Daemons and stores it on the configured backup medium. The SD runs on the machine handling the backup devices. Bareos supports backup media like hard disks and flash arrays, tapes and tape libraries, and S3-compatible cloud solutions. If there is a media changer involved, the SD controls that device as well. During the restore process, the SD sends the correct data back to the requesting File Daemon. To increase flexibility, availability, and performance, there can be multiple SDs, for example, one per location.

Jobs and schedules

A backup job in Bareos describes what to back up (in a so-called FileSet directive on the client), when to back up (Schedule directive), and where to back up the data (Pool directive). This modular design lets you define multiple jobs and combine several directives, such as FileSets, Pools, and Schedules. Bareos allows you to have two different job resources managing various servers but using the same Schedule and FileSet, maybe even the same Pool.

The schedule not only sets the backup type (full, incremental, or differential) but also describes when a job is supposed to run, i.e., on different days of the week or month. Because of that, you can plan a detailed schedule and run full backups every Monday, incremental backups the rest of the week, etc. If more than one backup job uses the same schedule, you can set the job priority and thus tell Bareos which job is supposed to run first.
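To give a feel for how these directives fit together, here is a minimal sketch of the three resources in the Director configuration. All names, paths, and run times are placeholders chosen for illustration, and a real installation typically splits these definitions across the files under /etc/bareos/bareos-dir.d/; consult the Bareos documentation for the full directive reference.

# FileSet: what to back up on the client (paths are examples)
FileSet {
  Name = "HomeDirs"
  Include {
    Options {
      Signature = MD5
      Compression = GZIP
    }
    File = /home
    File = /etc
  }
}

# Schedule: when to back up (full on Mondays, incremental the rest of the week)
Schedule {
  Name = "WeeklyCycle"
  Run = Full mon at 23:05
  Run = Incremental tue-sun at 23:05
}

# Job: ties a client, a FileSet, a Schedule, and a Pool together
Job {
  Name = "BackupClient1"
  Type = Backup
  Client = client1-fd
  FileSet = "HomeDirs"
  Schedule = "WeeklyCycle"
  Pool = Incremental
  Storage = File
  Messages = Standard
}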

Encrypted communication

As mentioned, all Bareos services and applications communicate with each other over the network. Bareos provides TLS/SSL with pre-shared keys or certificates to ensure encrypted data transport. On top of that, Bareos can encrypt and sign data on the File Daemons before sending the backups to the Storage Daemon. Encryption and signing on the clients are implemented using RSA private keys combined with X.509 certificates (Public Key Infrastructure). Before the restore process, Bareos validates file signatures and reports any mismatches. Neither the Director nor the Storage Daemon has access to unencrypted content.

As a Bareos administrator, you can communicate with the backup software using a command-line interface (bconsole) or your preferred web browser (Bareos WebUI). The multilingual web interface manages multiple Bareos Directors and their databases. Also, it's possible to configure role-based access and create different profiles with ACLs (Access Control Lists) to control what a user can see and execute in the WebUI.
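For instance, a quick bconsole session might look like the following sketch (the asterisk is the bconsole prompt, and the job name is a placeholder):

$ sudo bconsole
*status dir
*list jobs
*run job=BackupClient1
*messages
*quit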

[Screenshot: the Bareos WebUI (Heike Jurzik, CC BY-SA 4.0)]

The WebUI provides an overview and detailed information about backup jobs, clients, file sets, pools, volumes, and more. It's also possible to start backup and restore jobs via the web interface. Starting with Bareos 21, the WebUI provides a timeline to display selected jobs. This timeline makes it easy to spot running, finished, or even failed jobs. This is a great feature, especially in larger environments, as it lets you detect gaps in the schedule or identify which backup jobs are taking up the most time.

Packages, support, and training

There are no license fees for using Bareos. In addition to the Bareos source code, which is available on GitHub, the vendor provides Bareos packages in two different repositories:

  • The community repository contains packages for all major releases (without support).
  • The subscription repository also offers packages for minor releases with updates, bug fixes, etc., for customers with a Bareos subscription.

Customers with a valid subscription can also buy support and consulting from the manufacturer or sponsor the development of new features. Bareos GmbH & Co. KG has a global partner network, offering support and training in multiple languages.

Join the Bareos community

Bareos is a very active open source project with a great community. The source code of the software and the Bareos manual sources are hosted on GitHub, and everyone is welcome to contribute. Bareos also offers two mailing lists, one for users (bareos-users) and one for developers (bareos-devel). For news and announcements, technical guides, quick howtos, and more, you can also follow the Bareos blog.

Bareos preserves, archives, and recovers data from all major operating systems. Discover how its modular design and key features support flexibility, availability, and performance.


5 reasons to use sudo on Linux

Thu, 05/12/2022 - 15:00
Seth Kenlon

On traditional Unix and Unix-like systems, the first and only user that exists on a fresh install is named root. Using the root account, you log in and create secondary "normal" users. After that initial interaction, you're expected to log in as a normal user.

Running your system as a normal user is a self-imposed limitation that protects you from silly mistakes. As a normal user, you can't, for instance, delete the configuration file that defines your network interfaces or accidentally overwrite your list of users and groups. You can't make those mistakes because, as a normal user, you don't have permission to access those important files. Of course, as the literal owner of a system, you could always use the su command to become the superuser (root) and do whatever you want, but for everyday tasks you're meant to use your normal account.

Using su worked well enough for a few decades, but then the sudo command came along.

To a longtime superuser, the sudo command might seem superfluous at first. In some ways, it feels very much like the su command. For instance, here's the su command in action:

$ su root
<enter passphrase>
# dnf install -y cowsay

And here's sudo doing the same thing:

$ sudo dnf install -y cowsay
<enter passphrase>

The two interactions are nearly identical. Yet most distributions recommend using sudo instead of su, and most major distributions have eliminated the root account altogether. Is it a conspiracy to dumb down Linux?

Far from it, actually. In fact, sudo makes Linux more flexible and configurable than ever, with no loss of features and several significant benefits.

[ Download the cheat sheet: Linux sudo command ]

Why sudo is better than root on Linux

Here are five reasons you should be using sudo instead of su.

1. Root is a confirmed attack vector

I use the usual mix of firewalls, fail2ban, and SSH keys to prevent unwanted entry to the servers I run. Before I understood the value of sudo, I used to look through logs with horror at all the failed brute force attacks directed at my server. Automated attempts to log in as root are easily the most common, and with good reason.

An attacker with enough knowledge to attempt a break-in would also know that, before the widespread use of sudo, essentially every Unix and Linux system had a root account. That's one less guess an attacker has to make about how to get into your server. The login name is always right, as long as it's root, so all an attacker needs is a valid passphrase.

Removing the root account offers a good amount of protection. Without root, a server has no confirmed login accounts. An attacker must guess at possible login names. In addition, the attacker must guess a password to associate with a login name. That's not just one guess and then another guess; it's two guesses that must be correct concurrently.

2. Root is the ultimate attack vector

Another reason root is a popular name in failed access logs is that it's the most powerful user possible. If you're going to set up a script to brute force its way into somebody else's server, why waste time trying to get in as a regular user with limited access to the machine? It only makes sense to go for the most powerful user available.

By being both the singularly known user name and the most powerful user account, root essentially makes it pointless to try to brute force anything else.

3. Selective permission

The su command is all or nothing. If you have the root password, you can become the superuser with su. If you don't, you have no administrative privileges whatsoever. The problem with this model is that a sysadmin has to choose between handing over the master key to the system and withholding the key, and with it all control of the system. That's not always what you want. Sometimes you want to delegate.

For example, say you want to grant a user permission to run a specific application that usually requires root permissions, but you don't want to give that user the root password. By editing the sudo configuration, you can allow a specific user, or any number of users belonging to a specific Unix group, to run a specific command. The sudo command requires the user's own password, not your password, and certainly not the root password.
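As a rough sketch of what such a rule can look like, the following line in a file under /etc/sudoers.d/ (always edited with visudo) lets members of a hypothetical backupadmin group run exactly one command as root. The group and command are placeholders, not part of any default configuration:

# Edit with: visudo -f /etc/sudoers.d/backup
# Members of the backupadmin group may run only this one command, as root
%backupadmin ALL=(root) /usr/bin/rsnapshot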

4. Time out

When running a command with sudo, an authenticated user's privileges are escalated for 5 minutes. During that time, they can run the command or commands you've given them permission to run.

After 5 minutes, the authentication cache is cleared, and the next use of sudo prompts for a password again. Timing out prevents a user from accidentally performing that action later (for instance, a careless search through your shell history or a few too many Up arrow presses). It also ensures that another user can't run the commands if the first user walks away from their desk without locking their computer screen.
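The 5-minute window is only sudo's default. If it doesn't suit you, the cache lifetime can be tuned with the timestamp_timeout option in the sudoers file. For example, this sketch (again edited with visudo) shortens it to one minute; a value of 0 forces a password prompt on every use:

# /etc/sudoers.d/timeout
Defaults timestamp_timeout=1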

5. Logging

The shell history feature serves as a log of what a user has been doing. Should you ever need to understand how something on your system happened, you could (in theory, depending on how shell history is configured) use su to switch to somebody else's account, review their shell history, and maybe get an idea of what commands a user has been executing.

If you need to audit the behavior of tens or hundreds of users, however, you might notice that this method doesn't scale. Shell histories also rotate out pretty quickly, with a default length of 1,000 lines, and they're easily circumvented by prefacing a command with a space.

When you need logs on administrative tasks, sudo offers a complete logging and alerting subsystem, so you can review activity from a centralized location and even get an alert when something significant happens.
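As a sketch of what that looks like in practice: every sudo invocation is logged through syslog by default, and sudo can optionally record entire terminal sessions for later replay with sudoreplay. The log path below varies by distribution, and the session ID is a placeholder:

$ sudo tail /var/log/secure        # or /var/log/auth.log on Debian-style systems

# In sudoers (via visudo): also record the terminal output of sudo sessions
Defaults log_output

$ sudoreplay -l                    # list recorded sessions
$ sudoreplay <session-id>          # replay one session by the ID shown in the list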

Learn the features

The sudo command has even more features, both current and in development, than what I've listed in this article. Because sudo is often something you configure once then forget about, or something you configure only when a new admin joins your team, it can be hard to remember its nuances.

Download our sudo cheat sheet and use it as a helpful reminder for all of its uses when you need it the most.

Here are five security reasons to switch to the Linux sudo command. Download our sudo cheat sheet for more tips.


5 surprising things I do with Linux

Wed, 05/11/2022 - 15:00
Seth Kenlon

When you're used to one operating system, it can be easy to look at other operating systems almost as if they were apps. If you use one OS on your desktop, you might think of another OS as the app that people use to run servers, and another OS as the app that plays games, and so on. We sometimes forget that an operating system is the part of a computer that manages a countless number of tasks (millions per second, technically), and they're usually designed to be capable of a diverse set of tasks. When people ask me what Linux does, I usually ask what they want it to do. There's no single answer, so here are five surprising things I do with Linux.

1. Laser cutting with Linux

[Image: MSRaynsford, CC BY-NC 4.0]

At my nearest makerspace, there's a big industrial machine, about the size of a sofa, that slices through all kinds of materials according to a simple line-drawing design file. It's a powerful laser cutter, and the first time I used it, I was surprised to find that it just connected to my Linux laptop with a USB cable. In fact, in many ways, it was easier to connect to this laser cutter than to many desktop printers, which often require over-complicated and bloated drivers.

Using Inkscape and a simple plugin, you can design cut lines for industrial laser cutters. Design a case for your Raspberry Pi laptop, use these Creative Commons design plans to build a cryptex lockbox, cut out a sign for your shopfront, or whatever it is you have in mind. And do it using an entirely open source stack.

2. Gaming on Linux

[Image: the Lutris desktop client]

Open source has always had games, and there have been some high-profile Linux games in the recent past. The first gaming PC I built was a Linux PC, and I don't think any of the people I had over for friendly couch co-op games realized they were playing on Linux. And that's a good thing. It's a smooth and seamless experience, and the sky's the limit, depending on how much you want to spend on hardware.

What's more, it's not just games that have been coming to Linux, but gaming platforms too. Valve's recent Steam Deck is a popular handheld gaming console that runs Linux. Better still, many open source software titles, including Blender and Krita, have been publishing releases on Steam as a way to encourage wider adoption.

3. Office work on Linux

Linux, like life, isn't always necessarily exciting. Sometimes you need a computer to do ordinary things, like paying bills, making a budget, or writing a paper for school or a report for work. Regardless of the task, Linux is also a normal, everyday desktop computer. You can use Linux for the mundane, the everyday, the "usual."

You're not limited to just the big name applications, either. I do my fair share of work in the excellent LibreOffice suite, but on my oldest computer I use the simpler Abiword instead. Sometimes, I like to explore Calligra, the KDE office suite, and when there's precision design work to be done (including specialized procedural design work), I use Scribus.

The greatest thing about using Linux for everyday tasks is that ultimately nobody knows what you used to get to the end product. Your tool chain and your workflow are yours, and the results are as good as or better than what locked-down, non-open software produces. I have found that using Linux for everyday tasks makes them more fun, because open source software inherently permits me to develop my own path to my desired outcome. I try to create solutions that help me get work done efficiently or that help me automate important tasks, but I also just enjoy the flexibility of the system. I don't want to adapt to my tool chain; I want to adapt my tools so that they work for me.

4. Music production on Linux

I'm a hobbyist musician, and before I started doing all of my production on computers I owned several synthesizers and sequencers and multi-track recorders. One reason it took me as long as it did to switch to computer music was that it didn't feel modular enough for me. When you're used to wiring physical boxes to one another to route sound through filters and effects and mixers and auxiliary mixers, an all-in-one application looks a little underwhelming.

It's not that an all-in-one app isn't appreciated, by any means. I like being able to open up one application, like LMMS, that happens to have everything I want. However, in practice it seems that no music application I tried on a computer actually had everything I needed.

When I switched to Linux, I discovered a landscape built with modularity as one of its founding principles. I found applications that were just sequencers, applications that were just synthesizers, mixers, recorders, patch bays, and so on. I could build my own studio on my computer just as I'd built my own studio in real life. Audio production has developed in leaps and bounds on Linux, and today there are open source applications that can act as a unified control center while retaining the extensibility to pull in sounds from elsewhere on the system. For a patchwork producer like me, it's a dream studio.

5. Retro computing on Linux

I don't like throwing away old computers, because very rarely do old computers actually die. Usually, an old computer is "outgrown" by the rest of the world. Operating systems get too bloated for an old computer to handle, so you stop getting OS and security updates, applications start to demand resources your old computer just doesn't have, and so on.

I tend to adopt old computers, putting them to work as either lab machines or home servers. Lately, I find that adding an SSD to serve as the root partition and using XFCE or a similar lightweight desktop makes even a computer from the previous decade a pleasantly usable machine for a lot more work than you might expect. Graphic design, web design, programming, stop-motion animation, and much more are trivial tasks on low-spec machines, to say nothing of simple office work. With Linux driving a machine, it's a wonder businesses ever upgrade.

Everybody has their favorite "rescue" distribution. Mine are Slackware and Mageia, both of which still release 32-bit installer images. Mageia is RPM-based, too, so you can use modern packaging tools like dnf and rpmbuild.

Bonus: Linux servers

OK, I admit Linux on servers isn't at all surprising. In fact, to people who know of Linux but don't use Linux themselves, a data center is usually the first thing that pops into their heads when "Linux" is mentioned. The problem with that assumption is that it can make it seem obvious that Linux ought to be great on the server, as if Linux doesn't even have to try. It's a flattering sentiment, but the fact is that Linux is great on servers because there's a monumental effort across global development teams to make Linux especially effective at what it does.

It isn't by chance that Linux is the robust operating system that powers most of the internet, most of the cloud, nearly all the supercomputers in existence, and more. Linux isn't stagnant, and while it has a rich history behind it, it's not so steeped in tradition that it fails to progress. New technologies are being developed all the time, and Linux is a part of that progress. Modern Linux adapts to growing demands from a changing world to make it possible for systems administrators to provide networked services to people all over the world.

It's not everything Linux can do, but it's no small feat, either.

[ Red Hat Enterprise Linux turns 20 this year: How enterprise Linux has evolved from server closet to cloud ]

Linux isn't that surprising

I remember the first time I met someone who'd grown up using Linux. It never seemed to happen for most of the time I've been a Linux user, but lately it's relatively common. I think the most surprising encounter was with a young woman, toddler in tow, who saw whatever geeky t-shirt I was wearing at the time and casually mentioned that she also used Linux, because she'd grown up with it. It actually made me a little jealous, but then I remembered that Unix on a desktop computer simply didn't exist when I was growing up. Still, it's fun to think about how casual Linux has become over the past few decades. It's even more fun to be a part of it.

Linux powers most of the internet, most of the cloud, and nearly all supercomputers. I also love to use Linux for gaming, office work, and my creative pursuits.


Manage your Gmail filters from the Linux command line

Wed, 05/11/2022 - 15:00
Kevin Sonney

Automation is a hot topic right now. In my day job as an SRE, part of my remit is to automate as many repeating tasks as possible. But how many of us do that in our daily, non-work lives? This year, I am focused on automating away the toil so that we can focus on the things that are important.

Server-side mail rules are one of the most efficient ways to pre-sort and filter mail. Sadly, Gmail, the most popular mail service in the world, doesn't use any of the standard protocols to allow users to manage their rules. Adding, editing, or removing a single rule can be a time-consuming task in the web interface, depending on how many rules the user has in place. The options for editing them "out of band" as provided by the company are limited to an XML export and import.

I have 109 mail filters, so I know what a chore it can be to manage them using the provided methods. At least until I discovered gmailctl, the command-line tool for managing Gmail filters with a (relatively) simple standards-based configuration file.

$ gmailctl test
$ gmailctl diff
Filters:
--- Current
+++ TO BE APPLIED
@@ -1 +1,6 @@
+* Criteria:
+ from: @opensource.com
+ Actions:
+ mark as important
+ never mark as spam

$ gmailctl apply
You are going to apply the following changes to your settings:
Filters:
--- Current
+++ TO BE APPLIED
@@ -1 +1,6 @@
+* Criteria:
+ from: @opensource.com
+ Actions:
+ mark as important
+ never mark as spam
Do you want to apply them? [y/N]:

To define rules in a flexible manner, gmailctl uses the jsonnet templating language. Using gmailctl also allows you to export your existing rules for modification.


To get started, install gmailctl via your system's package manager, or install it from source with go install github.com/mbrt/gmailctl/cmd/gmailctl@latest. Follow that with gmailctl init, which walks you through the process of setting up your credentials and the correct permissions in Google. If you already have rules in Gmail, I recommend running gmailctl download next to back up the existing rules. These are saved in the default configuration file ~/.gmailctl/config.jsonnet. Copy that file somewhere safe for future reference, or to restore your old rules just in case!
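Put together, the first-time setup looks roughly like this (the backup file name is just an example):

$ go install github.com/mbrt/gmailctl/cmd/gmailctl@latest
$ gmailctl init          # set up credentials and API permissions in Google
$ gmailctl download      # save existing Gmail filters to ~/.gmailctl/config.jsonnet
$ cp ~/.gmailctl/config.jsonnet ~/gmailctl-backup.jsonnet   # keep a copy somewhere safe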

If you wish to start from a clean slate, or you don't have any rules yet, you need to create a new, empty ~/.gmailctl/config.jsonnet file. The most basic structure for this file is:

local lib = import 'gmailctl.libsonnet';
{
  version: "v1alpha3",
  author: {
    name: "OSDC User",
    email: "your-email@gmail.com"
  },
  rules: [
    {
      filter: {
        or: [
          { from: "@opensource.com" },
        ]
      },
      actions: {
        markRead: false,
        markSpam: false,
        markImportant: true
      },
    },
  ]
}

As you can see, this file format is similar to, but not as strict as, JSON. This file sets up a simple rule to mark any mail from opensource.com as important, leave it unread, and not mark it as spam. It does this by defining the criteria in the filter section and then the rules to apply in the actions section. Actions include the following boolean commands: markRead, markSpam, markImportant, and archive. You can also use actions to specify a category for the mail and assign labels (folders), which we will get to later in the article.

Once the file is saved, the configuration file format can be verified with gmailctl test. If everything is good, then you can use gmailctl diff to view what changes are going to be made, and gmailctl apply to upload your new rule to Gmail.

$ gmailctl diff
Filters:
--- Current
+++ TO BE APPLIED
@@ -1,6 +1,8 @@
* Criteria:
  from: @opensource.com
  Actions:
+ archive
  mark as important
  never mark as spam
+ apply label: 1-Projects/2022-OSDC

$ gmailctl apply -y
You are going to apply the following changes to your settings:
Filters:
--- Current
+++ TO BE APPLIED
@@ -1,6 +1,8 @@
* Criteria:
  from: @opensource.com
  Actions:
+ archive
  mark as important
  never mark as spam
+ apply label: 1-Projects/2022-OSDC

Applying the changes...

As mentioned previously, new mail messages can be auto-filed by setting labels in the configuration. I want to assign all mails from Opensource.com to a folder specifically for them, and remove them from the inbox (or archive in Gmail terms). To do that, I would change the actions section to be:

  actions: {
    markRead: false,
    markSpam: false,
    markImportant: true,
    archive: true,
    labels: [
      "1-Projects/2022-OSDC"
    ]
  },

As you can see in the output above, gmailctl diff now shows only what is going to change. To apply it, I used gmailctl apply -y to skip the confirmation prompt. If the label doesn't exist yet, gmailctl reports an error, since a filter cannot be made for a label that does not already exist.

You can also make more complex rules that target specific conditions or multiple emails. For example, the following rule uses an and condition to look for messages from Cloudflare that are not purchase confirmations.

filter: {
  and: [
    { from: "noreply@notify.cloudflare.com" },
    { subject: "[cloudflare]" },
    { query: "-{Purchase Confirmation}" }
  ]
},

In the case of a rule that performs the same action on multiple messages, you can use an or structure. I use that to file all emails relating to tabletop games to a single folder.

filter: {
  or: [
    { from: "no-reply@obsidianportal.com" },
    { from: "no-reply@roll20.net" },
    { from: "team@arcanegoods.com" },
    { from: "team@dndbeyond.com" },
    { from: "noreply@forge-vtt.com" },
    { from: "@elventower.com" },
    { from: "no-reply@dmsguild.com" },
    { from: "info@goodman-games.com" },
    { from: "contact@mg.ndhobbies.com" },
    { from: "@monkeyblooddesign.co.uk" },
  ]
},

For people with multiple Gmail accounts that need their own sets of rules, you can specify a separate configuration file with the --config command-line parameter. For example, my work uses Gmail, and I have a whole other set of rules for that. I can create a new gmailctl directory and use that for the work configuration, like so:

$ gmailctl --config ~/.gmailctl-work/ diff

To make this easier on myself, I have two shell aliases to make it clear which configuration I'm using.

alias gmailctl-home="gmailctl --config $HOME/.gmailctl"
alias gmailctl-work="gmailctl --config $HOME/.gmailctl-work"

The one drawback of gmailctl is that it will not apply a new filter to existing messages, so you still have to manually deal with mail received before you run gmailctl apply. I hope the developers are able to sort that out in the future. Other than that, gmailctl has made adding and updating Gmail filters fast and almost completely automatic, and I can use my favorite email client without having to constantly go back to the web UI to change or update a filter.

The gmailctl command-line tool manages email filters with a simple standards-based configuration file.


6 easy ways to make your first open source contribution with LibreOffice

Tue, 05/10/2022 - 15:00
Klaatu

"Getting involved" with open source can seem a little confusing. Where do you go to get started? What if you don't know how to code? Who do you talk to? How does anybody know that you have contributed, and besides that does anybody care?


There are actually answers to questions like those (your choice, it's OK, nobody, you tell them, yes) but during the month of May 2022, there's one simple answer: LibreOffice. This month is a month of participation at LibreOffice and its governing body, The Document Foundation. They're inviting contributors of all sorts to help in any of six different ways, and only one of those has anything at all to do with code. No matter what your skill, you can probably find a way to help the world's greatest office suite.

6 ways to contribute to LibreOffice

Here's what you can do:

  • Handy Helper: Go answer questions from other LibreOffice users on Ask LibreOffice. If you're an avid user of LibreOffice and think you have useful tips and tricks that will help others, this is the role you've been waiting for.
  • First Responder: Bug reports are better when they're confirmed by more than just one user. If you're good at installing software (sometimes bug reports are for older versions than what you might be using normally) then go to the LibreOffice Bugzilla and find new bugs that have yet to be confirmed. When you find one, try to replicate what's been reported. Assuming you can do that, add a comment like “CONFIRMED on Linux (Fedora 35) and LibreOffice 7.3.2”.
  • Drum Beater: Open source projects rarely have big companies funneling marketing money into promoting them. It would be nice if all the companies claiming to love open source would help out, but not all of them do, so why not lend your voice? Get on social media and tell your friends why you love LibreOffice, or what you're using it for (and of course add the #libreoffice hashtag).
  • Globetrotter: LibreOffice is already available in many different languages, but not literally all languages. And LibreOffice is actively being developed, so its interface translations need to be kept up-to-date. Get involved here.
  • Docs Doctor: LibreOffice has online help as well as user handbooks. If you're great at explaining things to other people, or if you're great at proof-reading other people's documentation, then you should contact the docs team.
  • Code Cruncher: You're probably not going to dive into LibreOffice's code base and make major changes right away, but that's not generally what projects need. If you know how to code, then you can join the developer community by following the instructions on this wiki page.
Free stickers

I didn't want to mention this up-front because obviously you should get involved with LibreOffice just because you're excited to get involved with a great open source project. However, you're going to find out eventually so I may as well tell you: By contributing to LibreOffice, you can sign up to get free stickers from The Document Foundation. Surely you've been meaning to decorate your laptop?

Don't get distracted by the promise of loot, though. If you're confused but excited to get involved with open source, this is a great opportunity to do so. And it is representative of how you get involved with open source in general: You look for something that needs to be done, you do it, and then you talk about it with others so you can get ideas for what you can do next. Do that often enough, and you find your way into a community. Eventually, you stop wondering how to get involved with open source, because you're too busy contributing!

May 2022 is LibreOffice month. Here are some easy ways to make your first open source contribution.


My open source journey with C from a neurodiverse perspective

Tue, 05/10/2022 - 15:00
Rikard Grossma…

I was born in 1982, which in human years is only 40 years in the past (at the time of writing). In terms of computer development, it's eons ago. I got my first computer, a Commodore 64, when I was ten years old. Later, I got an Amiga, and by 13 I had an "IBM Compatible" PC (that's what they were called then).

In high school, I did a lot of basic programming on my graphing calculator. In my second year of high school, I learned basic C programming, and in my third year I started doing more advanced C programming, using libraries, pointers, and graphics.

My journey from programming student to teacher

In my college days, I learned Java, and so Java became my primary language. I also made some C# programs for a device known as a personal digital assistant (PDA), a precursor to the modern smartphone. Because Java is object-oriented, multi-platform, and makes GUI programming easy, I thought I'd do most of my programming in Java from then on.

In college, I also discovered that I had a talent for teaching, so I helped others with programming, and they helped me with math when I took computer science. In my later college years, I took some courses on C programming aimed at basic embedded programming and controlling measurement instruments.

Since turning 30, I've used C as a teaching tool for high school kids learning to program. I've also used Fritzing to teach high school kids how to program an Arduino. My interest in C programming was awakened again last year, when I got a job helping college students with learning differences in computing subjects.

How I approach programming in C and other languages

All people learn differently. Being a neurodiverse person with Asperger's and ADHD, my learning process is sometimes quite different from others. Of course, everyone has different learning styles, though people who are neurodiverse might have a greater preference for a certain learning style than someone else.

I tend to think in both pictures and words. Personally I need to decode things step by step, and understand them, step by step. This makes C a suitable language for my learning style. When I learn code, I gradually incorporate the code into my mind by learning to see lines of code, like #include in front of me. From what I've read from descriptions of other neurodiverse people on the internet, some of them seem to have this kind of learning style as well. We “internalize code”.


Some autistic people are a lot better at memorizing large chunks of code than me, but the process seems to be the same. When understanding concepts such as structs, pointers, pointers to pointers, matrices, and vectors, it's helpful for me to think in pictures, such as the ones you find in programming tutorials and books.

I like to use C to understand how things are done at a lower level, such as file input and output (I/O), network programming, and so on. This doesn't mean I don't like libraries that handle tasks such as string manipulation or making arrays. I also like the ease of creating arrays and vectors in Java. However, for creating a user interface, though I have looked at such code in C, I prefer to use graphical editors, such as NetBeans and similar tools.

My ideal C GUI open source tool for creating applications

If I imagine an ideal open source tool for creating a GUI using C, it would be something similar to NetBeans that, for example, lets you make GTK interfaces by dragging and dropping. It should also be possible to attach C code to buttons, and so on, to make them perform actions. There may be such a tool; I admittedly haven't looked around that much.

Why I encourage young neurodiverse people to learn C

Gaming is a big industry. Some studies suggest neurodiverse kids may be even more focused on gaming than other kids. I would tell a neurodiverse high school or college kid that if you learn C, you may be able to learn the basics of, for example, writing efficient drivers for a graphics card, or making efficient file I/O routines to optimize your favorite game. I would also be honest that it takes time and effort to learn, but that it's worth the effort. Once you learn it, you have greater control over things like hardware.

For learning C, I recommend that a neurodiverse kid install a beginner-friendly Linux distro and then find some tutorials on the net. I also recommend breaking things down step by step and drawing diagrams of, for example, pointers. I did that to better understand the concept, and it worked for me.

In the end, that's what it's about: Find a learning method that works for you, no matter what teachers and other students may say, and use it to learn the open source skill that interests you. It can be done, and anyone can do it.

I've learned that if you can find the method that works for you, no matter what teachers and other students may say, you can learn any open source skill that interests you.


How to (safely) read user input with the getline function

Tue, 05/10/2022 - 15:00
Jim Hall

Reading strings in C used to be a very dangerous thing to do. When reading input from the user, programmers might be tempted to use the gets function from the C Standard Library. The usage for gets is simple enough:

char *gets(char *string);

That is, gets reads data from standard input, and stores the result in a string variable. Using gets returns a pointer to the string, or the value NULL if nothing was read.

As a simple example, we might ask the user a question and read the result into a string:

#include <stdio.h>
#include <string.h>

int
main()
{
  char city[10];                       // Such as "Chicago"

  // this is bad .. please don't use gets

  puts("Where do you live?");
  gets(city);

  printf("<%s> is length %ld\n", city, strlen(city));

  return 0;
}

Entering a relatively short value with the above program works well enough:

Where do you live?
Chicago
<Chicago> is length 7


However, the gets function is very simple, and will naively read data until it thinks the user is finished. But gets doesn't check that the string is long enough to hold the user's input. Entering a very long value will cause gets to store more data than the string variable can hold, resulting in overwriting other parts of memory.

Where do you live?
Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch
<Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch> is length 58
Segmentation fault (core dumped)

At best, overwriting parts of memory simply breaks the program. At worst, this introduces a critical security bug where a bad user can insert arbitrary data into the computer's memory via your program.

That's why the gets function is dangerous to use in a program. Using gets, you have no control over how much data your program attempts to read from the user. This often leads to buffer overflow.

The safer way

The fgets function has historically been the recommended way to read strings safely. This version of gets provides a safety check by only reading up to a certain number of characters, passed as a function argument:

char *fgets(char *string, int size, FILE *stream);

The fgets function reads from the file pointer, and stores data into a string variable, but only up to the length indicated by size. We can test this by updating our sample program to use fgets instead of gets:

#include <stdio.h>
#include <string.h>

int
main()
{
  char city[10];                       // Such as "Chicago"

  // fgets is better but not perfect

  puts("Where do you live?");
  fgets(city, 10, stdin);

  printf("<%s> is length %ld\n", city, strlen(city));

  return 0;
}

If you compile and run this program, you can enter an arbitrarily long city name at the prompt. However, the program will only read enough data to fit into a string variable of size=10. And because C adds a null ('\0') character to the end of every string, that means fgets will only read 9 characters into the string:

Where do you live?
Minneapolis
<Minneapol> is length 9

While this is certainly safer than using gets to read user input, it does so at the cost of "cutting off" your user's input if it is too long.

The new safe way

A more flexible solution to reading long data is to allow the string-reading function to allocate more memory to the string, if the user entered more data than the variable might hold. By resizing the string variable as necessary, the program always has enough room to store the user's input.

The getline function does exactly that. This function reads input from an input stream, such as the keyboard or a file, and stores the data in a string variable. But unlike fgets and gets, getline resizes the string with realloc to ensure there is enough memory to store the complete input.

ssize_t getline(char **pstring, size_t *size, FILE *stream);

The getline function is actually a wrapper around a similar function called getdelim that reads data up to a special delimiter character. In this case, getline uses a newline ('\n') as the delimiter, because when reading user input either from the keyboard or from a file, lines of data are separated by a newline character.
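
If you ever need a different delimiter, you can call getdelim yourself. Here's a small sketch (the variable names are just illustrations) that reads everything up to the first colon instead of the first newline; calling getline(&field, &size, stdin) would be equivalent to getdelim(&field, &size, '\n', stdin):

#include <stdio.h>
#include <stdlib.h>

int
main()
{
  char *field = NULL;
  size_t size = 0;
  ssize_t chars_read;

  // read everything up to the first ':' from standard input
  // (on strict systems you may need to define _POSIX_C_SOURCE
  // 200809L before the includes to get getdelim)

  puts("Enter a colon-delimited value:");

  chars_read = getdelim(&field, &size, ':', stdin);

  if (chars_read < 0) {
    puts("couldn't read the input");
    free(field);
    return 1;
  }

  printf("read %ld characters\n", (long) chars_read);

  free(field);

  return 0;
}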

The result is a much safer method to read arbitrary data, one line at a time. To use getline, define a string pointer and set it to NULL to indicate no memory has been set aside yet. Also define a "string size" variable of type size_t and give it a zero value. When you call getline, you'll use pointers to both the string and the string size variables, and indicate where to read data. For a sample program, we can read from the standard input:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main()
{
  char *string = NULL;
  size_t size = 0;
  ssize_t chars_read;

  // read a long string with getline

  puts("Enter a really long string:");

  chars_read = getline(&string, &size, stdin);
  printf("getline returned %ld\n", chars_read);

  // check for errors

  if (chars_read < 0) {
    puts("couldn't read the input");
    free(string);
    return 1;
  }

  // print the string

  printf("<%s> is length %ld\n", string, strlen(string));

  // free the memory used by string

  free(string);

  return 0;
}

As getline reads data, it will automatically reallocate more memory for the string variable as needed. When the function has read all the data from one line, it updates the size of the string via the pointer, and returns the number of characters read, including the delimiter.


Enter a really long string:
Supercalifragilisticexpialidocious
getline returned 35
<Supercalifragilisticexpialidocious
> is length 35


Note that the string includes the delimiter character. For getline, the delimiter is the newline, which is why the output has a line feed in there. If you don't want the delimiter in your string value, you can use another function to change the delimiter to a null character in the string.
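
For example, a common approach is to truncate the string at the first newline with strcspn, which is declared in string.h and already included by the sample program above. This one-line sketch assumes the string variable from that program:

// overwrite the trailing newline, if any, with a null character;
// strcspn returns the length of the prefix of string that
// contains no '\n'
string[strcspn(string, "\n")] = '\0';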

With getline, programmers can safely avoid one of the common pitfalls of C programming. You can never tell what data your user might try to enter, which is why using gets is unsafe, and fgets is awkward. Instead, getline offers a more flexible way to read user data into your program without breaking the system.

Getline offers a more flexible way to read user data into your program without breaking the system.

Image by:

Image by Mapbox Uncharted ERG, CC-BY 3.0 US

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Cloud service providers: How to keep your options open

Mon, 05/09/2022 - 15:00
Cloud service providers: How to keep your options open Seth Kenlon Mon, 05/09/2022 - 03:00

For Linux users, there's a new kind of computer on the market, and it's known as the cloud.

As with the PC sitting on your desk, the laptop in your backpack, and the virtual private server you rent from your favorite web hosting service, you have your choice in vendors for cloud computing. The brand names are different than the hardware brands you've known over the years, but the concept is the same.

To run Linux, you need a computer. To run Linux on the cloud, you need a cloud service provider. And just like the hardware and firmware that ships with your computer, there's a spectrum for how open source your computing stack can be.

As a user of open source, I prefer my computing stack to be as open as possible. After a careful survey of the cloud computing market, I've developed a three-tier view of cloud service providers. Using this system as your guide, you can make intelligent choices about what cloud provider you choose.

Open stack

A cloud that's fully open is a cloud built on open source technology from the ground up. So much cloud technology is open source, and has been from the beginning, that an open stack isn't all that difficult to accomplish, at least on the technical level. However, there are cloud providers reinventing the wheel in a proprietary way, which makes it easy to stumble into a cloud provider that's mixed a lot of closed source components in with the usual open source tooling.

If you're looking for a truly open cloud, look for a cloud provider that uses OpenStack as its foundation. OpenStack provides the software infrastructure for clouds, including Software-Defined Networking (SDN) through Neutron, object storage through Swift, identity and key management, image services, and much more. Keeping with my hardware computer analogy, OpenStack is the "kernel" that powers the cloud.

I don't mean that literally, of course, but if your cloud provider runs OpenStack, that's about as far down the stack as you can go. From a user perspective, OpenStack is the reason your cloud exists and has a filesystem, network, and so on.


Sitting on top of OpenStack, there may be a web UI such as Horizon or Skyline, and there may be extra components such as OpenShift or OKD (not an acronym, but formerly known as OpenShift Origin). All of these are open source, and they help you run containers, which are minimalist Linux images with applications embedded within them.

Because OpenShift and OKD don't require OpenStack, that's the next tier of my cloud-based world view.

[ Download the guide: Containers and Pods 101 ]

Open platform

You don't always have a choice in which stack your cloud is running. Instead of OpenStack, your cloud might be running Azure, Amazon Web Services (AWS), or something similar.

Those are the "binary blobs" of the cloud world. You have no insight into how or why they work; all you know is that your cloud exists and has a filesystem, a networking stack, and so on.

Just as with desktop computing, you can have an "operating system" running on the box you've been given. Again, I'm not speaking literally, and there's a strong argument that OpenStack itself is essentially an operating system for the cloud. Still, it's usually OpenShift that a cloud user interacts with directly.

OpenShift is an open source "desktop" or workspace in which you can manage containers and pods with Podman and Kubernetes. It lets you run applications on the cloud much as you might launch an app on your laptop.

[ Keep these commands handy: Podman cheat sheet ]

Open standards

Last but not least, there are those situations when you have no choice in cloud service providers. You're put on a platform with a proprietary "kernel," a proprietary "operating system," and all that's left for you to influence is what you run inside that environment.

All is not lost.

When you're dealing with open source, you have the ability to construct your own scaffolding. You can choose what components you use inside your containers. You can and should design your working environment around open source tools, because if you do get to change service providers, you can take everything you've built with you.

This might mean implementing something already built into the (non-open) platform you're stuck on. For instance, your cloud provider might entice you with an API management system or continuous integration/continuous delivery (CI/CD) pipeline that's included in their platform "for free," but you know better. When a non-open application is offered as "free," it usually bears a cost in some other form. One cost is that once you start building on top of it, you'll be all the more hesitant to migrate away because you know that you'll have to leave behind everything you built.

Instead of using the closed "features" of your cloud provider, reimplement those services as open source for your own use. Run Jenkins and APIMan in containers. Find the problems your cloud provider claims to solve with proprietary code, then use an open source solution to ensure that, when you leave for an open provider, you can migrate the system you've built.

[ Take the free online course: Deploying containerized applications ]

Open source computing

For too many people, cloud computing is a place where open source is incidental. In reality, open source is as important on the cloud as it is on your personal computer and the servers powering the internet.

Look for open source cloud services.

When you're stuck with something that doesn't provide source code, be the one using open source in your cloud.

No matter what level of openness your cloud service operates on, you have choices for your own environment.

Image by:

Flickr user: theaucitron (CC BY-SA 2.0)

Cloud Containers Kubernetes This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How open source leads the way for sustainable technology

Sun, 05/08/2022 - 15:00
How open source leads the way for sustainable technology Hannah Smith Sun, 05/08/2022 - 03:00

There's a palpable change in the air regarding sustainability and environmental issues. Concern for the condition of the planet and efforts to do something about it have gone mainstream. To take one example, look at climate-based venture capitalism. The Climate Tech Venture Capital (CTVC) Climate Capital List has more than doubled in the past two years. The amount of capital pouring in demonstrates a desire and a willingness to solve hard climate challenges.

It's great that people want to take action, and I'm here for it! But I also see a real risk: As people rush to take action or jump on the bandwagon, they may unwittingly participate in greenwashing.

The Wikipedia definition of greenwashing calls it "a form of marketing spin in which green PR and green marketing are deceptively used to persuade the public that an organization's products, aims, and policies are environmentally friendly." In my view, greenwashing happens both intentionally and accidentally. There are a lot of good people out there who want to make a difference but don't yet know much about complex environmental systems or the depth of issues around sustainability.

It's easy to fall into the trap of thinking a simple purchase like offsetting travel or datacenter emissions by planting trees will make something greener. While these efforts are welcome, and planting trees is a viable solution to improving sustainability, they are only a good first step—a scratch on the surface of what needs to happen to make a real difference.

So what can a person, or a community, do to make digital technology genuinely more sustainable?

Sustainability has different meanings to different people. The shortest definition that I like is from the 1987 Brundtland Report, which summarizes it as "meeting the needs of the present without compromising the ability of future generations to meet their needs." Sustainability at its core is prioritizing long-term thinking.

Sustainability is more than environmental preservation

There are three key interconnected pillars in the definition of sustainability:

  1. Environmental
  2. Economic / governance
  3. Social

Conversations about sustainability are increasingly dominated by the climate crisis—for good reason. The need to reduce the amount of carbon emissions emitted by the richer countries in the world becomes increasingly urgent as we continue to pass irreversible ecological tipping points. But true sustainability is a much more comprehensive set of considerations, as demonstrated by the three pillars.

Carbon emissions are most certainly a part of sustainability. Many people consider emissions only an environmental issue: Just take more carbon out of the air, and everything will be ok. But social issues are just as much a part of sustainability. Who is affected by these carbon emissions? Who stands to bear the greatest impact from changes to our climate? Who has lost their land due to rising sea levels or a reliable water source due to changing weather patterns? That's why you might have heard the phrase "climate justice is social justice."

Thinking only about decarbonization as sustainability can give you carbon tunnel vision. I often think that climate change is a symptom of society getting sustainability wrong on a wider scale. Instead, it is critical to address the root causes that brought about climate change in the first place. Tackling these will make it possible to fix the problems in the long term, while a short-term fix may only push the issue onto another vulnerable community.

The root causes are complex. But if I follow them back to their source, I see that the root causes are driven by dominant Western values and the systems designed to perpetuate those values. And what are those values? For the most part, they are short-term growth and the extraction of profit above all else.

That is why conversations about sustainability that don't include social issues or how economies are designed won't reach true solutions. After all, societies, and the people in positions of power, determine what their own values are—or aren't.

What can you or I do?

Many in the tech sector are currently grappling with these issues and want to know how to take meaningful action. One common approach is looking at how to optimize the tech they build so that it uses electricity more effectively. Sixty percent of the world's electricity is still generated by burning fossil fuels, despite the increasing capacity for renewable energy generation. Logically, using less electricity means generating fewer carbon emissions.

And yes, that is a meaningful action that anyone can take right now, today. Optimizing the assets sent when someone loads a page to send less data will use less energy. So will optimizing servers to run at different times of the day, for example when there are more renewables online, or deleting old stores of redundant information, such as analytics data or logs.

But consider Jevons paradox: Making something more efficient often leads to using more of it, not less. When it is easier and more accessible for people to use something, they end up consuming more. In some ways, that is good. Better performing tech is a good thing that helps increase inclusion and accessibility, and that's good for society. But long-term solutions for climate change and sustainability require deeper, more uncomfortable conversations around the relationship between society and technology. What and who is all this technology serving? What behaviors and practices is it accelerating?

It's common to view advancing technology as progress, and some people repeat the mantra that technology will save the world from climate change. A few bright folks will do the hard work, so no one else has to change their ways. The problem is that many communities and ecosystems are already suffering.

For example, the accelerating quest for more data is causing some communities in Chile to have insufficient water to grow their crops. Instead, datacenters are using it. Seventy percent of the pollution caused by mobile phones comes from their manufacture. The raw resources such as lithium and cobalt to make and power mobile devices are usually extracted from a community that has little power to stop the destruction of their land and that certainly does not partake in the profit made. Still, the practice of upgrading your phone every two years has become commonplace.

Open source leading the way for sustainability

It's time to view the use of digital technology as a precious resource with consequences to both the planet and (often already disadvantaged) communities.

The open source community is already a leading light in helping people to realize there is another way: the open source way. There are huge parallels between the open source way and what our wider society needs to do to achieve a more sustainable future. Being more open and inclusive is a key part of that.

We also need a mindset shift at all levels of society that views digital technology as having growth limits and not as the abundantly cheap and free thing we see today. We need to wisely prioritize its application in society to the things that matter. And above all else, we need to visualize and eradicate the harms from its creation and continued use and share the wealth that it does create equitably with everyone in society, whether they are users of digital tech or not. These things aren’t going to happen overnight, but they are things we can come together to push towards so that we all enjoy the benefits of digital technology for the long-term, sustainably.

This article is based on a longer presentation. To see the talk in full or view the slides, see the post "How can we make digital technology more sustainable."

There are huge parallels between the open source way and what our wider society needs to do to achieve a more sustainable future.

Image by:

opensource.com

Science This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Build community engagement by serving up Lean Coffee

Sat, 05/07/2022 - 15:00
Build community engagement by serving up Lean Coffee Angie Byron Sat, 05/07/2022 - 03:00

I recently started a new job at MongoDB as a Principal Community Manager, spearheading the MongoDB Community Champions program. In that role, I faced two challenges.

First, I was joining a brand new, fully remote team. Not only was I new myself, but the team as a whole was just beginning to form, with new members coming on board a couple of times per month. This team was also spread across several time zones, with about half of them older, established members who've been with the company for a long time and know each other pretty well, and the other half entirely new faces.

Second, the Community Champions program started during the pandemic. As a result, program participants from around the world had very little opportunity to meet each other and meld as a group. I wanted to find out more about what they wanted to discuss and learn, so I could use that to plan out the first few months of programming. I also wanted to give them a chance to talk with each other about their interests.

I ran these scenarios past a friend of mine, the fabulous Donna Benjamin, and she suggested an extremely useful tool from the Open Practice Library: Lean Coffee.

What is Lean Coffee?

Lean Coffee is a structured but agenda-less meeting. Participants gather, build their own agenda, and begin talking.

You start with a Kanban board with To Discuss, Discussing, and Discussed columns. Optionally, you can add an Actions section to write down anything that needs following up after the meeting. Participants get a fixed amount of time to brainstorm topics to talk about. Each topic is then written on a sticky note.

All sticky notes are placed in the To Discuss column and clustered by grouping similar topics together, also known as affinity mapping. This is followed by a round of dot voting, which is exactly what it sounds like: voting by placing a dot on a sticky note to indicate your choice. At the end of the process, you might have a board that looks something like this:

Image by:

(Angie Byron, CC BY-SA 4.0)

The most popular topic moves to the Discussing column, then you set a timer for 5-7 minutes and start the group talking. When the timer goes off, everyone votes again on whether to keep going for another few minutes or switch to the next topic. Stickies are moved and topics are discussed accordingly until the allotted meeting time runs out.

Lean Coffee all but guarantees you'll be harnessing the passion and interest of the group, since these are all going to be things they want to talk about. It's also great because the format is extremely lightweight; you can do it with just a whiteboard, a few pens, and some sticky notes in person or with virtual tools that emulate them, such as Scrumbler.

So how might Lean Coffee work in practice?

Real-time remote team building

This is an example of utilizing the Lean Coffee pattern as intended: to foster a conversation among people who may or may not know each other, but for whom you want to try and surface commonalities and spark discussion.

First, choose a time that works best for your team as a whole. This is critical, because if your focus is on building up team cohesion, you do not want some of your team to feel left out. Lean Coffee can stretch or shrink to fill whatever time you have available, but an hour is a good amount of time.

Since you will be using virtual tools, you may want to begin with a short icebreaker exercise to get folks used to the tool and the voting process. For example, you could put up a couple of open-ended questions on sticky notes and allow people to vote on which one they want to answer as a group. Next, run the Lean Coffee exercise as documented: Set a timer, let each person drag over sticky notes and write down their topics, cluster the responses together, then discuss!

During our first run of these, we talked about everything from where we would most like to travel to what our favorite kinds of food were and what hobbies we had. There were insights and laughter, and the team came away feeling it was a really positive experience.

Asynchronous topic gathering and discussion

With the Champions, it is impossible to get everyone on a call at the same time due to the international nature of the group, everyone's personal schedules, and so on. But it's possible to run a modified, more drawn-out version of Lean Coffee for this situation.

First, walk folks through how the tool generally works and what they should do. You can do this live on a video meeting or via a prerecorded video. For topic gathering, set the time limit for something like a week to 10 days to accommodate peoples' various schedules, vacations, sick kids, and anything else that might come up. Expect that because the deadline is extended and there's no real-time component, you will need to send one or two gentle reminders to folks to participate. You should also expect that, despite your efforts, there will be some drop-off in participation.

After topics are in, you can cluster them as you do in the synchronous version. Since you can't talk through this in real time as a group, you will probably want to add headers above each cluster to explain your thinking. Some of our clusters were Best Practices, Learn about Product X, and so on. For clusters that already clearly have consensus, you can wait for the voting process or just go ahead and move those to the To Be Discussed column proactively.

Repeat the "timer" (and the gentle reminders) with the voting process, leaving a week or so for folks to get their votes in. If there are a large number of topics generated, you may want to allow each member more than one vote—up to three, for example. By taking the highest-clustering and highest-voted topics, you effectively have a backlog of meeting topics. Set a meeting schedule over as many weeks or months are needed and work your way down the topic list for each meeting.

This approach is definitely not as dynamic and exciting as the real-time version of Lean Coffee, but the basic mechanics serve the same purpose of ensuring that the members are talking about things that are relevant and interesting to them. This is also a useful approach if you, like me, need to track down one or more people (such as product managers or engineers) in order to have a useful discussion about a given topic.

Lean Coffee for remote and in-person collaboration

Lean Coffee is a versatile, fun, and engaging way of allowing relative strangers to meet each other, interact, find common ground, and learn from each other. Its simplicity allows it to be modified for a variety of remote and in-person gatherings and used for a variety of purposes.

There are dozens of other patterns like this in the Open Practice Library, so you should definitely check it out!

This idea from the Open Practice Library can re-energize your in-person and remote meetings.

Image by:

Pixabay. CC0.

Community management This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

My favorite open source tool for using crontab

Fri, 05/06/2022 - 15:00
My favorite open source tool for using crontab Kevin Sonney Fri, 05/06/2022 - 03:00

Automation is a hot topic right now. In my day job as a site reliability engineer (SRE), part of my remit is to automate as many repeating tasks as possible. But how many of us do that in our daily, not-work, lives? This year, I am focused on automating away the toil so that we can focus on the things that are important.

One of the earliest things I learned about as a fledgling systems administrator was cron. Cron is used far and wide to do things like rotate logs, start and stop services, run utility jobs, and more. It is available on almost all Unix and Linux systems, and is something every sysadmin I know uses to help manage services and servers. Cron can run any console application or script automatically, which makes it very, very flexible.

Image by:

(Kevin Sonney, CC BY-SA 4.0)

I have used cron to fetch email, run filtering programs, make sure a service is running, interact with online games like Habitica, and a lot more.

Using cron the traditional way

To get started with cron, you can simply type crontab -e at the command line to open up an editor with the current crontab (or “cron table”) file for yourself (if you do this as root, you get the system crontab). This is where the job schedule is kept, along with when to run things. David Both has written extensively on the format of the file and how to use it, so I'm not going to cover that here. What I am going to say is that for new users, it can be a bit scary, and getting the timing set up is a bit of a pain.

Introducing crontab-ui

There are some fantastic tools out there to help with this. My favorite is crontab-ui, a web frontend written in Node.js that helps manage the crontab file. To install and start crontab-ui for personal use, I used the following commands.

# Make a backup
crontab -l > $HOME/crontab-backup
# Install Crontab UI
npm install -g crontab-ui
# Make a local database directory
mkdir $HOME/crontab-ui
# Start crontab-ui
CRON_DB_PATH=$HOME/crontab-ui crontab-ui

Once this is done, simply point your web browser at http://localhost:8000 and you'll get the crontab-ui web interface. The first thing to do is click “Get from Crontab” to load any existing jobs you may have. Then click Backup so that you can roll back any changes you make from here on out.

 

Image by:

(Kevin Sonney, CC BY-SA 4.0)

 

Adding and editing cron jobs is very simple. Add a name, the full command you want to run, and the time (using cron syntax), and save. As a bonus, you can also capture logs, and set up the mailing of job status to your email of choice.

When you're finished, click Save to Crontab.

I personally really love the logging feature. With crontab-ui, you can view logs at the click of a button, which is useful when troubleshooting.

One thing I do recommend is to not run crontab-ui all the time, at least not publicly. While it does have some basic authentication abilities, it really shouldn't be exposed outside your local machine. I don't need to edit my cron jobs frequently (anymore), so I start and stop it on demand.

Try crontab-ui the next time you need to edit your crontab!

Crontab-ui is a web frontend written in Node.js that helps manage the crontab file.

Image by:

Image by Mapbox Uncharted ERG, CC-BY 3.0 US

Automation Linux This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Boost the power of C with these open source libraries

Thu, 05/05/2022 - 15:00
Boost the power of C with these open source libraries Joël Krähemann Thu, 05/05/2022 - 03:00

The GLib Object System (GObject) is a library providing a flexible and extensible object-oriented framework for C. In this article, I demonstrate using the 2.4 version of the library.


The GObject libraries extend the ANSI C standard with typedefs for common types (a short usage example follows the list), such as:

  • gchar: a character type
  • guchar: an unsigned character type
  • gunichar: a fixed 32 bit width unichar type
  • gboolean: a boolean type
  • gint8, gint16, gint32, gint64: 8, 16, 32, and 64 bit integers
  • guint8, guint16, guint32, guint64: unsigned 8, 16, 32, and 64 bit integers
  • gfloat: an IEEE Standard 754 single precision floating point number
  • gdouble: an IEEE Standard 754 double precision floating point number
  • gpointer: a generic pointer type
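
These typedefs work anywhere their plain C counterparts would. Here is a tiny, illustrative sketch (the variable names are mine) that also shows the matching format macro for printing a gint64 portably:

#include <glib.h>

int
main()
{
  gboolean ready = TRUE;
  const gchar *name = "GObject";
  gint64 big_number = G_MAXINT64;

  // g_print works like printf; G_GINT64_FORMAT expands to the
  // right conversion specifier for gint64 on this platform
  g_print("%s ready=%d big=%" G_GINT64_FORMAT "\n",
    name, ready, big_number);

  return 0;
}

It should build with something like gcc types.c $(pkg-config --cflags --libs glib-2.0).
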
Function pointers

GObject also introduces a type and object system with classes and interfaces. This is possible because the ANSI C language understands function pointers.

To declare a function pointer, you can do this:

void (*my_callback)(gpointer data);

But first, you need to assign the my_callback variable:

void my_callback_func(gpointer data)
{
  //do something
}

my_callback = my_callback_func;

The function pointer my_callback can be invoked like this:

gpointer data;
data = g_malloc(512 * sizeof(gint16));
my_callback(data);

Object classes

The GObject base class consists of 2 structs (GObject and GObjectClass) which you inherit to implement your very own objects.

You embed GObject and GObjectClass as the first struct field:

struct _MyObject
{
  GObject gobject;
  //your fields
};

struct _MyObjectClass
{
  GObjectClass gobject;
  //your class methods
};

GType my_object_get_type(void);

The object’s implementation contains fields, which might be exposed as properties. GObject provides a solution to private fields, too. This is actually a struct in the C source file, instead of the header file. The class usually contains function pointers only.
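
To see how these pieces fit together, here is a minimal, hand-written sketch of an object that exposes one string property. The type and property names are only illustrations (not from any particular project), and error handling is kept to a minimum:

/* my_object.c: mirrors the structs shown above, plus a field that
 * backs the "my-name" property used in the examples below */
#include <glib-object.h>

typedef struct _MyObject MyObject;
typedef struct _MyObjectClass MyObjectClass;

struct _MyObject
{
  GObject gobject;

  gchar *my_name;   /* field backing the "my-name" property */
};

struct _MyObjectClass
{
  GObjectClass gobject;
};

enum { PROP_0, PROP_MY_NAME };

G_DEFINE_TYPE(MyObject, my_object, G_TYPE_OBJECT)

static void
my_object_set_property(GObject *gobject, guint prop_id,
  const GValue *value, GParamSpec *pspec)
{
  MyObject *self = (MyObject *) gobject;

  switch(prop_id){
  case PROP_MY_NAME:
    g_free(self->my_name);
    self->my_name = g_value_dup_string(value);
    break;
  default:
    G_OBJECT_WARN_INVALID_PROPERTY_ID(gobject, prop_id, pspec);
  }
}

static void
my_object_get_property(GObject *gobject, guint prop_id,
  GValue *value, GParamSpec *pspec)
{
  MyObject *self = (MyObject *) gobject;

  switch(prop_id){
  case PROP_MY_NAME:
    g_value_set_string(value, self->my_name);
    break;
  default:
    G_OBJECT_WARN_INVALID_PROPERTY_ID(gobject, prop_id, pspec);
  }
}

static void
my_object_class_init(MyObjectClass *klass)
{
  GObjectClass *gobject_class = G_OBJECT_CLASS(klass);

  gobject_class->set_property = my_object_set_property;
  gobject_class->get_property = my_object_get_property;

  /* a complete implementation would also override finalize
   * to g_free(self->my_name) */

  g_object_class_install_property(gobject_class, PROP_MY_NAME,
    g_param_spec_string("my-name", "my name", "The object's name",
      NULL, G_PARAM_READWRITE));
}

static void
my_object_init(MyObject *self)
{
  self->my_name = NULL;
}

With that in place, calling g_object_new(my_object_get_type(), NULL) creates an instance whose "my-name" property can be read and written with the g_object_get() and g_object_set() calls shown below.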

An interface can’t be derived from another interface and is implemented as follows:

struct _MyInterface
{
  GTypeInterface ginterface;
  //your interface methods
};

Properties are accessed by g_object_get() and g_object_set() function calls. To get a property, you must provide the return location of the specific type. It’s recommended that you initialize the return location first:

gchar *str;

str = NULL;

g_object_get(gobject,
  "my-name", &str,
  NULL);

Or you might want to set the property:

g_object_set(gobject,
  "my-name", "Anderson",
  NULL);

The libsoup HTTP library

The libsoup project provides an HTTP client and server library for GNOME. It uses GObjects and the glib main loop to integrate with GNOME applications, and also has a synchronous API for use in command-line tools. First, create a libsoup session with an authentication callback specified. You can also make use of cookies.

SoupSession *soup_session;
SoupCookieJar *jar;

soup_session = soup_session_new_with_options(SOUP_SESSION_ADD_FEATURE_BY_TYPE, SOUP_TYPE_AUTH_BASIC,
  SOUP_SESSION_ADD_FEATURE_BY_TYPE, SOUP_TYPE_AUTH_DIGEST,
  NULL);

jar = soup_cookie_jar_text_new("cookies.txt",
  FALSE);    

soup_session_add_feature(soup_session, jar);
g_signal_connect(soup_session, "authenticate",
  G_CALLBACK(my_authenticate_callback), NULL);

Then you can create an HTTP GET request like the following:

SoupMessage *msg;
SoupMessageHeaders *response_headers;
SoupMessageBody *response_body;
SoupMessageHeadersIter iter;
const char *name;
const char *value;
GSList *cookie;
guint status;
GError *error;

msg = soup_form_request_new("GET",
  "http://127.0.0.1:8080/my-xmlrpc",
  NULL);

status = soup_session_send_message(soup_session,
  msg);

response_headers = NULL;
response_body = NULL;

g_object_get(msg,
  "response-headers", &response_headers,
  "response-body", &response_body,
  NULL);

g_message("status %d", status);
cookie = NULL;
soup_message_headers_iter_init(&iter,
  response_headers);

while(soup_message_headers_iter_next(&iter, &name, &value)){    
  g_message("%s: %s", name, value);
}

g_message("%s", response_body->data);
if(status == 200){
  cookie = soup_cookies_from_response(msg);
  while(cookie != NULL){
    char *cookie_name;
    cookie_name = soup_cookie_get_name(cookie->data);
    //parse cookies
    cookie = cookie->next;
  }
}

The authentication callback is called as the web server asks for authentication.

Here’s an example implementation of that callback:

#define MY_AUTHENTICATE_LOGIN "my-username"
#define MY_AUTHENTICATE_PASSWORD "my-password"

void my_authenticate_callback(SoupSession *session,
  SoupMessage *msg,
  SoupAuth *auth,
  gboolean retrying,
  gpointer user_data)
{
  g_message("authenticate: ****");
  soup_auth_authenticate(auth,
                         MY_AUTHENTICATE_LOGIN,
                         MY_AUTHENTICATE_PASSWORD);
}

A libsoup server

For basic HTTP authentication to work, you must specify a callback and server context path. Then you add a handler with another callback.

This example listens to any IPv4 address on localhost port 8080:

SoupServer *soup_server;
SoupAuthDomain *auth_domain;
GSocket *ip4_socket;
GSocketAddress *ip4_address;
MyObject *my_object;
GError *error;

soup_server = soup_server_new(NULL);
auth_domain = soup_auth_domain_basic_new(SOUP_AUTH_DOMAIN_REALM, "my-realm",
  SOUP_AUTH_DOMAIN_BASIC_AUTH_CALLBACK, my_xmlrpc_server_auth_callback,
  SOUP_AUTH_DOMAIN_BASIC_AUTH_DATA, my_object,
  SOUP_AUTH_DOMAIN_ADD_PATH, "my-xmlrpc",
  NULL);

soup_server_add_auth_domain(soup_server, auth_domain);
soup_server_add_handler(soup_server,
  "my-xmlrpc",
  my_xmlrpc_server_callback,
  my_object,
  NULL);

ip4_socket = g_socket_new(G_SOCKET_FAMILY_IPV4,
  G_SOCKET_TYPE_STREAM,
  G_SOCKET_PROTOCOL_TCP,
  &error);

ip4_address = g_inet_socket_address_new(g_inet_address_new_any(G_SOCKET_FAMILY_IPV4),
  8080);
error = NULL;
g_socket_bind(ip4_socket,
  ip4_address,
  TRUE,
  &error);
error = NULL;
g_socket_listen(ip4_socket, &error);

error = NULL;
soup_server_listen_socket(soup_server,
  ip4_socket, 0, &error);

In this example code, there are two callbacks. One handles authentication, and the other handles the request itself.

Suppose you want a web server to allow a login with the credentials username my-username and the password my-password, and to set a session cookie with a random universally unique identifier (UUID) string.

gboolean my_xmlrpc_server_auth_callback(SoupAuthDomain *domain,
  SoupMessage *msg,
  const char *username,
  const char *password,
  MyObject *my_object)
{
  if(username == NULL || password == NULL){
    return(FALSE);
  }

  if(!strcmp(username, "my-username") &&
     !strcmp(password, "my-password")){
    SoupCookie *session_cookie;
    GSList *cookie;
    gchar *security_token;
    cookie = NULL;

    security_token = g_uuid_string_random();
    session_cookie = soup_cookie_new("my-srv-security-token",
      security_token,
      "localhost",
      "my-xmlrpc",
      -1);

     cookie = g_slist_prepend(cookie,
       session_cookie);  
     soup_cookies_to_request(cookie,
       msg);
    return(TRUE);
  }
  return(FALSE);
}

A handler for the context path my-xmlrpc:

void my_xmlrpc_server_callback(SoupServer *soup_server,
  SoupMessage *msg,
  const char *path,
  GHashTable *query,
  SoupClientContext *client,
  MyObject *my_object)
{
  GSList *cookie;
  cookie = soup_cookies_from_request(msg);
  //check cookies
}

A more powerful C

I hope my examples show how the GObject and libsoup projects give C a very real boost. Libraries like these extend C in a literal sense, and by doing so they make C more approachable. They do a lot of work for you, so you can turn your attention to inventing amazing applications in the simple, direct, and timeless C language.

GObject and libsoup do a lot of work for you, so you can turn your attention to inventing amazing applications in C.

Image by:

Image from Unsplash.com, Creative Commons Zero 

Programming This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Experiment with containers and pods on your own computer

Thu, 05/05/2022 - 15:00
Experiment with containers and pods on your own computer Seth Kenlon Thu, 05/05/2022 - 03:00

In the TV show Battlestar Galactica, the titular mega-ship didn't actually do a whole lot. It served as a stalwart haven for its crew, a central point of contact for strategy and orchestration, and a safe place for resource management. However, the Caprican Vipers, one-person self-contained space vessels, went out to deal with evil Cylons and other space-borne dangers. They never sent just one or two Vipers out, either. They sent lots of them. Many redundant ships with essentially the same capabilities and purpose, but thanks to their great agility and number, they always managed to handle whatever problem threatened the Battlestar each week.

If you think you're sensing a developing analogy, you're right. The modern "cloud" is big and hulking, an amalgamation of lots of infrastructure spread over a great distance. It has great power, but you'd be wasting much of its capability if you treated it like a regular computer. When you want to handle lots of data from millions of input sources, it's actually more efficient to bundle up your solution (whether that takes the form of an application, website, database, server, or something else) and send out tiny images of that solution to deal with clusters of data. These, of course, would be containers, and they're the workforce of the cloud. They're the little solution factories you send out to handle service requests, and because you can spawn as many as you need based on the requests coming in at any given time, they're theoretically inexhaustible.

Containers at home

If you don't have a lot of incoming requests to deal with, you might wonder what benefit containers offer to you. Using containers on a personal computer does have its uses, though.

[ Download our new guide: Containers and pods 101 eBook ]

Containers as virtual environments

With tools like Podman, LXC, and Docker, you can run containers the same way you might have historically run virtual machines. Unlike a virtual machine, though, containers don't require the overhead of emulated firmware and hardware.

You can download container images from public repositories, launch a minimalist Linux environment, and use it as a testing ground for commands or development. For instance, say you want to try an application you're building on Slackware Linux. First, search for a suitable image in the repository:

$ podman search slackware

Then select an image to use as the basis for your container:

$ podman run -it --name slackware vbatts/slackware
sh-4.3# grep -i ^NAME\= /etc/os-release
NAME=Slackware

Containers at work

Of course, containers aren't just minimal virtual machines. They can be highly specific solutions for very specific requirements. If you're new to containers, it might help to start with one of the most common rites of passage for any new sysadmin: Starting up your first web server but in a container.

First, obtain an image. You can search for your favorite distribution using the podman search command or just search for your favorite httpd server. When using containers, I tend to trust the same distributions I trust on bare metal.

Once you've found an image to base your container on, you can run your image. However, as the term suggests, a container is contained, so if you just launch a container, you won't be able to reach the standard HTTP port. You can use the -p option to map a container port to a standard networking port:

$ podman run -it -p 8080:80 docker.io/fedora/apache:latest

Now take a look at port 8080 on your localhost:

$ curl localhost:8080
Apache

Success.

Learn more

Containers hold much more potential than just mimicking virtual machines. You can group them in pods, construct automated deployments of complex applications, launch redundant services to account for high demand, and more. If you're just starting with containers, you can download our latest eBook to study up on the technology and even learn to create a pod so you can run WordPress and a database.

Start exploring the essentials of container technology with this new downloadable guide.

Image by:

opensource.com

Containers Kubernetes This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How I manage my own virtual network with ZeroTier

Wed, 05/04/2022 - 15:00
How I manage my own virtual network with ZeroTier Kevin Sonney Wed, 05/04/2022 - 03:00

Automation is a hot topic right now. In my day job as a site reliability engineer (SRE), part of my remit is to automate as many repeating tasks as possible. But how many of us do that in our daily, not-work, lives? This year, I am focused on automating away the toil so that we can focus on the things that are important.

While automating everything, I ran into some difficulty with remote sites. I'm not a networking person so I started to look at my options. After researching the various virtual private networks (VPN), hardware endpoints, firewall rules, and everything that goes into supporting multiple remote sites, I was confused, grumpy, and frustrated with the complexity of it all.


Then I found ZeroTier. ZeroTier is an encrypted virtual network backbone, allowing multiple machines to communicate as if they were on a single network. The code is all open source, and you can self-host the controller or use the ZeroTierOne service with either free or paid plans. I'm using their free plan right now, and it is robust, solid, and very consistent.

Because I'm using the web service, I'm not going to go into detail about running the controller and root services. ZeroTier has a complete reference on how to do that in their documentation, and it's very good.

After creating my own virtual network in the web user interface, the client installation is almost trivial. ZeroTier has packages for APT, RPM, FreeBSD, and many other platforms, so getting the first node online takes little effort.

Once installed, the client connects to the controller service and generates a unique ID for the node. On Linux, you join a network with the zerotier-cli join NETWORKID command.

$ sudo zerotier-cli info
200 info 469584783a 1.x.x ONLINE

You can also use zerotier-cli to get a listing of connected and available nodes, change network settings, and leave networks.

Image by:

(Kevin Sonney, CC BY-SA 4.0)

After joining a network, you do have to approve access for the node, either through the web console or by making a call to the application programming interface (API). Both methods are documented on the ZeroTier site. After you have two nodes connected, connecting to each other — no matter where you are or what side of any firewalls you may be on — is exactly what you would expect if you were in the same building on the same network. One of my primary use cases is for remote access to my Home Assistant setup without needing to open up firewall ports or expose it to the internet (more on my Home Assistant setup and related services later).

One thing I did set up myself is a Beta ZeroNDS Service for internal DNS. This saved me a lot of complexity in managing my own name service or having to create public records for all my private hosts and IP addresses. I found the instructions very straightforward, and I was able to have a DNS server for my private network up in about five minutes. Each client has to allow ZeroTier to set the DNS, which is very simple in the GUI clients. To enable it for use on Linux clients, use:

$ sudo zerotier-cli set NETWORKID allowDNS=1

No other updates are needed as you add and remove hosts, and it "just works."

$ sudo zerotier-cli info
200 info 469584845a 1.x.y ONLINE
$ sudo zerotier-cli join 93afae596398153a
200 join OK
$ sudo zerotier-cli peers
200 peers
<ztaddr> <ver> <role> <lat> <link> <TX> <RX> <path>
61d294b9cb - PLANET 112 DIRECT 7946 2812 50.7.73.34/9993
62f865ae71 - PLANET 264 DIRECT 7946 2681 50.7.76.38/9993
778cde7190 - PLANET 61 DIRECT 2944 2901 103.195.13.66/9993
93afae5963 1.x LEAF 77 DIRECT 2945 2886 35.188.31.177/41848
992fcf1db7 - PLANET 124 DIRECT 7947 2813 195.181.173.159/9993

I've barely scratched the surface of the features here. ZeroTier also allows for bridging between ZeroTier networks, advanced routing rules, and a whole lot more. They even have a Terraform provider and a listing of Awesome Zerotier Things. As of today, I'm using ZeroTier to connect machines across four physical sites, three of which are behind NAT firewalls. ZeroTier is simple to set up, and almost completely painless to manage.

ZeroTier is an encrypted virtual network backbone, allowing multiple machines to communicate as if they were on a single network.

Image by:

Jonas Leupe on Unsplash

Networking Automation This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Automate and manage multiple devices with Remote Home Assistant

Tue, 05/03/2022 - 15:00
Automate and manage multiple devices with Remote Home Assistant Kevin Sonney Tue, 05/03/2022 - 03:00

Automation is a hot topic right now. In my day job as an SRE, part of my remit is to automate as many repeating tasks as possible. But how many of us do that in our daily, not-work, lives? This year, I am focused on automating away the toil so that we can focus on the things that are important.

There are a lot of guides out there on Setting Up Home Assistant, but what if you have multiple Home Assistant installations (like I do), and want to display and control them all from a single, central Home Assistant?

There is an amazing add-on called Remote Home Assistant (https://github.com/custom-components/remote_homeassistant) that makes this an absolute breeze. And it really helps me manage and automate things without having to set up any complex software (although I have done this with MQTT in the past — it was a challenge).

Image by:

(Kevin Sonney, CC BY-SA 4.0)


The easiest way to set up Remote Home Assistant is to install the Home Assistant Community Store (HACS) on both HASS installations. HACS is an absolutely massive collection of third-party add-ons for Home Assistant. The instructions are very straightforward and cover most use cases, including Home Assistant OS (which is my central node) and Home Assistant Core (one of my remote nodes). It installs as a new Integration, so you can add it like any other integration. You must be able to log into GitHub for HACS to work, but HACS walks you through that as part of the configuration flow. After it's complete, it loads all the known add-on repositories. To see the status of it, click the new HACS option in the navigation menu on the left.

Image by:

(Kevin Sonney, CC BY-SA 4.0)

Select Integrations and search for Remote Home Assistant when it has completed loading all the store information. Install the add-on with the Install button, and restart Home Assistant. When the restart is complete, you have a new custom integration available, which can be added like any other.

On the remote node (“lizardhaus”), you need to generate a long-lived token, and then add the Remote Home Assistant integration. Select Setup as remote node and that's all you need to do.

On the central node (“homeassistant”), the configuration flow is different. Add the integration as before, but do not create an access token. Select Add a remote node and click Submit. You are asked for the site name, the address (which can be a name or an IP address), the port, and the access token generated on the remote node. You can enable or disable SSL (and I STRONGLY recommend setting up SSL on the remote if it's exposed to the internet). After it connects, it prompts you for additional information, such as a prefix for the entities from the remote node (I like to include a trailing "_" character), what entities to fetch, and what to include and exclude. You can get events that can be triggered remotely, like turning on and off switches.

Image by:

(Kevin Sonney, CC BY-SA 4.0)

After that, the remote items appear to home assistant like any other item. And you can control them in the same way, as long as you added the correct triggers and entities.

Remote Home Assistant is really useful if you have devices like Bluetooth Low Energy plant sensors that are too far away from the main HASS machine. You can place a Raspberry Pi with HassOS near the plants then use Remote Home Assistant to put them in your central dashboard, and get an alert when they need watering, and so on. Overall, linking together multiple Home Assistant configurations is surprisingly easy, and VERY helpful.

Link together multiple Home Assistant devices with this centralized control panel.

Image by:

27707 via Pixabay, CC0. Modified by Jen Wike Huger.

Automation Tools This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How I use the Bacula GUI for backup and recovery

Tue, 05/03/2022 - 15:00
How I use the Bacula GUI for backup and recovery Rob Morrison Tue, 05/03/2022 - 03:00

Today, when best practices for backup and recovery are more important than ever before, it's good to know that high-end fully open source enterprise backup solutions exist for even the largest organizations. Perhaps the most powerful open source solution in its class is Bacula, highly scalable software for backup, recovery, and data verification. It is a mature yet still actively developing project used by MSPs, defense organizations, ISVs, and e-commerce companies worldwide, and it runs on many different Linux flavors. Bacula has a thriving community, and many Linux enthusiasts use it to provide a strong level of data protection.

With the many severe disruptions that ransomware causes today, it's critical that the client system being backed up is never aware of storage targets and has no credentials for accessing them. This is true in Bacula's case, and in addition:

  • Storage and Storage Daemon hosts are dedicated systems, strictly secured, allowing only Bacula-related traffic and admin access and nothing else.
  • Bacula's "Director" (core management module) is a dedicated system with the same restrictive access.

Bacula has plenty of additional configuration options to tune backups to user needs. It functions in networks and can back up both remote and local hosts. For first-time users, it can look complex, but fortunately, the Bacula Project also provides the Baculum web interface to ease administration. Many Linux users are more than happy to rely on Bacula's command-line interface to exploit its considerable range of capabilities, but sometimes it's good to have an effective GUI, too. That's where the open source Baculum comes in.

Baculum

Baculum's installation process is reasonably simple because its repositories provide binary packages for popular Linux distributions. After installation, you have access to two wizards:

  • The Baculum API - a REST API component for working with Bacula data.
  • The Baculum Web component - the web interface itself.

The Baculum API is installed on each host with Bacula components that you want to manage from the web interface. Baculum Web is usually a single instance that connects to all Baculum API hosts and makes it possible to manage all of them. This architecture fits the Bacula network architecture well because you can manage all Bacula hosts from one interface. It's important to know that the web interface does not store any Bacula-specific configuration from any host; it manages them by sending API requests instead. When you modify the configuration or run Bacula actions in the interface, they happen in real time. When you click the save configuration button, the change is applied immediately on the targeted hosts.

Below is a sample Bacula and Baculum topology.

Image by:

(Rob Morrison, CC BY-SA 4.0)

One disadvantage of this approach is that you need to install one Baculum API instance on each Bacula host that you want to manage. If there are many servers to back up, it is possible to automate the installation process using an application-deployment tool like Ansible.

In my case, I have a much simpler topology with only one host managed by Baculum. My topology looks like the one below.

Image by:

(Rob Morrison, CC BY-SA 4.0)

You can decide what Bacula resources to share on each Baculum API host. You can set the API hosts to do configuration work, access the Bacula catalog database, run Bacula console commands, or any combination.

After installing the web interface in the Bacula environment, you see a dashboard page like this:

Image by:

(Rob Morrison, CC BY-SA 4.0)

Create a backup job

To define a new backup job, go to the job page, where you'll find wizards for creating backup, copy, or migrate jobs, as well as a custom job form. For this demonstration, I chose the backup job wizard, which displays the first step:

Image by:

(Rob Morrison, CC BY-SA 4.0)

First, type the new job name and an optional description. In the second step, decide what to back up. For this example, I chose a Bacula client and a FileSet, which defines the paths to be backed up. Usually, there aren't any FileSet options to choose from yet in this window, but you can create one with the Add new fileset button in the wizard. To define paths, I decided to browse the client filesystem and select paths in the drag-and-drop browser, as in the image below.

Image by:

(Rob Morrison, CC BY-SA 4.0)

Once the FileSet is ready, the next step is to select where to save the backed-up data for this job. Select a storage location and a volume pool.

Image by:

(Rob Morrison, CC BY-SA 4.0)

As with FileSets, you have an option to create a new pool. In this example, I chose an existing volume pool.

In the next step are job-specific options like choosing the job level (full, incremental, differential, etc.), job priority, and a few other settings.

Image by:

(Rob Morrison, CC BY-SA 4.0)

On the next wizard page, specify when to run this backup job. Backups are usually run periodically, and here you can choose a schedule for this job. If you don't have a schedule, you can create it in this interface:

Image by:

(Rob Morrison, CC BY-SA 4.0)

The last wizard step is just a summary of all values selected in the previous steps.

Image by:

(Rob Morrison, CC BY-SA 4.0)

Review all the values, and if they look correct, create the new job.

Run the backup

OK, you have a new backup job. To run the initial backup, you may choose to start it manually using the Run job button. There is a useful capability in the Run job window to estimate a job before running it. Run this estimation to know in advance how many files and how many bytes will be backed up by this job.

Image by:

(Rob Morrison, CC BY-SA 4.0)

After running the job, you move to a job view page where you can see backup progress from the client's perspective.

Image by:

(Rob Morrison, CC BY-SA 4.0)

You can track job status from three places on the interface:

  • The Bacula client (shown above).
  • The Bacula director component side.
  • The storage daemon perspective.

Here you can see the job progress on the director and storage daemon side:

Image by:

(Rob Morrison, CC BY-SA 4.0)

Image by:

(Rob Morrison, CC BY-SA 4.0)

The backup job completes.

Restore data

Of course, you must be able to restore the backed-up data. Baculum provides a Restore wizard in the primary sidebar menu. After opening it, the first step is to select the backup client whose data you want to restore.

Image by:

(Rob Morrison, CC BY-SA 4.0)

Select the client and go to the second step. Here you see all backups from that client. Your backup is at the top, so it is easy to choose. However, if you want to find a past backup, search the backups data grid. There is also an option to find a backup by filename, with or without a path.

Image by:

(Rob Morrison, CC BY-SA 4.0)

Select the backup and go to file selection on the third restore wizard step. Here, in the file browser, choose directories and files to restore. The browser also has an area to select a specific file version if it exists in other backups.

Image by:

(Rob Morrison, CC BY-SA 4.0)

The next wizard step defines the destination where the restore will save the data. By default, the client from which the backup originates is selected, but you can change that to restore to a different host than the original. You can also define an absolute path on the client to restore the data. The media required to complete this restore is displayed. This is very useful for a backup tape device operator to prepare for the restore job. Personally, I use disk media, and my volumes are available for the storage daemon all the time.

Image by:

(Rob Morrison, CC BY-SA 4.0)

The next step offers the restore options, such as replacing a policy for existing files on the filesystem or file relocation fields. I keep them untouched and go to the summary step before running the restore.

Image by:

(Rob Morrison, CC BY-SA 4.0)

In the restore job—just like in the backup job—you see the running restore job's progress. After completion, there is a summary of the entire process.

Image by:

(Rob Morrison, CC BY-SA 4.0)

That's just about it. The backup and restore are done. The process may be a little simpler with other tools, but Bacula offers Linux enthusiasts hundreds of very useful options, which limits how much the interface can be simplified, and most Bacula users wouldn't want it any simpler.

Copy jobs

Besides doing traditional backup and restore jobs, Bacula also provides a few other job types. One of them is Copy job, which copies backups between storage devices from one pool of volumes to another. One storage device can be a disk, and another can be a tape or tape library. Copy job reads data from file volumes and sends it to tape devices for saving on magnetic tapes. Bacula users can configure a backup D2D2T strategy (disk-to-disk-to-tape). Source and destination storage can be of different types (disk and tape), but it works just as well when copying backup jobs between the same device types.

Baculum has full support for copy jobs, including configuring copy jobs and ending with restoring data directly from copy jobs. Configure a copy job using the copy job wizard visible in the image below.

Image by:

(Rob Morrison, CC BY-SA 4.0)

After typing the new copy job name, choose the source storage and source volume pool. This is the storage that reads data when the copy job runs.

Image by:

(Rob Morrison, CC BY-SA 4.0)

The third wizard step specifies which jobs to copy. In other words, you define the selection criteria used to choose the backups that will be copied. You can select backups by criteria such as:

  • Job name
  • Client
  • Volume
  • Smallest volume in the pool
  • Oldest volume in the pool
  • SQL query
  • Copy all uncopied jobs so far from the pool

In this example, I chose a selection by job name.

Image by:

(Rob Morrison, CC BY-SA 4.0)

Select the destination storage and pool in the next step. This storage writes backups to the destination pool when you run the copy job.

Image by:

(Rob Morrison, CC BY-SA 4.0)

In the penultimate step are a couple of options, such as the maximum number of spawned jobs. You can also set a schedule to run the copy job periodically.

Image by:

(Rob Morrison, CC BY-SA 4.0)

After saving the wizard, run the copy job in the same place where you started the backup job. You can see the live updated job log output.

Image by:

(Rob Morrison, CC BY-SA 4.0)

Wrap up

Done! You have performed a backup job, restored a job, and created a copy job.

There are two Baculum functions that I think many folks will find useful.

First, its simple interface enables the user to administer Bacula from any mobile device. This can be crucial for cases when you are outside the office and somebody from the organization sends a text message like: "Hey! I accidentally deleted an important report file and need it urgently. Are you able to restore it to my computer?" You could do this restore using a mobile phone and the same wizard steps described above.

The second important function is its multi-user interface with several authentication methods (local user, basic authentication, LDAP, etc.). It enables company employees to use Baculum to back up and restore their own resources without requiring access to any other utilities. You can customize the role-based access control interface for each group of users.

Of course, these options are just the tip of the iceberg regarding Bacula's capabilities with Baculum. Baculum really is about being configurable. I hope you enjoy its benefits and the way it empowers you to make your data safer and your life easier!

Baculum is an open source web application for using Bacula's range of backup and restore jobs.

Linux Tools This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How to make community recognition more inclusive

Mon, 05/02/2022 - 15:00
How to make community recognition more inclusive Ray Paik Mon, 05/02/2022 - 03:00

Giving recognition to someone for a job well done is one of my favorite duties as a community manager. Not only do I get to thank someone, but I also have a chance to highlight a role model for the rest of the community. Recognition also provides an opportunity to celebrate an achievement, like someone helping new community members with onboarding, reducing technical debt, or contributing an exciting new feature.

However, the methods used to identify contributions and recognize them can have unintended consequences. For example, sometimes community managers use charts like the following during recognitions, emphasizing pull requests (PRs) and contributions to code repositories.

Image by:

(Ray Paik, CC BY-SA 4.0)

Image by:

(Ray Paik, CC BY-SA 4.0)

Three problems arise with using these types of data for recognition. First, there's too much focus on contributions in code repositories. In the early days, open source projects attracted mostly developers, so naturally a lot of collaboration was done around code. Now, an increasing number of nondevelopers are participating in communities (for example, through user groups, meetups, user-generated content), and they will be doing most of their work outside repositories. Those contributions don't register on a chart like Annual Merged PRs.

Second, with too much focus on metrics (that is, things that can be measured quantitatively), you may end up rewarding quantity over quality—or even impact. In the Top Contributing Orgs chart above, larger organizations have a clear advantage over smaller organizations, as they have more people available. By recognizing larger organizations for their volume of work or contributions, you may inadvertently make people from smaller organizations feel disenfranchised.

Finally, even though it's not the intent, some people may view these data as a ranking of the importance of individual community members or organizations.

For all these reasons, it's best to avoid relying solely on metrics for community recognition.

Make recognition more meaningful

What are some more inclusive ways to approach community recognition and acknowledge a variety of contribution types? Communication channels like Discord, Internet Relay Chat (IRC), mailing lists, or Slack provide good clues as to which community members are active and what they're passionate about. For example, I'm always amazed to find members who are very generous in answering others' questions and helping newcomers. These contributions don't show up in community dashboards, but it's important to recognize this work and let everyone know that this contribution is valued.

Speaking of community dashboards, they're certainly important tools in open source communities. However, I caution against spending too much time building dashboards. Sooner or later, you will find that not everything is easily measurable, and even if you find a way to quantify something, it often lacks context.

One of the things I do to get more context around the contributions is to schedule coffee chats with community members. These conversations give me an opportunity to learn about why they decided to make the contribution, how much work was involved, others who were also involved, and so on.

When I talk to these members for the first time, I often hear that they feel it's important to find ways to give back to the community, and they're looking for ways to help. Some are even apologetic because they cannot contribute code, and I have to reassure them that code is no longer the only thing that matters in open source. Sometimes these conversations allow me to make connections among community members in the same city or industry, or to find other common interests. Fostering these connections helps strengthen a sense of belonging.

Make recognition more impactful

In addition to finding more activities to recognize, you can also present recognition in ways that have a bigger effect. For example, be timely with kudos when you see a good contribution. A quick DM with a simple thank you can be more effective than something more formal a month or two later. Many people, myself included, tend to stress over sending the right merchandise with recognition, but it's important to remember that swag is not the main motivator for community members' contributions. Recognizing good work and making an effort to reach out goes a long way in making people feel appreciated.

It's also a good idea to give members an opportunity to participate in the recognition process. Once a community reaches a certain size, it's difficult to know everything that's happening. A simple nomination form that community members can submit will surface good contributions that might otherwise go unnoticed. If your community has formal awards for members (for example, awards presented at an annual conference or meetups), involve members in the nomination and voting process. This not only provides an opportunity for more people to participate in the process, but the awards will also be more meaningful to recipients since they come from their peers.

Finally, giving recognition is a vital opportunity to get to know community members and build relationships. Sometimes the recognition process can feel almost transactional: "You did X, so we're going to award you with Y." Taking the time to do personal outreach along with the award will make community members feel more appreciated and strengthen their sense of belonging.

Recognitions build community health

There's a lot of work to be done to improve diversity, inclusion, and belonging in open source communities. Better community recognitions play an essential role in these efforts. Ensuring that all contributions are valued and that everyone feels like they have a home where they're appreciated will encourage members to stay engaged in the community.

Look beyond metrics to ensure that all contributions are valued. When everyone feels like they have a home where they're appreciated, community members will be encouraged to stay engaged.

Image by:

Opensource.com

Community management Diversity and inclusion This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

10 Argo CD best practices I follow

Mon, 05/02/2022 - 15:00
10 Argo CD best practices I follow Noaa Barki Mon, 05/02/2022 - 03:00

My DevOps journey kicked off when I started developing Datree, an open source command-line tool that helps DevOps engineers prevent Kubernetes misconfigurations from reaching production. One year later, seeking best practices and more ways to prevent misconfigurations became my way of life.

That's why, when I first learned about Argo CD, the thought of using Argo without knowing its pitfalls and complications simply didn't make sense to me. After all, configuring it incorrectly can easily cause the next production outage.

In this article, I'll explore some of the best practices of Argo that I've found, and show you how to validate custom resources against these best practices.

Disallow providing an empty retryStrategy

Project: Argo Workflows

Best practice: A user can specify a retryStrategy that dictates how errors and failures are retried in a workflow. Providing an empty retryStrategy (retryStrategy: {}) causes a container to retry until completion, and eventually causes out-of-memory (OOM) issues.
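
As an illustration, here is a minimal Workflow sketch (names and values are hypothetical) that sets a bounded retryStrategy instead of an empty one:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: retry-example-
spec:
  entrypoint: main
  templates:
  - name: main
    retryStrategy:
      limit: "3"            # bounded number of retries instead of retryStrategy: {}
      retryPolicy: OnError
    container:
      image: alpine:3.14
      command: [sh, -c, "echo hello"]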

Ensure that Workflow pods are not configured to use the default service account

Project: Argo Workflows

Best practice: All pods in a workflow run with a service account, which can be specified in workflow.spec.serviceAccountName. If omitted, Argo uses the default service account of the workflow's namespace, which gives the workflow (the pod) the ability to interact with the Kubernetes API server. That means an attacker with access to a single container can abuse Kubernetes through the automatically mounted service account token (AutomountServiceAccountToken). If AutomountServiceAccountToken happens to be disabled, the default service account that Argo uses won't have any permissions, and the workflow fails.

It's recommended to create dedicated user-managed service accounts with the appropriate roles.
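
Here is a minimal sketch, assuming a pre-created service account named workflow-runner (a hypothetical name), of how a Workflow can reference a dedicated service account:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sa-example-
spec:
  serviceAccountName: workflow-runner   # hypothetical account with only the RBAC the workflow needs
  entrypoint: main
  templates:
  - name: main
    container:
      image: alpine:3.14
      command: [echo, hello]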

Set the label 'part-of: argocd' in ConfigMaps

Project: Argo CD

Best practice: When you install Argo CD, its initial configuration consists of a few Services and ConfigMaps. For each specific kind of ConfigMap and Secret resource, there is only a single supported resource name (such as argocd-cm or argocd-rbac-cm). If you need to merge settings, do so before creating the resources. It's important to label your ConfigMap resources with app.kubernetes.io/part-of: argocd; otherwise, Argo CD isn't able to use them.
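
For example, the main Argo CD ConfigMap carries the label like this (a trimmed sketch; the data key shown is just an example setting):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd   # without this label, Argo CD ignores the ConfigMap
data:
  timeout.reconciliation: 180s          # example setting only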

Disable 'FailFast=false' in DAG

Project: Argo Workflows

Best practice: As an alternative to specifying sequences of steps in a Workflow, you can define the workflow as a directed acyclic graph (DAG) by specifying the dependencies of each task. The DAG logic has a built-in fail-fast feature that stops scheduling new steps as soon as it detects that one of the DAG nodes has failed. It then waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag defaults to true. If set to false, it allows the DAG to run all branches to completion (either success or failure), regardless of failed outcomes in other branches of the DAG.
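
The following is a minimal sketch (task names and image are hypothetical) of the pattern this best practice warns about, a DAG template with failFast disabled:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-example-
spec:
  entrypoint: main
  templates:
  - name: main
    dag:
      failFast: false        # runs every branch to completion even after a failure
      tasks:
      - name: task-a
        template: step
      - name: task-b
        dependencies: [task-a]
        template: step
  - name: step
    container:
      image: alpine:3.14
      command: [sh, -c, "true"]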

Ensure Rollout pause step has a configured duration

Project: Argo Rollouts

Best practice: For every Rollout, you can define a list of steps. Each step can have one of two fields: setWeight and pause. The setWeight field dictates the percentage of traffic that should be sent to the canary, and the pause literally instructs the rollout to pause.

Under the hood, the Argo controller uses these steps to manipulate the ReplicaSets during the rollout. When the controller reaches a pause step for a rollout, it adds a PauseCondition struct to the .status.PauseConditions field. If the duration field within the pause struct is set, the rollout does not progress to the next step until it has waited for the value of the duration field. However, if the duration field has been omitted, the rollout might wait indefinitely until the added pause condition is removed.
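
Here is a sketch of canary steps with explicit pause durations (the weights and durations are hypothetical, and the Rollout's replicas, selector, and pod template are omitted for brevity):

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  strategy:
    canary:
      steps:
      - setWeight: 20          # send 20% of traffic to the canary
      - pause:
          duration: 5m         # without a duration, the rollout waits here indefinitely
      - setWeight: 50
      - pause:
          duration: 10m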

Specify Rollout's revisionHistoryLimit

Project: Argo Rollouts

Best practice: The .spec.revisionHistoryLimit is an optional field that indicates the number of old ReplicaSets, which should be retained in order to allow rollback. These old ReplicaSets consume resources in etcd and crowd the output of kubectl get rs. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of Deployment.

By default, 10 old ReplicaSets are kept. However, its ideal value depends on the frequency and stability of new Deployments. More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas are removed. In this case, a new Deployment rollout cannot be undone, because its revision history is removed.
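
A trimmed Rollout sketch (the value shown is a hypothetical choice):

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  replicas: 3
  revisionHistoryLimit: 3      # keep three old ReplicaSets for rollback; 0 removes rollback history entirely
  # selector, template, and strategy omitted for brevity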

Set scaleDownDelaySeconds to 30s

Project: Argo Rollouts

Best practice: When the rollout changes the selector on service, there's a propagation delay before all the nodes update their IP tables to send traffic to the new pods instead of the old. Traffic is directed to the old pods if the nodes have not been updated yet during this delay. In order to prevent packets from being sent to a node that killed the old pod, the rollout uses the scaleDownDelaySeconds field to give nodes enough time to broadcast the IP table changes. If omitted, the Rollout waits 30 seconds before scaling down the previous ReplicaSet.

It's recommended to set scaleDownDelaySeconds to a minimum of 30 seconds in order to ensure that the IP table propagates across the nodes in a cluster. The reason is that Kubernetes waits for a specified time called the termination grace period. By default, this is 30 seconds.
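
A trimmed blue-green Rollout sketch (the Service names are hypothetical; replicas, selector, and template are omitted for brevity):

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  strategy:
    blueGreen:
      activeService: example-active      # hypothetical Service names
      previewService: example-preview
      scaleDownDelaySeconds: 30          # give nodes time to pick up the Service selector change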

Ensure retry on both Error and TransientError

Project: Argo Workflows

Best practice: retryStrategy is an optional field of the Workflow CRD that provides controls for retrying a workflow step. One of the fields of retryStrategy is retryPolicy, which defines the policy of NodePhase statuses to be retried (NodePhase is the condition of a node at the current time). The options for retryPolicy are Always, OnError, or OnTransientError. In addition, the user can use an expression to exercise finer control over retries.

What's the catch?

  • retryPolicy=Always is too much: You typically want to retry on system-level errors (for instance, the node dying or being preempted), but not on errors occurring in user-level code, since those failures indicate a bug. In addition, this option is more suitable for long-running containers than for workflows, which are jobs.
  • retryPolicy=OnError doesn't handle preemptions: Using retryPolicy=OnError handles some system-level errors like the node disappearing or the pod being deleted. However, during graceful Pod termination, the kubelet assigns a Failed status and a Shutdown reason to the terminated Pods. As a result, node preemptions result in node status Failure instead of Error, so preemptions aren't retried.
  • retryPolicy=OnError doesn't handle transient errors: Classifying a preemption failure message as a transient error is allowed, but this requires retryPolicy=OnTransientError (see also TRANSIENT_ERROR_PATTERN).

I recommend setting retryPolicy: "Always" and using the following expression:

lastRetry.status == "Error" or (lastRetry.status == "Failed" and asInt(lastRetry.exitCode) not in [0])

Ensure progressDeadlineAbort set to true

Project: Argo Rollouts

Best practice: A user can set progressDeadlineSeconds, which states the maximum time in seconds in which a rollout must make progress during an update before it is considered to be failed.

If rollout pods get stuck in an error state (for example, an image pull backoff), the rollout degrades after the progress deadline is exceeded, but the bad ReplicaSet or pods aren't scaled down. The pods keep retrying, and eventually the rollout message reads ProgressDeadlineExceeded: The replicaset has timed out progressing. To abort the rollout, set both progressDeadlineSeconds and progressDeadlineAbort, with progressDeadlineAbort: true.
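
A trimmed Rollout sketch showing both fields together (the deadline value is a hypothetical choice; replicas, selector, template, and strategy are omitted for brevity):

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  progressDeadlineSeconds: 600   # consider the update failed after 10 minutes without progress
  progressDeadlineAbort: true    # abort and scale down the failed ReplicaSet instead of only degrading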

Ensure custom resources match the namespace of the ArgoCD instance

Project: Argo CD

Best practice: In each repository, all Application and AppProject manifests should match the same metadata.namespace. If you deployed Argo CD using the typical deployment, Argo CD creates two ClusterRoles and ClusterRoleBindings that reference the argocd namespace by default. In this case, it's recommended not only to ensure that all Argo CD resources match the namespace of the Argo CD instance, but also to use the argocd namespace. Otherwise, you need to make sure to update the namespace reference in all Argo CD internal resources.

However, if you deployed Argo CD for external clusters (in namespace isolation mode), then instead of ClusterRoles and ClusterRoleBindings, Argo creates Roles and associated RoleBindings in the namespace where Argo CD was deployed. The created service account is granted a limited level of access, so for Argo CD to function as desired, access to the namespace must be granted explicitly. In this case, you should make sure all resources, including Application and AppProject, use the correct namespace of the Argo CD instance.
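
A minimal Application sketch (the application name, repository URL, and target namespace are hypothetical):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app            # hypothetical application name
  namespace: argocd            # matches the namespace of the Argo CD instance
spec:
  project: default
  source:
    repoURL: https://example.com/example/repo.git   # hypothetical repository
    targetRevision: HEAD
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: example-target  # hypothetical namespace where the app's resources are deployed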

Now What?

I'm a GitOps believer, and I believe that every Kubernetes resource should be handled exactly the same as your source code, especially if you are using Helm or Kustomize. So, the way I see it, you should automatically check your resources on every code change.

You can write your policies using languages like Rego or JSONSchema and use tools like OPA Conftest or other validators to scan and validate your resources on every change. Additionally, if you have one GitOps repository, then Argo plays a great role in providing a centralized repository for you to develop and version control your policies.

[ Download the eBook: Getting GitOps: A practical platform with OpenShift, Argo CD, and Tekton ]

How Datree works

The Datree CLI runs automatic checks on every resource that exists in a given path. After the check is complete, Datree displays a detailed output of any violation or misconfiguration it finds, with guidelines on how to fix it:

Scan your cluster with Datree

$ kubectl datree test -- -n argocd

You can use the Datree kubectl plugin to validate your resources after deployments, get ready for future version upgrades and monitor the overall compliance of your cluster.

Scan your manifests in the CI

In general, Datree can be used in the CI, as a local testing library, or even as a pre-commit hook. To use datree, you first need to install the command on your machine, and then execute it with the following command:

$ datree test ~/.datree/k8s-demo.yaml
>> File: .datree/k8s-demo.yaml
[V] YAML Validation
[V] Kubernetes schema validation
[X] Policy check

X Ensure each container image has a pinned (tag) version [1 occurrence]
- metadata.name: rss-site (kind: Deployment)
!! Incorrect value for key 'image' - specify an image version
X Ensure each container has a configured memory limit [1 occurrence]
- metadata.name: rss-site (kind: Deployment)
!! Missing property object 'limits.memory' - value should be within the accepter

X Ensure workload has valid Label values [1 occurrence]
- metadata.name: rss-site (kind: Deployment)
!! Incorrect value for key(s) under 'labels' - the vales syntax is not valid

X Ensure each container has a configured liveness probe [1 occurrence]
- metadata.name: rss-site (kind: Deployment)
!! Missing property object 'livenessProbe' - add a properly configured livenessP:

[...]

As I mentioned above, the CLI runs automatic checks on every resource that exists in the given path. Each automatic check includes three steps:

  1. YAML validation: Verifies that the file is a valid YAML file.
  2. Kubernetes schema validation: Verifies that the file is a valid Kubernetes/Argo resource.
  3. Policy check: Verifies that the file is compliant with your Kubernetes policy (Datree built-in rules by default).
Summary

In my opinion, governing policies are only the beginning of achieving reliability, security, and stability for your Kubernetes cluster. I was surprised to find that centralized policy management might also be a key solution for resolving the DevOps and Development deadlock once and for all.

Check out the Datree open source project. I highly encourage you to review the code and submit a PR, and don't hesitate to reach out.

This article originally appeared on the Datree blog and has been republished with permission. 

I'll show you how to validate custom resources against these Argo best practices for DevOps engineers.

Kubernetes DevOps This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Parsing data with strtok in C

Sat, 04/30/2022 - 15:00
Parsing data with strtok in C Jim Hall Sat, 04/30/2022 - 03:00

Some programs can just process an entire file at once, and other programs need to examine the file line-by-line. In the latter case, you likely need to parse data in each line. Fortunately, the C programming language has a standard C library function to do just that.

The strtok function breaks up a line of data according to "delimiters" that divide each field. It provides a streamlined way to parse data from an input string.

Reading the first token

Suppose your program needs to read a data file, where each line is separated into different fields with a semicolon. For example, one line from the data file might look like this:

102*103;K1.2;K0.5

In this example, store that in a string variable. You might have read this string into memory using any number of methods. Here's the line of code:

char string[] = "102*103;K1.2;K0.5";

Once you have the line in a string, you can use strtok to pull out "tokens." Each token is part of the string, up to the next delimiter. The basic call to strtok looks like this:

#include <string.h>
char *strtok(char *string, const char *delim);

The first call to strtok reads the string, adds a null (\0) character at the first delimiter, then returns a pointer to the first token. If the string is already empty, strtok returns NULL.

#include <stdio.h>
#include <string.h>

int
main()
{
  char string[] = "102*103;K1.2;K0.5";
  char *token;

  token = strtok(string, ";");

  if (token == NULL) {
    puts("empty string!");
    return 1;
  }

  puts(token);

  return 0;
}

This sample program pulls off the first token in the string, prints it, and exits. If you compile this program and run it, you should see this output:

102*103

102*103 is the first part of the input string, up to the first semicolon. That's the first token in the string.

Note that calling strtok modifies the string you are examining. If you want the original string preserved, make a copy before using strtok.

Reading the rest of the string as tokens

Separating the rest of the string into tokens requires calling strtok multiple times until all tokens are read. After parsing the first token with strtok, any further calls to strtok must use NULL in place of the string variable. The NULL allows strtok to use an internal pointer to the next position in the string.

Modify the sample program to read the rest of the string as tokens. Use a while loop to call strtok multiple times until you get NULL.

#include <stdio.h>
#include <string.h>

int
main()
{
  char string[] = "102*103;K1.2;K0.5";
  char *token;

  token = strtok(string, ";");

  if (token == NULL) {
    puts("empty string!");
    return 1;
  }

  while (token) {
    /* print the token */
    puts(token);

    /* parse the same string again */
    token = strtok(NULL, ";");
  }

  return 0;
}

By adding the while loop, you can parse the rest of the string, one token at a time. If you compile and run this sample program, you should see each token printed on a separate line, like this:

102*103
K1.2
K0.5

Multiple delimiters in the input string

Using strtok provides a quick and easy way to break up a string into just the parts you're looking for. You can use strtok to parse all kinds of data, from plain text files to complex data. However, be careful that multiple delimiters next to each other are the same as one delimiter.

For example, if you were reading CSV data (comma-separated values, such as data from a spreadsheet), you might expect a list of four numbers to look like this:

1,2,3,4

But if the third "column" in the data was empty, the CSV might instead look like this:

1,2,,4

This is where you need to be careful with strtok. With strtok, multiple delimiters next to each other are the same as a single delimiter. You can see this by modifying the sample program to call strtok with a comma delimiter:

#include <stdio.h>
#include <string.h>

int
main()
{
  char string[] = "1,2,,4";
  char *token;

  token = strtok(string, ",");

  if (token == NULL) {
    puts("empty string!");
    return 1;
  }

  while (token) {
    puts(token);
    token = strtok(NULL, ",");
  }

  return 0;
}

If you compile and run this new program, you'll see strtok interprets the ,, as a single comma and parses the data as three numbers:

1
2
4

Knowing this limitation in strtok can save you hours of debugging.

Using multiple delimiters in strtok

You might wonder why the strtok function uses a string for the delimiter instead of a single character. That's because strtok can look for different delimiters in the string. For example, a string of text might have spaces and tabs between each word. In this case, you would use each of those "whitespace" characters as delimiters:

#include <stdio.h>
#include <string.h>

int
main()
{
  char string[] = "  hello \t world";
  char *token;

  token = strtok(string, " \t");

  if (token == NULL) {
    puts("empty string");
    return 1;
  }

  while (token) {
    puts(token);
    token = strtok(NULL, " \t");
  }

  return 0;
}

Each call to strtok uses both a space and tab character as the delimiter string, allowing strtok to parse the line correctly into two tokens.

Wrap up

The strtok function is a handy way to read and interpret data from strings. Use it in your next project to simplify how you read data into your program.

Image by:

kris krüg

Programming This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
