
An open source developer's guide to systems programming

By Alex Bunardzic | Fri, 04/29/2022

Programming is an activity that helps implement a model. What is a model? Typically, programmers model real-world situations, such as online shopping.

When you go shopping in the real world, you enter a store and start browsing. When you find items you'd like to purchase, you place them into the shopping cart. Once your shopping is done, you go to the checkout, the cashier tallies up all the items, and presents you with the total. You then pay and leave the store with your newly purchased items.

Thanks to the advancements in technology, you can now accomplish the same shopping activities without traveling to a physical store. You achieve that convenience by having a team of software creators model actual shopping activities and then simulate those activities using software programs.

Such programs run on information technology systems composed of networks and other computing infrastructure. The challenge is to make a reliable system in the presence of failures.

Why failures?

The only way to offer virtual capabilities such as online shopping is to implement the model on a network (i.e., the internet). One problem with networks is that they are inherently unreliable. Whenever you plan to implement a network app, you must consider the following pervasive problems:

  • The network is not reliable.
  • The latency on the network is not zero.
  • The bandwidth on the network is not infinite.
  • The network is not secure.
  • Network topology tends to change.
  • The transport cost on the network is not zero.
  • The network is not homogenous.
  • "Works on my machine" is not a proof that the app is actually functional.

As can be seen from the above list, there are many reasons to expect failures when planning to launch an app or service.

What is a system?

You depend on a system to support the app. So, what is a system?

A system is something that stands together, meaning it's a composition of programs that offer services to other programs. Such a design is loosely coupled. It is distributed and decentralized (i.e., it does not have global supervision/management).

What is a reliable system?

Consider the attributes that make up a reliable system:

  • A reliable system is a system that is always up and running. Such a system is capable of graceful degradation, meaning that when performance starts to degrade, the system will not suddenly stop working.
  • A reliable system is not only always up and running, but it is also capable of progressive enhancement. As the demand for the system's capabilities increases, a reliable system scales to meet the needs.
  • A reliable system is also easily maintainable without expensive changes.
  • A reliable system is low-risk. It is safe and simple to deploy changes to such a system, either by rolling back or forward.
Everything built eventually exceeds the ability to understand it

Every successful system was created from a much simpler design. As systems are enhanced and embellished, they eventually reach a point where their complexity cannot be easily understood.

Consider a system that consists of many moving parts. As the number of moving parts in the system increases, the degree of interdependence between those moving parts also increases (Figure 1).

(Figure 1 by Alex Bunardzic, CC BY-SA 4.0)

It is only during the early stages of the growth of that system that people can perform a formal analysis of the system. After a certain point of system complexity, humans can only reason about the system by applying statistical analysis.

There is a gap between formal analysis and statistical analysis (Figure 2).

(Figure 2 by Alex Bunardzic, CC BY-SA 4.0)

How to program a system?

Developers know how to write useful apps, but they must also know how to program a system that enables the app to function on the network.

It turns out that there doesn't seem to be a system programming language available. While developers may know many programming languages (e.g., C, Java, C++, C#, Python, Ruby, JavaScript, etc.), all those languages specialize in modeling and emulating the functioning of an app. But how does one model system functionality?

Look at how the system is assembled. Basically, in a system, programs talk to each other. How do they do that?

They communicate over a network. Since there cannot be a system without two or more programs talking to each other, it is clear that the only way to program a system is to program a network.

Before looking more closely at how to program a network, I will examine the main problem with networks—failure.

Failures are at the system level

How do failures occur in a system? One way is when one or more programs suddenly becomes unavailable.

That failure has nothing to do with programming errors. Actually, programming errors are not really errors—they are bugs to be squashed!

A network is basically a chain, and as everyone knows, a chain is only as strong as its weakest link.

(Image: Alex Bunardzic, CC BY-SA 4.0)

When a link breaks (i.e., when one of the programs becomes unavailable), it is critical to prevent that outage from bringing the entire system down.

How do administrators do that? They provide an abstraction boundary that stops the propagation of errors. I will now examine ways to provide such an abstraction boundary inside the system. Doing that amounts to programming a system.

Best practices in system programming

It is very important to design programs and services to meet the needs of the machines. It is a common mistake to create programs and services to serve human needs. But when doing systems programming, such an approach is incorrect.

There is a fundamental difference between designing services for machines versus humans. Machines do not need operational interfaces. However, humans cannot consume services without a functional interface.

What machines need is programming interfaces. Therefore, when doing systems programming, focus entirely on the application programming interfaces (APIs). It will be easy to bolt operational interfaces on top of the already implemented programming interfaces, so do not rush into creating operational interfaces first.

It is also important to build only simple services. This may seem unreasonable at first, but once you understand that simple services are easily composable into more complex ones, it makes more sense.

Why are simple services so essential when doing systems programming? By focusing on simple services, developers minimize the risk of slipping into premature abstraction. It becomes impossible to over-abstract such simple services. The result is a system component that is easy to make, reason about, deploy, fix, and replace.

Developers must avoid the temptation to turn the service into a monolith. Abstain from doing that by refusing to add functionality and features. Furthermore, resist turning the service into a stack. When other users (programs) decide to use the services the component offers, they should be free to choose commodities suitable for consuming those services.

Let the service users decide which datastore to use, which queue, etc. Programmers must never dictate a custom stack to clients.

Services must be fully isolated and independent. In other words, services must remain autonomous.

Value of values

What is a value in the context of programming? The following attributes characterize a value:

  • No identity
  • Ephemeral
  • Nameless
  • On the wire

Consider an example value of a service that returns the total monthly service charge. Suppose a customer receives $425.00 as a monthly service charge. What are the characteristics of the value $425.00?

  • It has no identity; the amount itself is all that matters.
  • It has no name; it is just four hundred twenty-five dollars, with no need for a separate name.
  • It is ephemeral; as time progresses, the monthly charge keeps changing.
  • It is always sent on the wire and received by the client.

The ephemeral nature of values implies flow.
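
To make those characteristics concrete, here is a minimal sketch in Python (my own illustration, not from the original article): a value is just immutable data that is serialized and sent over the wire, with no identity or name of its own.

import json
from dataclasses import dataclass, asdict

# A value: immutable data with no identity of its own. Two MonthlyCharge
# instances with the same fields are interchangeable.
@dataclass(frozen=True)
class MonthlyCharge:
    amount: str    # "425.00" kept as a string to avoid float rounding
    currency: str

charge = MonthlyCharge(amount="425.00", currency="USD")

# "On the wire": the value travels as plain bytes, not as a reference
# to an object living in one particular place.
wire_bytes = json.dumps(asdict(charge)).encode("utf-8")

# The receiver reconstructs an equal value from the bytes alone.
received = MonthlyCharge(**json.loads(wire_bytes))
assert received == charge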

Systems are not place-oriented

A place-oriented product could be depicted as a ship being built in a shipyard.

(Image: Alex Bunardzic, CC BY-SA 4.0)

Systems are flow-oriented

(Image: Alex Bunardzic, CC BY-SA 4.0)

For example, cars are built on a moving assembly line.

How do values flow in the system?

Values undergo transformations and are moved, routed, and recorded.

  • Transform
  • Move
  • Route
  • Record
  • Keep the above activities segregated

How do values move in the system?

  • Source => destination
  • In a naive design, the mover (producer) depends on the destination's identity and availability
  • Must decouple producers from consumers
  • Must remove dependency on identity
  • Must remove dependency on availability
  • Use queues (publish/subscribe)

It is essential to avoid dependencies for values to flow effectively through the system. Brittle designs include processes that count on a certain service being found by its identity or requiring a certain service to be available. The only robust design that allows values to flow through the system is using queues to decouple dependencies. It is recommended to use the publish/subscribe queuing model.
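
Here is a minimal in-process sketch of that decoupling in Python, using only the standard library (the Broker class and the topic name are illustrative, not a real product API): the producer publishes a value to a topic and never learns the identity, or even the availability, of its consumers.

import queue
import threading

class Broker:
    """A toy publish/subscribe broker: producers publish to a topic and
    consumers receive values through their own queues."""
    def __init__(self):
        self._topics = {}              # topic name -> list of subscriber queues
        self._lock = threading.Lock()

    def subscribe(self, topic):
        q = queue.Queue()
        with self._lock:
            self._topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, value):
        with self._lock:
            subscribers = list(self._topics.get(topic, []))
        for q in subscribers:          # the producer never sees the consumers
            q.put(value)

broker = Broker()
charges = broker.subscribe("monthly-charge")

def consumer():
    value = charges.get()              # blocks until a value flows in
    print("billing service received:", value)

t = threading.Thread(target=consumer)
t.start()
broker.publish("monthly-charge", {"amount": "425.00", "currency": "USD"})
t.join()

In a real system the queue would be a durable, networked broker rather than an in-memory object, but the shape of the design is the same: values flow through topics, and producers and consumers stay decoupled.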

Design services primarily for machines

Avoid designing services to be consumed by humans. Machines should never be expected to access services via operational interfaces. Build human operational interfaces only after you've built a machine-centric service.

Strive to build only simple services. Simple services are easily composable. When designing simple services, there is no danger of premature abstraction.

It is not possible to over-abstract a simple service.

Avoid turning a service into a monolith

Abstain from adding functionality and features (keep it super simple). Avoid at all costs turning a service into a stack. Allow service users to choose which commodities to use when consuming them. Let them decide which datastore to use, which queue, etc. Don't dictate your custom stack to clients.

System failure model is the only failure model

Next, acknowledge that system failures are guaranteed to happen! It is not a question of if, but of when and how often.

When do exceptions occur? Any time a runtime system doesn't know what to do, the result is an exception and a system failure.

Those failures are different from programming errors. The errors occur when a team makes mistakes while implementing the processing logic (developers call those errors "bugs").

Whenever a system fails, notice that the failure is partial and uncoordinated. It is improbable that the entire system would fail at once; such an event is almost impossible.

Minimum requirements for reliable systems

At a minimum, a reliable system must possess the following capabilities:

  • Concurrency
  • Fault encapsulation
  • Fault detection
  • Fault identification
  • Hot code upgrade
  • Stable storage
  • Asynchronous message passing

I'll examine those attributes one by one.

Concurrency

For the system to be capable of handling two or more processes concurrently, it must be non-imperative. The system must never block processing or press the "pause" button on a process. Furthermore, the system must never depend on a shared mutable state.

In a concurrent system, everything is a process. Therefore, it is paramount that a reliable system must have a lightweight mechanism for creating parallel processes. It also must be capable of efficient context switching between processes and message passing.

Any process in a concurrent system must rely on fault detection primitives to be able to observe another process.
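
Runtimes built for this style provide lightweight processes and message passing as primitives. The sketch below only approximates the idea in Python, using operating-system processes, which are heavier but share no mutable state; the doubling "service" is a made-up example.

import multiprocessing as mp

def worker(inbox, outbox):
    """A 'process' in the article's sense: no shared state,
    only messages in and messages out."""
    for item in iter(inbox.get, None):      # None is the stop signal
        outbox.put(("ok", item * 2))

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    p = mp.Process(target=worker, args=(inbox, outbox))
    p.start()

    inbox.put(21)
    print(outbox.get())                     # ('ok', 42)

    inbox.put(None)                         # ask the worker to finish
    p.join()
    # A basic fault-detection primitive: the parent observes how the
    # worker ended instead of sharing its memory.
    print("worker exit code:", p.exitcode)  # 0 on a clean exit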

Fault encapsulation

Faults that occur in one process must not be able to damage/impair other processes in the system.

"The process achieves fault containment by sharing no state with other processes; its only contact with other processes is via messages carried by a kernel message system." - Jim Gray

Here is another useful quote from Jim Gray:

"As with hardware, the key to software fault-tolerance is to hierarchically decompose large systems into modules, each module being a unit of service and a unit of failure. A failure of a module does not propagate beyond the module."

To achieve fault tolerance, it is necessary to only write code that handles the normal case.

In case of a failure, the only recommended course of action is to let it crash! It is not a good practice to fix the failure and continue. A different process should handle any error (the escalation error handling model).

It is crucial to constantly ensure clean separation between error recovery code and normal case code. Doing so greatly simplifies the overall system design and system architecture.
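
A sketch of that escalation model in Python (a toy supervisor, assuming each worker runs as a separate operating-system process): the worker contains only normal-case code, and the error handling lives entirely in the supervising process.

import multiprocessing as mp

def worker(numerator, denominator):
    """Normal-case code only: no defensive try/except here."""
    print("result:", numerator / denominator)   # a bad task crashes the process

def supervise(tasks):
    """Error recovery lives here, cleanly separated from the normal case."""
    for task in tasks:
        p = mp.Process(target=worker, args=task)
        p.start()
        p.join()                                 # returns when the worker exits or crashes
        if p.exitcode != 0:
            # Let it crash: note the failure and move on instead of
            # patching things up inside the worker.
            print(f"task {task} failed (exit code {p.exitcode}), continuing")

if __name__ == "__main__":
    supervise([(10, 2), (1, 0), (6, 3)])         # the second task crashes its worker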

Fault detection

A programming language must be able to detect exceptions both locally (in the process where the exception occurred) and remotely (seeing that an exception occurred in a non-local process).

A component is considered faulty once its behavior is no longer consistent with its specification. Error detection is an essential component of fault tolerance.

Try to keep tasks simple to increase the likelihood of success.

In the face of failure, administrators become more interested in protecting the system against damage than offering full service. The goal is to provide an acceptable level of service and become less ambitious when things start to fail.

Try to perform a task. If you cannot perform a task, try to perform a simpler task.
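
In code, "try a simpler task" can be as plain as a fallback path (a hedged sketch; the service names are invented for illustration): each fallback offers less, but the system keeps answering.

def personalized_recommendations(user_id):
    # Simulate the full-service path being unavailable.
    raise TimeoutError("recommendation service unavailable")

def bestsellers():
    return ["book A", "book B"]                  # simpler task: the same list for everyone

def recommendations(user_id):
    # Try the full task; if it cannot be performed, perform a simpler one
    # rather than failing the whole request.
    try:
        return personalized_recommendations(user_id)
    except Exception:
        return bestsellers()

print(recommendations(user_id=42))               # ['book A', 'book B']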

Fault identification

You should be able to identify why an exception occurred.

Hot code upgrade

The ability to change code as it is executing and without stopping the system.

Stable storage

Developers need a stable error log that will survive a crash. Store data in a manner that survives a system crash.

Asynchronous message passing

Asynchronous message passing should be the default choice for inter-service communication.
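
A sketch of what that looks like with asynchronous tasks in Python (asyncio here stands in for whatever messaging substrate a real system would use): the sender puts a message on a queue and moves on instead of blocking on a reply.

import asyncio

async def billing_service(inbox):
    """Consumes messages whenever they arrive; the sender never waits on it."""
    while True:
        message = await inbox.get()
        if message is None:          # stop signal
            return
        print("billing received:", message)

async def main():
    inbox = asyncio.Queue()
    consumer = asyncio.create_task(billing_service(inbox))

    # The producer sends and moves on; it does not block on a reply.
    await inbox.put({"amount": "425.00", "currency": "USD"})
    await inbox.put(None)
    await consumer

asyncio.run(main())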

Well-behaved programs

A system should be composed of well-behaved programs. Such programs should be isomorphic to the specification. If the specification says something silly, then the program must faithfully reproduce any errors in the specification. If the specification doesn't say what to do, raise an exception!

Avoid guesswork—this is not the time to be creative.

"It is essential for security to be able to isolate mistrusting programs from one another, and to protect the host platform from such programs. Isolation is difficult in object-oriented systems because objects can easily become aliased (i.e., at least two or more objects hold a reference to an object)" -Ciaran Bryce

Tasks cannot directly share objects. The only clean way for tasks to communicate is to use a standard copying communication mechanism.

Wrap up

Applications run on systems, and understanding how to properly program systems is a critical skill for developers. Systems bring reliability and complexity concerns that are best managed using a set of best practices. Some of these include:

  • Processes are units of fault encapsulation.
  • Strong isolation leads to autonomy.
  • Processes do what they are supposed to do or fail as soon as possible (fail fast).
  • Allowing components to crash and then restart leads to a simpler fault model and more reliable code. Failure, and the reason for failure, must be detectable by remote processes.
  • Processes share no state, but communicate by message passing.


5 tips to avoid these common agile mistakes

By Kelsea Zhang | Fri, 04/29/2022

Agile is a tried and true discipline used by software development teams worldwide with great success. In my previous article, I listed mistakes I've made in the past so you don't have to make them yourself.

My teams and I have used agile since I started in tech. It hasn't always been easy, and there's been a lot of learning along the way. Ideally, you never really stop learning, so here are five more agile mistakes you can learn from right now.

1. Mistake: It's up to employees to improve skills on their own time

Many IT companies do not pay attention to training employees. Many companies say that they have worked hard to train employees, only to have those employees poached by competitors. Should this happen, a company should reflect: Why can't I keep excellent talent?

Companies call employees their greatest asset, but many enterprises treat them as components that can be replaced at any time. Software development is a design process full of uncertainty and variability; it is not like moving bricks. Developers are not producers but designers, and the cost of replacing a designer is relatively high.

Solution: Invest as much time training talent as you invest in software development. Keeping your workforce trained builds morale, reduces turnover, and might even prevent poaching.

2. Mistake: Agile is just a tool

Some companies simply define agile transformation as the use of platforms: as if using Kanban and holding stand-up meetings were agile, using CI/CD tools were DevOps, and buying an automated testing platform were automated testing.

For me, the most important part of agile is individuals and interactions over processes and tools. Processes and tools help you get things done faster and produce better results, but individuals and interactions are paramount. Claiming to be agile just because you bought a bunch of tools accomplishes nothing.

Solution: Incorporate agile into individual interactions. Keep using the tools and processes, but keep the individual involved.

3. Mistake: Misusing code modules saves you from having to write more code

When a company grows to a certain size and has multiple product lines and business lines, it naturally finds that some businesses seem to be similar.

Sometimes, it appears that a problem with one product has already been solved in another, so it seems natural to "steal" code from one and retrofit it into the other.

This sounds appealing at first. But sometimes, each product is too nuanced for a one-size-fits-all code dump.

Trying to force code to fix two different problems, and to grow along with each product as development continues, can be problematic.

Solution: To ensure that you can successfully repurpose code, write modular code designed for flexibility.

4. Mistake: Maintain strict division of functions within your team

A team is divided into several functions: development, testing, back-end development, database, operation and maintenance, architecture design, security, and so on. The result is that there are multiple handovers in the process from requirements to delivery. As you may know, a handover leads to waiting. And waiting is a kind of waste.

To maintain team agility, you must reduce handovers. In addition, a strict division of functions leads to a serious imbalance in workload, which leads to bottlenecks.

Solution: Teams should have overlapping abilities to reduce time spent waiting on another group in the pipeline.

5. Mistake: If an employee doesn't look busy, they aren't producing results

Personnel in software development are often structured in a matrix or Cartesian grid configuration. The horizontal axis is the product or project, and the vertical axis is the functional team.

Many people are shared across different products and projects, which often means that an employee becomes over-saturated with work. It's hard to spot, because the employee is used in multiple products, so they have multiple to-do lists. Each list has priorities that are invisible to the other lists.

As a result, they must constantly rebalance priorities, which reduces efficiency. And because they have multiple projects to deal with, their lack of focused effort causes delays for other people. Personnel re-use, in other words, trades reduced business responsiveness for personnel cost efficiency.

Solution: If you want a streamlined polished product, do not stretch your workforce too thin.

Be agile

I've given you five tips on some mistakes to avoid. Don't worry, though, there are still plenty of mistakes to make! Take agile to your organization and don't be afraid of enduring a few mistakes for the benefit of making your teams better.

Invest time in training, incorporate agile into individual interactions, and write modular code designed for flexibility.


Create a blog post series with navigation in Jekyll

By Ayush Sharma | Thu, 04/28/2022

Blogging about individual self-contained ideas is great. However, some ideas require a more structured approach. Combining simple concepts into one big whole is a wonderful journey for both the writer and the reader, so I wanted to add a series feature to my Jekyll blog. As you may have guessed already, Jekyll's high degree of customization makes this a breeze.

Goal

I want to achieve the following goals:

  1. Each article should list the other articles in the same series.
  2. To simplify content discovery, the home page should display all series in a category.
  3. Moving articles into different series should be easy since they may evolve over time.
Step 1: Add series metadata to posts

Given Jekyll's high customizability, there are several ways to handle a series. I can leverage Jekyll variables in the config to keep a series list, use collections, or define a Liquid list somewhere in a global template and iterate over it.

The cleanest way is to list the series and the posts contained in that series. For example, for all the posts in the Jekyll series, I've added the following two variables in the post front matter:

is_series: true
series_title: "Jekyll"

The first variable, is_series, is a simple boolean which says whether this post is part of a series. Booleans work great with Liquid filters and allow me to filter only those posts which are part of a series. This comes in handy later on when I'm trying to list all the series in one go.

The second variable, series_title, is the title of this series. In this case, it is Jekyll. It's important that posts in the same series contain the same title. I'll use this title to match posts to a series. If it contains extra spaces or special characters, it won't match the series.

You can view the source code here.

Step 2: Add links to posts

With the series defined, I now need to show other articles in the series. If I see a post in the Jekyll series, there should be a list of other articles in the same series. A series won't make sense without this essential navigation.

My blog uses the posts layout to display posts. To show other posts in the same series as the currently viewed post, I use the code below:

{% if page.is_series == true %}
  <div class="text-success p-3 pb-0">{{ page.series_title | upcase }} series</div>
  {% assign posts = site.posts | where: "is_series", true | where: "series_title", page.series_title | sort: 'date' %}
  {% for post in posts %}
    {% if post.title == page.title %}
      <span class="nav-link bullet-pointer mb-0">{{ post.title }}</span>
    {% else %}
      <a class="nav-link bullet-hash" href="{{ post.url }}">{{ post.title }}</a>
    {% endif %}
  {% endfor %}
{% endif %}

The logic above is as follows:

  1. Check if the is_series boolean of the current page is true, meaning the post is part of a series.
  2. Fetch posts where is_series is true and series_title is the current series_title. Sort these in ascending date order.
  3. Display links to other posts in the series or show a non-clickable span if the list item is the current post.

I've stripped some HTML out for clarity, but you can view the complete source code here.

Step 3: Add links to each series to the home page

I now have the post pages showing links to other posts in the same series. Next, I want to add a navigation option to all series under a category on my home page.

For example, the Technology section should show all series in the Technology category on the home page. The same goes for the Life Stuff, Video Games, and META categories. This makes it easier for users to find and read a complete series.

{% comment %} This block runs inside an outer loop over categories (cat). {% endcomment %}
{% assign series = "" | split: "," %}
{% assign series_post = "" | split: "," %}
{% assign posts = site.posts | where: "Category", cat.title | where: "is_series", true | sort: 'date' %}

{% for post in posts %}
  {% unless series contains post.series_title %}
    {% assign series = series | push: post.series_title %}
    {% assign series_post = series_post | push: post %}
  {% endunless %}
{% endfor %}

{% if series.size > 0 %}
  <div class="row m-1 row-cols-1 row-cols-md-4 g-3 align-items-center">
    <div class="col">
      <div class="h3 text-success">Article series →</div>
    </div>
    {% for post in series_post %}
      {% include card-link.html url=post.url title=post.series_title %}
    {% endfor %}
  </div>
{% endif %}
{% endfor %}

To identify all series for a particular category, I use the code above, which accomplishes the following:

  1. Initializes two variables: one for series names and another for the first post of each series.
  2. Fetches all posts that have is_series set to true and belong to the current category.
  3. Adds the series_title to the series names array and the first post to the series post array.
  4. Displays the name of the series, which links to the first post in that series.

You can find the full source code here.

Why I love using Jekyll for blogging

Jekyll's high degree of customization is why I enjoy working with it so much. It's also why my blog's underlying Jekyll engine has survived redesigns and refactors. Jekyll makes it easy to add dynamic logic to your otherwise static website. And while my website remains static, the logic that renders it doesn't have to be.

You can make many improvements to what I've shown you today.

One improvement I'm thinking of is handling series post ordering. For example, the posts in a series are currently shown in ascending order of their publish date. I've published several posts belonging to a series at different times, so I can add a series_order key and use it to order articles by topic rather than by publish date. This is one of the many ways you can build your own series feature.

Happy coding :)

This article originally appeared on the author's blog and has been republished with permission.


Why use Apache Druid for your open source analytics database

By David Wang | Thu, 04/28/2022

Analytics isn't just for internal stakeholders anymore. If you're building an analytics application for customers, you're probably wondering what the right database backend is for you.

Your natural instinct might be to use what you know, like PostgreSQL or MySQL. You might even think to extend a data warehouse beyond its core BI dashboards and reports. Analytics for external users is an important feature, though, so you need the right tool for the job.

The key to answering this comes down to user experience. Here are some key technical considerations for users of your external analytics apps.

Avoid delays with Apache Druid

The waiting game of processing queries in a queue can be annoying. The root cause of delays comes down to the amount of data you're analyzing, the processing power of the database, and the number of users and API calls, along with the ability for the database to keep up with the application.

There are a few ways to build an interactive data experience with any generic Online Analytical Processing (OLAP) database when there's a lot of data, but they come at a cost. Pre-computing queries makes the architecture expensive and rigid. Aggregating the data first limits the insights available. Limiting the data analyzed to only recent events doesn't give your users the complete picture.

The "no compromise" answer is an optimized architecture and data format built for interactivity at scale, which is precisely what Apache Druid, a real-time database designed to power modern analytics applications, provides.

  • First, Druid has a unique distributed and elastic architecture that pre-fetches data from a shared data layer into a near-infinite cluster of data servers. This architecture enables faster performance than a decoupled query engine like a cloud data warehouse because there's no data to move, and more scalability than a scale-up database like PostgreSQL or MySQL.
  • Second, Druid employs automatic (sometimes called "automagic") multi-level indexing built right into the data format to drive more queries per core. This goes beyond the typical OLAP columnar format by adding a global index, data dictionary, and bitmap index, maximizing CPU cycles for faster crunching.
High Availability can't be a "nice to have"

If you and your dev team build a backend for internal reporting, does it really matter if it goes down for a few minutes or even longer? Not really. That's why there's always been tolerance for unplanned downtime and maintenance windows in classical OLAP databases and data warehouses.

But now your team is building an external analytics application for customers. They notice outages, and it can impact customer satisfaction, revenue, and definitely your weekend. It's why resiliency, both high availability and data durability, needs to be a top consideration in the database for external analytics applications.

Rethinking resiliency requires thinking about the design criteria. Can you protect from a node or a cluster-wide failure? How bad would it be to lose data, and what work is involved to protect your app and your data?

Servers fail. The default way to build resiliency is to replicate nodes and remember to make backups. But if you're building apps for customers, the sensitivity to data loss is much higher. The occasional backup is just not going to cut it.

The easiest answer is built right into Apache Druid's core architecture. Designed to withstand anything without losing data (even recent events), Apache Druid features a capable and simple approach to resiliency.

Druid implements High Availability (HA) and durability based on automatic, multi-level replication with shared data in object storage. It enables the HA properties you expect, and what you can think of as continuous backup to automatically protect and restore the latest state of the database even if you lose your entire cluster.

More users should be a good thing

The best applications have the most active users and the most engaging experience, and for those reasons architecting your back end for high concurrency is important. The last thing you want is frustrated customers because applications are getting hung up. Architecting for internal reporting is different because the concurrent user count is much smaller and finite. The reality is that the database you use for internal reporting probably just isn't the right fit for highly concurrent applications.

Architecting a database for high concurrency comes down to striking the right balance between CPU usage, scalability, and cost. The default answer for addressing concurrency is to throw more hardware at it. Logic says that if you increase the number of CPUs, you'll be able to run more queries. While true, this can also be a costly approach.

A better approach is to look at a database like Apache Druid with an optimized storage and query engine that drives down CPU usage. The operative word is "optimized." A database shouldn't read data that it doesn't have to. Use something that lets your infrastructure serve more queries in the same time span.

Saving money is a big reason why developers turn to Apache Druid for their external analytics applications. Apache Druid has a highly optimized data format that uses a combination of multi-level indexing, borrowed from the search engine world, along with data reduction algorithms to minimize the amount of processing required.

The net result is that Apache Druid delivers far more efficient processing than anything else out there. It can support from tens to thousands of queries per second at terabyte or even petabyte scale.
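
As a concrete illustration of pushing that work down to the database, here is a hedged sketch in Python against Druid's SQL-over-HTTP endpoint. It assumes a local Druid quickstart with the router listening on port 8888 and the sample "wikipedia" datasource loaded; adjust the URL and query for your own deployment.

import json
import urllib.request

# Assumption: a local Druid quickstart with the router on port 8888.
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

payload = {
    "query": """
        SELECT channel, COUNT(*) AS edits
        FROM wikipedia
        GROUP BY channel
        ORDER BY edits DESC
        LIMIT 5
    """
}

request = urllib.request.Request(
    DRUID_SQL_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    # By default Druid returns a JSON array of row objects.
    for row in json.loads(response.read()):
        print(row["channel"], row["edits"])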

Build what you need today but future-proof it

Your external analytics applications are critical for your users. It's important to build the right data architecture.

The last thing you want is to start with the wrong database, and then deal with the headaches as you scale. Thankfully, Apache Druid can start small and easily scale to support any app imaginable. Apache Druid has excellent documentation, and of course it's open source, so you can try it and get up to speed quickly.


How I grew my product management career with open source

By Shebuel Inyang | Wed, 04/27/2022

I'm a curious person, and I like to explore many fields in the technology industry, from visual design to programming to product management. I am also drawn to open source ideas. So I'm excited to share with you how I, as a product manager (PM), have used open source to build my career. I believe my experiences can help others who are interested in product management.

What is open source software?

In simple terms, open source software is software with source code that anyone can inspect, modify, enhance, and share. Opensource.com has published a detailed and comprehensive article to help you understand what open source is.

My discovery of open source started in the early phase of my career as a visual designer. I was curious to know what it meant and how to be a part of it, which led me to reach out to a few experienced open source contributors and advocates. Though I didn't contribute at the time, I gained knowledge of the community, which helped me when I decided to start contributing.

How to break into product management

It might seem that breaking into product management is difficult, that you must put your boxing gloves on, come out fighting and force your way in. And yet, I've heard from other product managers that it was actually easier to break into compared to writing or debugging blocks of code, or pushing pixels to generate complex wireframes for product design.

Our journeys and approaches are different, so it's safe to say that the road to becoming a product manager can often be long and unpredictable. With the increasing level of competition in the job market, getting a role as an entry-level product manager can be difficult. Recruiters often require 2 to 3 years of experience to join a product team. You might ask, "How am I expected to get the experience?"

Here's a quick look at the four strategies for directing your career toward product management:

  1. Internal transition at a large organization that might require your manager to advocate for you as a good fit to transition within the company. You must have proof that you have transferable skills. This is generally considered the quickest route to product management experience.

  2. Junior PM roles at large organizations. It's common to go through an organization to get an internship, or to join an associate product management program that employs a junior PM.

  3. You can also try to get into product management by joining a startup.

  4. You can start a side project of your own to break into product management.

Without hands-on experience, it's difficult to become a product manager. As open source product manager David Ryan stated, "Few people are taking advantage of what is possibly the most under-utilized path to practical product management experience."

What is this path?

Open source is the answer

An open source project needs more than just code to be successful. It needs a strategy for the project, user research, and a way of linking that strategy to daily work. These are all activities that a product manager should be actively involved in. But how much of the product management discipline is the responsibility of a first-time product manager?

Susana Videira Lopes stated in one of her articles that the "essence of getting an entry-level product role is to introduce you to the product management discipline in a way that builds up your confidence, while at the same time delivering value for the organization as early as possible."

How can an entry-level product manager get involved with an open source project, and deliver value?

Simple answer: Ask Questions

Here are some questions you can ask:

  • What problem or opportunity is being explored?

  • How is the solution being framed to tackle this problem?

  • What metrics are used to determine whether the project is successful?

  • Who are the people this solution serves?

  • How are they being informed about it?

  • How does the solution fit with both the immediate and wider ecosystem?

  • Where is the documentation being maintained on the project?

  • Do project maintainers understand accessibility requirements? Are they being met?

You've acquired skills as a product manager. Use them to help you express these thoughtful questions, and invite the team to consider them. The team can select the ones that resonate with the developers and the community, and prioritize what's most important.

These questions help you build user personas, a customer journey map, lean canvas, and more. This kind of experience goes a long way towards developing career potential.

My experience at OpenUnited

OpenUnited is a platform that connects digital talent and work in a unique way. We work with contributors to help them prove specific skills by working on high quality open source products. Once their work is verified, these talented contributors are eligible to work for companies on paid tasks.

OpenUnited is an open source platform that onboards contributors of all kinds—product managers, developers, designers, business analysts, and others. It helps them improve their skills and provides them with a long term source of high-quality paying work.

Farbod Saraf, a senior product manager at Miro, onboarded me on a platform he created with a partner. I joined the project and learned about contributing to OpenUnited. I also learned about other projects that could help me grow in my product management career, and made my first contribution. It was a good experience because I got to start working quickly on bits of the product, to improve the experience of other users on the platform. My mentor Farbod made it easier by making himself available to provide any needed help while I contributed to the project.

Everything you contribute to an open source project becomes a powerful public record of your development as a product manager. I strongly recommend the OpenUnited platform to anyone who wants to break into product management with open source.

How do you find open source projects?

Many people believe that contributing to open source is best left to developers because they find it difficult to discover open source projects they can comfortably contribute to.

As a first-time product manager, there are several ways to find open source projects to contribute to. Here's a list of some:

  • Speak up in product manager communities such as Mind The Product and Product School.

  • Go to local meetups and open source conferences like Open Source Community Africa Festival to connect with open source project creators and maintainers.

  • Engage with product managers working at larger open source companies such as GitLab or Mozilla. They may be able to refer you to open source projects where your skills and contribution could be beneficial.

  • Investigate open source advocates and DevRel teams at open source companies to get recommendations of open projects an entry-level product manager can contribute to.

  • Look to open source companies on AngelList or popular open source products on Product Hunt. These are great places to consider in your search for open products to contribute to.

What next?

Ruth Ikegah, a great source of inspiration for me, wrote an article for beginners in open source. In her article, she gave some tips to consider as you embark on contributing to open source.

Before joining and contributing, do some research on the project, community, or organization, and ask questions. When you finally decide to join the community, try to be active by introducing yourself and stating areas where you can help the project.

Of course, open source isn't just a stepping stone for your career. It's a platform in itself, and it needs great product managers. Get involved, contribute to the community, and help it help you hone your skills.

Gaining experience in open source helped me create a successful career path in product management.


A practical guide to light and dark mode in Jekyll

By Ayush Sharma | Wed, 04/27/2022

Adding a light and dark mode to my side project www.fediverse.to was a fun journey. I especially loved how intuitive the entire process was. The prefers-color-scheme CSS media feature reports the user's preferred color scheme, light or dark. I then define Sass or CSS styles for both modes, and the browser applies the style the user wants. That's it! The seamless flow from operating system to browser to website is a huge win for users and developers.


After tinkering with www.fediverse.to I decided to add light and dark modes to this website as well. I began with some internet research on how to best approach this. This GitHub thread shows the current progress of the feature. And this in-depth POC demonstrates how challenging the process can be.

The challenge

The biggest challenge is that sometimes SASS and CSS don't play well with each other.

Let me explain.

From my earlier post on light and dark themes, to create both styles I needed to define CSS like this:

/* Light mode */
:root {
   --body-bg: #FFFFFF;
   --body-color: #000000;
}

/* Dark mode */
@media (prefers-color-scheme: dark) {
   :root {
       --body-bg: #000000;
       --body-color: #FFFFFF;
   }
}

This is simple enough. With the styles defined, I use var(--body-bg) and var(--body-color) in my CSS. The colors then switch based on the value of prefers-color-scheme.

Bootstrap 5 uses Sass to define color values. My website's color scheme in _variables.scss looks like this:

// User-defined colors
$my-link-color: #FFCCBB !default;
$my-text-color: #E2E8E4 !default;
$my-bg-color: #303C6C;

The solution seems obvious now, right? I can combine prefers-color-scheme with the variables above, and boom!

// User-defined colors
:root {
 --my-link-color: #FFCCBB;
 --my-text-color: #E2E8E4;
 --my-bg-color: #303C6C;
}

/* Dark mode */
@media (prefers-color-scheme: dark) {
 :root {
 --my-link-color: #FF0000;
 --my-text-color: #FFFFFF;
 --my-bg-color: #000000;
  }
}

Additionally, I need to replace the $ values with their -- variants in _variables.scss. After making the change and running jekyll build, I get the following:

Conversion error: Jekyll::Converters::Scss encountered an error while converting 'css/main.scss':
                    Error: argument `$color2` of `mix($color1, $color2, $weight: 50%)` must be a color on line 161:11 of _sass/_functions.scss, in function `mix` from line 161:11 of _sass/_functions.scss, in function `shade-color` from line 166:27 of _sass/_functions.scss, in function `if` from line 166:11 of _sass/_functions.scss, in function `shift-color` from line 309:43 of _sass/_variables.scss from line 11:9 of _sass/bootstrap.scss from line 1:9 of stdin >> @return mix(black, $color, $weight); ----------^
             Error: Error: argument `$color2` of `mix($color1, $color2, $weight: 50%)` must be a color on line 161:11 of _sass/_functions.scss, in function `mix` from line 161:11 of _sass/_functions.scss, in function `shade-color` from line 166:27 of _sass/_functions.scss, in function `if` from line 166:11 of _sass/_functions.scss, in function `shift-color` from line 309:43 of _sass/_variables.scss from line 11:9 of _sass/bootstrap.scss from line 1:9 of stdin >> @return mix(black, $color, $weight); ----------^
             Error: Run jekyll build --trace for more information.

The error means that the Bootstrap mixins expect color values to be, well, color values, and not CSS variables. From here, I could dig down into the Bootstrap code and rewrite the mixin, but I would have to rewrite most of Bootstrap to get this to work. This page describes most of the options available at this point. I was able to make do with a simpler approach.

Since I don't use the entire suite of Bootstrap features, I was able to add light and dark mode with a combination of prefers-color-scheme, some CSS overrides, and a little bit of code duplication.

Step 1: Separate presentation from structure

Before applying the new styles to handle light and dark modes, I performed some clean-up on the HTML and CSS.

The first step is ensuring that all the presentation layer stuff is in the CSS and not the HTML. The presentation layer (CSS) should always stay separate from the page structure (HTML). But a website's source code can get messy with time. You can skip this step if your color classes are already separated into CSS.

I found my HTML code peppered with Bootstrap color classes. Certain div and footer tags used text-light, text-dark, bg-light, and bg-dark within the HTML. Since handling the light and dark theme relies on CSS, the color classes had to go. So I moved them all from the HTML into my custom Sass file.

I left the contextual color classes (bg-primary, bg-warning, text-muted, etc.) as-is. The colors I've picked for my light and dark themes would not interfere with them. Make sure your theme colors work well with contextual colors. Otherwise, you should move them into the CSS as well.

So far, I've written 100+ articles on this site. So I had to scan all my posts under the _posts/ directory hunting down color classes. Like the step above, make sure to move all color classes into the CSS. Don't forget to check the Jekyll collections and pages as well.

Step 2: Consolidate styles wherever possible

Consolidating and reusing styling elements ensures you have less to worry about. My Projects and Featured Writing sections on the home page displayed card-like layouts. These were using custom CSS styling of their own. I restyled them to match the article links and now I have less to worry about.

There were several other elements using styles of their own. Instead of restyling them, I chose to remove them.

The footer, for example, used its own background color. This would have required two different colors for light and dark themes. I chose to remove the background from the footer to simplify the migration. The footer now takes the color of the background.

If your website uses too many styles, it might be prudent to remove them for the migration. After the move to light/dark themes is complete, you can add them back.

The goal is to keep the migration simple and add new styles later if required.

Step 3: Add the light and dark color schemes

With the clean-up complete, I can now focus on adding the styling elements for light and dark themes. I define the new color styles and apply them to the HTML elements. I chose to start with the following:

  1. --body-bg for the background color.
  2. --body-color for the main body/text color.
  3. --body-link-color for the links.
  4. --card-bg for the Bootstrap Card background colors.
:root {
 --body-bg: #EEE2DC;
 --body-color: #AC3B61;
 --body-link-color: #AC3B61;
 --card-bg: #EDC7B7;
}

/* Dark mode */
@media (prefers-color-scheme: dark) {
 :root {
 --body-bg: #303C6C;
 --body-color: #E2E8E4;
 --body-link-color: #FFCCBB;
 --card-bg: #212529;
  }
}

With the colors defined, I changed the CSS to use the new colors. For example, the body element now looks like this:

body {
 background-color: var(--body-bg);
 color: var(--body-color) !important;
}

You can view the rest of the CSS changes on GitLab.

You can override Bootstrap 5 defaults if it's compiled with your Jekyll source and not from the CDN. This might make sense to simplify the custom styling you need to handle. For example, turning off link decoration made life a little easier for me.

$link-hover-decoration: none !default;

Step 4: The navbar toggler

Last but not least: The navbar toggler. In Bootstrap 5, navbar-light and navbar-dark control the color of the toggler. These are defined in the main nav element and .navbar. Since I am not hard-coding color classes in the HTML anymore, I need to duplicate the CSS. I extended the default Sass and added my theme colors.

.navbar-toggler {
  @extend .navbar-toggler;
  color: var(--text-color);
  border-color: var(--text-color);
}

.navbar-toggler-icon {
  @extend .navbar-toggler-icon;
  background-image: escape-svg(url("data:image/svg+xml,"));
}

The code above is the default Bootstrap 5 toggler CSS code, with some minor changes. One thing to note here: For the toggler icon, I hardcoded stroke=#000000 since black works with my theme colors. You may need to be more creative about picking color schemes that work well across the board.

(Image: Ayush Sharma, CC BY-SA 4.0)

And that's about it! The light and dark modes now work as expected!

(Images: Ayush Sharma, CC BY-SA 4.0)

Wrap up

Bootstrap 5 is complex, to say the least. There is a lot to think about when overriding it with your custom styling. Providing light and dark variants for every Bootstrap 5 component is difficult, but it's possible if you don't have too many components to deal with.

By keeping the presentation in Sass/CSS, reusing styles, and overriding some Bootstrap 5 defaults, it's possible to achieve light and dark modes. It's not a comprehensive approach, but it is practical and serviceable until Bootstrap 5 provides this feature out of the box.

I hope this gives you more practical ideas on how to add light and dark themes to your own website. If you find a better way to use your own CSS magic, don't forget to share it with the community.

Happy coding :)

This article originally appeared on the author's blog and has been republished with permission.


How open source and cloud-native technologies are modernizing API strategy

By Javier Perez | Tue, 04/26/2022

I recently had the opportunity to speak at different events on the topic of API strategy for the latest open source software and cloud-native technologies, and these were good sessions that received positive feedback. In an unusual move for me, on this occasion, I put together the slides first and then the article afterward. The good news is that with this approach, I benefited from previous discussions and feedback before I started writing. What makes this topic unique is that it’s covered not from the usual API strategy talking points, but rather from the perspective of discussing the latest technologies and how the growth of open source software and cloud-native applications are shaping API strategy.

I'll start by discussing innovation. All the latest software innovations are either open source software or based on open source software. Augmented reality, virtual reality, autonomous cars, AI, machine learning (ML), deep learning (DL), blockchain, and more are technologies built with open source software that uses and integrates with millions of APIs.

Software development today involves the creation and consumption of APIs. Everything is connected with APIs, and, in some organizations, there’s even API sprawl, which refers to the wide creation of APIs without control or standardization.

Technology stacks and cloud-native applications

In modern software development, there is the concept of stacks. Developers and organizations have so many options that they can pick and choose a combination of technologies to create their own stack and then train or hire what are known as full-stack developers to work on those stacks. An example of a stack includes, for the most part, open source software such as Linux, a programming language, databases, streaming technology, runtimes, and DevOps tooling, all using and integrating with APIs.

Technology stacks lead to cloud-native applications, which refer to container-based applications. Today, there are many cloud-native options across all technologies; the Cloud Native Computing Foundation (CNCF) landscape is a sample of the available cloud-native ecosystem.

When organizations move from applications in a handful of containers to applications in dozens or even hundreds of containers, they need help managing and orchestrating all that infrastructure. This is where Kubernetes comes into play. Kubernetes has become one of the most popular open source projects of our time and the de facto infrastructure for cloud-native applications, and it has led to a new and growing ecosystem of Kubernetes operators. Most popular software now has its own operator to make it easier to create, configure, and manage in Kubernetes environments, and, of course, operators integrate with Kubernetes APIs. Many data technologies now have Kubernetes operators that facilitate and automate the use of stateful applications through those same APIs.
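
To make "integrates with Kubernetes APIs" concrete, here is a small sketch using the official Kubernetes Python client (an assumption of this example: the kubernetes package is installed and a kubeconfig or in-cluster credentials are available). Operators do essentially this, continuously, through the same API surface.

from kubernetes import client, config

# Load credentials the same way kubectl does; inside a pod you would
# call config.load_incluster_config() instead.
config.load_kube_config()

core = client.CoreV1Api()

# A tiny read against the Kubernetes API: list pods in one namespace.
# Operators watch and reconcile resources through this same API.
for pod in core.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)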

What is the API management layer?

A cloud-native environment also has its own stack: cloud infrastructure, operating system, container orchestration, containers, operators, application code, and APIs. All of this supports a software solution that integrates and exposes data to mobile devices, web applications, or other services, including IoT devices. Regardless of the combination of technologies, everything should be protected with API management platform functionality. The API management platform is the layer that sits on top of the cloud-native applications and protects the data and APIs exposed outside the organization's network.

When it comes to technology architectures, it's highly important that the API management platform has flexible deployment options. The strategy and design should always include portability: the ability to move and deploy on different architectures (e.g., PaaS, on-premises, hybrid cloud, public cloud, or multi-cloud).

[ Try API management for developers: Red Hat OpenShift API Management ]

3 API strategies to consider for cloud-native technologies

To design an API strategy for the latest technologies, there are multiple options that can be summarized in three major areas. First is a modernization strategy: breaking monolithic applications into services, going cloud-native, and, of course, integrating with mission-critical applications on mainframes. For this strategy, secured APIs are built and maintained. The second area is what is known as headless architecture, the concept of adding features and functionality to APIs first and then optionally exposing that functionality in a user interface; it is a granular architecture designed with microservices, or entirely based on APIs, to facilitate integration and automation. The third area is a focus on new technologies, from creating API ecosystems that attract customers and partners who contribute to and consume public APIs, to selecting technology stacks and integrating them with new technologies such as AI, serverless computing, and edge computing. Above all, every API strategy must include API management and a security mindset.

API management platforms should include the full lifecycle functionality for API design, testing, and security. Additional features, such as analytics, business intelligence, and an API portal, allow organizations to leverage DevOps and full lifecycle management for the development, testing, publishing, and consumption of APIs.

Two other examples of today's technologies, and how knowing and using them can be part of an API strategy, are worth calling out. The first is DevOps integration: there is a variety of commercial and open source options for DevOps automation, with continuous integration and continuous delivery tooling as key pieces. The other very relevant space is data and AI technologies, a growing space with thousands of options for every stage of the AI development lifecycle, from data collection and organization to data analysis and the creation and training of ML and DL models. The final step in that lifecycle should include automated deployment and maintenance of those models. All of these steps should be fully integrated via APIs, with external integrations, including data sources, protected by the important layer of an API management platform.

Open source and the API management layer

In summary, with all these new technologies, from open source stacks and DevOps tooling to AI, the common layer of protection and management is the API management layer. An API strategy should be security-first and driven by API management. Remember that APIs are now everywhere and that modern technology stacks are integrated via APIs, with data technologies (databases and storage), DevOps, and AI leading the pack. Don't forget to design and manage APIs with security in mind. Whether the selected API strategy centers on modernization, a headless architecture, or new technology, it must go hand in hand with your technology choices and vision for the future.

[ Take the free online course: Deploying containerized applications ]

With new technologies from open source stacks and DevOps tooling to AI, the common layer of protection and management is the API management layer.

Image by: Opensource.com

Cloud Containers DevOps Kubernetes This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

5 agile mistakes I've made and how to solve them

Tue, 04/26/2022 - 15:00
5 agile mistakes I've made and how to solve them Kelsea Zhang Tue, 04/26/2022 - 03:00

Agile used to have a stigma as being "only suitable for small teams and small project management." It is now a well-established discipline used by software development teams worldwide with great success. But does agile really deliver value? Well, it depends on how you use it.

My teams and I have used agile since I started in tech. It hasn't always been easy, and there's been a lot of learning along the way. The best way to learn is to make mistakes, so to help you in your own agile journey, here are five agile mistakes I've made.

1. Mistake: Agile only happens in development teams

Here's what happens when you restrict agile to just your development team. Your business team writes requirements for a project, and that goes to the development team, with a deadline. In this case, the development team isn't directly responsible for business goals.

There's very little communication between teams, let alone negotiation. No one questions the demands made by the business team, or whether there's a better way to meet the same business goal.

This can be discouraging to development teams, too. When developers are only responsible for filling in the code to make the machine work, they're disconnected from the business.

The final product becomes a monster, lacking reasonable abstraction and design.

Solution: Spread agile through your organization. Let everyone benefit from it in whatever way that's appropriate for their department, but most importantly let it unify everyone's goals.

2. Mistake: Automated testing is too much work to set up

The role of automated testing, especially Test Driven Development (TDD), is often undervalued by the IT industry. In my opinion, automated testing is the cornerstone of maintainable and high-quality software, and is even more important than production code.

However, many teams today don't have the ability to automate testing, or they have the ability but skip it because of time constraints. Without the protection of automated testing, programmers can't continuously refactor bad code.

This is because no one can predict whether changing a few lines of code will cause new bugs. Without continuous refactoring, you increase your technical debt, which reduces your responsiveness to the demands of your business units.

Manual testing is slow, and it forces you to sacrifice quality, test only the changed part (which can be difficult), or lengthen the regression testing time. If the test time grows too long, you have to test in batches to reduce the number of test runs.

Suddenly, you're not agile any more. You've converted to Waterfall.

Solution: The key to automated testing is to have developers write and run the tests, instead of hiring more testers to write scripts. Tests written by a separate testing team tend to run slowly and deliver feedback to programmers slowly.

What's needed to improve code quality is rapid feedback on the program. The earlier an automated test is written, and the faster it's run, the more conducive it is for programmers to get feedback in a timely manner.

The fastest way to write automated tests is TDD. Write tests before you write the production code. The fastest way to run automated tests is unit testing.

3. Mistake: As long as it works, you can ignore code quality

People often say, "We're running out of time, just finish it."

They don't care about quality. Many people think that quality can be sacrificed for efficiency, so you end up writing low-quality code because you don't have time for anything else. In practice, though, low-quality code doesn't deliver high performance either.

Unless your program is as simple as a few lines of code, low-quality code will hold you back as code complexity increases. Software is called "soft" because we expect it to be easy to change. Low-quality code becomes increasingly difficult to change because a small change can lead to thousands of new bugs.

Solution: The only way to improve code quality is to improve your skills. Most people can't write high-quality code in one sitting. That's why you need constant refactoring! (And you must implement automated testing to support constant refactoring).

4. Mistake: Employees should specialize in just one thing

It feels natural to divide personnel into specialized teams. One employee might belong to the Android group, another to the iOS group, another to the backend group, and so on. The danger is that team composition changes frequently, which makes that kind of specialization difficult to sustain.

Solution: Many agile practices, such as team velocity, retrospective improvement, and absorbing staff turnover, are built around teams. Agile practices revolve around teams and the people on them. Help your team members diversify, learn new skills, and share knowledge.

5. Mistake: Writing requirements takes too much time

As the saying goes, "Garbage in, garbage out," and a formal software requirement is the "input" of software development. Good software cannot be produced without clear requirements.

In the tech industry, I have found that good product owners are more scarce than good programmers. After all, no matter how poorly a programmer writes code, it usually at least runs (or else it doesn't ship).

For most product managers, there is no standard to measure the efficacy of their product definitions and requirements. Here are a few of the issues I've seen over the years:

  • Some product owners are devoted to designing solutions while ignoring user value, which results in a bunch of costly but useless functions.

  • Some product managers can only tell big stories, and can't split requirements into small, manageable pieces, resulting in large delivery batches and reduced agility.

  • Some product owners have incomplete requirement analysis, resulting in bug after bug.

  • Sometimes product owners don't prioritize requirements, which leads to teams wasting a lot of time on low-value items.

Solution: Create clear, concise, and manageable requirements to help guide development.

Make mistakes

I've given you five tips on mistakes to avoid. Don't worry, though, there are still plenty of mistakes left to make! Take agile to your organization, and don't be afraid of enduring a few mistakes for the benefit of making your teams better.

Once you've taken the inevitable missteps, you'll know what to do differently the next time around. Agility is designed to survive mistakes. That's one of its strengths: it can adapt. So get started with agile, be ready to adapt, and make better software!

Take agile to your organization, don't be afraid of enduring a few mistakes for the benefit of making your teams better.

Agile This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

New open source tool catalogs African language resources

Mon, 04/25/2022 - 15:00
New open source tool catalogs African language resources Chris Emezue Mon, 04/25/2022 - 03:00

The last few months have been full of activity at Lanfrica, and we are happy to announce that Lanfrica has been officially launched.

What is Lanfrica?

Lanfrica aims to mitigate the difficulty encountered when seeking African language resources by creating a centralized, language-first catalog.

For instance, if you're looking for resources such as linguistic datasets or research papers in a particular African language, Lanfrica will point you to sources on the web with resources in the desired language. If those resources do not exist, we adopt a participatory approach by allowing you to contribute papers or datasets.

Image by: (Chris Emezue, CC BY-SA 4.0)

At Lanfrica, we employ a language-focused approach. With 2,199 African languages accounted for, our language section covers every African language, including the extinct ones! We have created algorithms that can effectively identify the African language(s) involved in a resource, enabling us to curate even works that do not explicitly specify the African languages they cover (and there are many).

Lanfrica offers enormous potential for better discoverability and representation of African languages on the web. Lanfrica can provide useful statistics on the progress of African languages. As a simple illustration, the language filter section offers an immediate overview of the number of existing natural language processing (NLP) resources for each African language.

Image by: (Chris Emezue, CC BY-SA 4.0)

From this search result, you can easily see that among South African languages, Afrikaans has 28 NLP resources, while Swati has just eight. Or, to take another example, the Gbe cluster languages of Benin have far fewer NLP resources than some of the South African languages.

Image by: (Chris Emezue, CC BY-SA 4.0)

Such insight can lead to better allocation of funds and efforts towards bringing the more under-researched languages forward in NLP, thereby fostering the equal progress of African languages.

Lanfrica v1 is just the beginning. We have major updates coming up in the future:

  • We plan to enable our users to sign up and add to or edit the resources on Lanfrica.

  • Our resources currently consist of NLP datasets. Next, we plan to work on publications in computational linguistics and general linguistics. See the infographic above for all the types of resources planned for inclusion.

  • We are exploring various techniques to simplify the process through which relevant resources are identified and connected to Lanfrica.

For more updates as we move forward, become part of the Lanfrica community by joining our Slack or following us on Twitter.

This article originally appeared on the Lanfrica blog and is republished with permission.

Lanfrica enables research on any of the current and extinct languages from the African continent.

Image by: Geralt. CC0.

Tools Accessibility This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Prevent Kubernetes misconfigurations during development with this open source tool

Mon, 04/25/2022 - 15:00
Prevent Kubernetes misconfigurations during development with this open source tool Noaa Barki Mon, 04/25/2022 - 03:00

I'm a developer by nature, but I've been doing a lot of DevOps work lately, especially with Kubernetes. As part of my work, I've helped develop a tool called datree with the aim of preventing Kubernetes misconfiguration from reaching production. Ideally, it helps empower collaboration and fosters a DevOps culture in your organization for the benefit of people like me, who don't always think in DevOps.

A common scenario

The following scenario demonstrates a problem faced by many tech companies:

  • At 3:46AM on a Friday, Bob wakes up to the sound of something falling onto his bedroom floor. It's his phone, showing 15 missed calls from work.
  • Apparently, Bob had forgotten to add a memory limit to a deployment, so a memory leak in one of the containers went uncapped and eventually caused all the Kubernetes nodes to run out of memory (a fix for the missing limit is sketched just after this list).
  • He's supremely embarrassed about this, especially because the DevOps team had put so much effort into educating developers like him about Kubernetes and the importance of a memory limit.
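For illustration, the missing limit could have been set declaratively in the Deployment manifest under resources.limits, or imperatively with kubectl. This is only a sketch; the deployment name and value here are hypothetical:

$ kubectl set resources deployment/my-app --limits=memory=512Mi   # "my-app" is a placeholder name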

How could this happen? Well, imagine that Bob works at Unicorn Rentals. Like many companies, they started as a tiny founding team of two developers, a CEO, and a CTO. Things were slow at first, but eventually everybody wanted to rent a unicorn, and when that happened, the company couldn't afford production outages.

A series of accidents like the one that woke Bob up at 3:46AM led the company to realize that something had to change.

If that mirrors scenarios in your own organization, then it could be that something needs to change for you, too.

The problem: scaling security policies

To avoid uncomfortable development issues and significant bugs in production, you need to educate your developers. They need to know about Kubernetes, how it works, how to develop for it, and what they can do with it.

You also need to define policies so that a resource that doesn't meet certain specifications never enters the cluster. But what happens when there are hundreds of repos? How are those policies managed at scale? How can procedures be monitored and reviewed?

Datree is an open source command-line solution that enables Kubernetes admins to create policies and best practices they want the team to follow.

Datree allows admins to: 

  • Enforce policy restrictions on development: Enforce restrictions before applying resources to the cluster.
  • Enable restrictions management: Flexible management of restrictions in a dedicated place across the entire organization empowers administrators to control their systems fully.
  • Educate about best practices: Free the DevOps team from the constant need to review, fence off, and future-proof every possible pitfall across all current and future self-service deployments.
Why Datree?

Datree aims to help admins gain maximum production stability with minimum time and effort by enforcing policies before misconfigured resources reach production. 

  • Education and best-practices assurance: The CLI application simplifies the Kubernetes deployment experience, so developers don't need to remember every rule governing development, and the DevOps team no longer forms a bottleneck. Datree's CLI application comes with Kubernetes best practices built in, so there's no need to rely on human observation and memory.
  • Enforcement on development: Developers are alerted early, as soon as a misconfiguration occurs in the PR. This way, they can catch mistakes before their code moves to production/collaborative environments.
  • DevOps culture: Datree provides a mechanism similar to other development tools like unit tests. This makes it easier for developers because they are already used to these tools. Testing is the most common activity that developers carry out. Using familiar tools can be a great foundation for cultivating a DevOps culture.
How Datree works

The datree command runs automatic checks on every resource that exists in a given path. These automatic checks include three main validation types: 

  1. YAML validation
  2. Kubernetes schema validation
  3. Kubernetes policies validations
$ datree test ~/.datree/k8s-demo.yaml
>> File: .datree/k8s-demo.yaml
[V] YAML validation
[V] Kubernetes schema validation
[X] Policy check

X Ensure each container image has a pinned (tag) version [1 occurrence]
  - metadata.name: rss-site (kind: Deployment)
!! Incorrect value for key `image` - specify an image version to avoid unpleasant "version surprises" in the future

X Ensure each container has a configured memory limit [1 occurrence]
  - metadata.name: rss-site (kind: Deployment)
!! Missing property object 'limits.memory' - value should be within the accepted boundaries recommended by the organization

X Ensure workload has valid Label values [1 occurrence]
  - metadata.name: rss-site (kind: Deployment)
!! Incorrect value for key(s) under 'labels' - the values syntax is not valid so the Kubernetes engine will not accept it

X Ensure each container has a configured liveness probe [1 occurrence]
 - metadata.name: rss-site (kind: Deployment)
!! Missing property object 'livenessProbe' - add a properly configured livenessProbe to catch possible deadlocks

[...]

After the check is complete, Datree displays a detailed output of any violation or misconfiguration it finds, which guides developers to fix the issue. You can run the command locally, but it's specially designed to run during continuous integration (CI) or even earlier as a pre-commit hook (yes, without losing the explanations of the reasoning behind each policy).
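As an example of the pre-commit case, a small hook script can run the same check on staged manifests before a commit is even created. This is a minimal sketch, assuming the datree CLI is installed and on your PATH; it is not Datree's official hook integration:

#!/bin/sh
# .git/hooks/pre-commit (sketch): run datree against staged Kubernetes manifests
changed=$(git diff --cached --name-only -- '*.yaml' '*.yml')
[ -z "$changed" ] && exit 0        # nothing relevant staged
for f in $changed; do
  datree test "$f" || exit 1       # block the commit if any policy check fails
done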

Along with the command-line application, Datree enables complete management of policies through its UI, such as creating new customized policies, reviewing the full history of invocations, and more.

Image by: (Noaa Barki, CC BY-SA 4.0)

How I've embraced the DevOps mindset

As a front-end and full stack developer, I was trained to think solely about code, and I have always found DevOps technologies and thought processes to be a mystery. But recently, I was challenged to develop a CLI application at Datree and began to understand the importance and functionality of DevOps.

My mantra is, "Our job as developers isn't about coding—it's about solving real-life problems." When I started working on datree, I had to understand more than just the real-life problem. I also had to know how it became a problem in the first place. Why do organizations adopt Kubernetes? What's the role of the DevOps engineer? And most of all, for whom am I developing my application?

Now I can honestly say that through developing datree, I entered the world of Kubernetes and learned that the best way to learn Kubernetes is by embracing DevOps culture. Developing the datree command has taught me the importance of understanding my user persona. More importantly, it helped me gain fundamental knowledge about the ecosystem of an application and understand the product and user journey.

Summary

When Kubernetes is adopted, the culture of your development environment changes. DevOps isn't something that happens overnight, especially in a large organization. This transition can be aided with technology that helps developers catch their own mistakes and learn from them in the future. 

With Datree, the gap between DevOps and developers has begun to shrink. Even diehard coders like me have started to take ownership of limitation policies. The code sent to production is of higher quality, saving time and preventing embarrassing mistakes.

Datree is an open source command that enables Kubernetes admins to create policies and best practices they want the team to follow.

Kubernetes DevOps Command line This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Keep your Exif metadata private with this open source tool

Sat, 04/23/2022 - 15:00
Keep your Exif metadata private with this open source tool Don Watkins Sat, 04/23/2022 - 03:00

These days, nearly everyone has a digital camera. Cameras are an integral part of smartphones and laptops. If you're interacting with consumer electronics, you probably have a digital camera available.

Accordingly, there are billions of digital images on the internet from various devices and sources. Each image from a digital camera has Exchangeable image file format (Exif) metadata embedded into it. Exif data provides information about where and when the picture was taken, the camera used to produce the image, the file size, MIME type, color space, and much more.

Each picture you take with a digital camera contains numerous tags which provide a great deal of information, some of which might ordinarily be considered confidential.

Major social media platforms maintain that they remove this metadata to protect users from cybercrime. That is not the case for folks who have their own blogs and wikis and are posting pictures of loved ones, family gatherings, and classrooms. A person could download an image from a site and gain access to damaging personal information stored in the metadata.

View Exif data

How can you know what metadata is included in the images you share, and how can you remove it? Recently, I came across an open source project named ExifCleaner. ExifCleaner is a cross-platform open source tool that easily removes all Exif metadata from images, videos, PDFs, and other types of files.

Install ExifCleaner

ExifCleaner is released under the MIT license. It's easy to use and install.

Download and install the AppImage, deb, or rpm file on your Linux system.

For macOS and Windows, download the macOS installer or the Windows installer.

Use ExifCleaner

Once installed, launch the graphical application.

Image by: (Don Watkins, CC BY-SA 4.0)

You can either drag and drop an image into the window or use Open from the File menu to load an image. You can load multiple images at once.

Once loaded, ExifCleaner clears all metadata instantly. There's no further action required, but there's also no confirmation or warning. Only open files in ExifCleaner that you want to scrub metadata from.
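If you'd like to confirm the result from a terminal, the separate open source exiftool utility (not part of ExifCleaner) can list whatever tags remain in a file; the filename here is just an example:

$ exiftool vacation-photo.jpg    # prints any metadata tags still present in the file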

Image by: (Don Watkins, CC BY-SA 4.0)

ExifCleaner works on dozens of file types, including JPG, 3G2, 3GP2, AAX, CR2, MOV, PDF, PNG, and many more.

Try ExifCleaner

ExifCleaner is available in twenty-four different languages. There is a large development community. If you are interested in contributing to the project's development, contact the team and check out the source code. Learn more about ExifCleaner at the official website.

ExifCleaner is a cross-platform open source tool that easily removes all Exif metadata from images, videos, PDFs, and other types of files.

Image by: g4ll4is on Flickr. CC BY-SA 2.0

Security and privacy Open Studio This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

3 things to know about Drupal in 2022

Fri, 04/22/2022 - 15:00
3 things to know about Drupal in 2022 Shefali Shetty Fri, 04/22/2022 - 03:00

A broad range of enterprises, including nonprofits, media and publishing, government agencies, education, and more, rely heavily on Drupal. But while Drupal is widely recognized as one of the most robust and flexible content management systems (CMS), it also has a reputation for being difficult to work with.

Research conducted at a 2019 DrupalCon suggested that while experienced developers felt empowered and loved working with Drupal, novice users found it challenging to learn and work with. The Drupal community recognized that there was a serious need to improve the ease of use right from the moment you install Drupal.

Since then, several strategic initiatives have been rolled out to make Drupal easier to use and to empower amateur users to build beautiful digital experiences.

Simplifying the out-of-the-box experience

Editorial teams, content creators, marketers, publishers, and tech advisors often use Drupal as an editorial platform. Many of them are beginner to intermediate users. Prioritizing their experience is one of the Drupal community’s top missions.

To get the ball rolling, Dries Buytaert, founder of Drupal, launched the "Easy-out-of-the-box" initiative at DrupalCon 2020. From the outset, the initiative aimed to provide an easy, intuitive, and modern out-of-the-box user experience.

It offers three benefits in one package:

  • Media library: A flexible and robust digital asset management tool that is easy to work with, even for novice users. Finding, adding, using, deleting, and reusing media files has never been easier. It offers an intuitive interface that is customizable and robust. Media has been a part of Drupal core since the release of Drupal 8.4. The goal now is to have it enabled by default. Work is currently in progress on enhancing Media's usability, design, and accessibility.

Image by: Shefali Shetty, CC BY-SA 4.0

  • Layout Builder: A WYSIWYG-like experience for editors with easy-to-use page-building capabilities. It offers powerful UI tools with intuitive drag-and-drop features that require little or no code to create and customize modern page layouts. Layout Builder has been a stable Drupal core module since Drupal 8.7. Like Media, Layout Builder is not enabled on installation of Drupal. Currently, the initiative team is working towards enabling site builders to lay out the headers, footers, and sidebars of pages as well.

Image by: Shefali Shetty, CC BY-SA 4.0

  • Claro admin theme: This is a fresh, powerful, and accessible administration theme with a modern look and feel. Not only is it easy on the eyes—Claro also offers powerful and advanced visual elements and is compliant with the latest accessibility standards. Currently, Claro is in an experimental phase in Drupal core and is not stable. Before enabling Claro as the default admin theme, more work is needed to enhance its usability, accessibility, and design.

Image by: Shefali Shetty, CC BY-SA 4.0

A new front-end theme: Olivero

Bartik, Drupal’s default front-end theme, has been around for more than 10 years. Drupal 8 saw a new release of Bartik that was responsive out of the box and had significant improvements in its structure, extensibility, and design. But ever-evolving web design trends called for a more advanced, modern, and impressive theme. Bartik's design, layout, and functionality feel outdated compared to Drupal's sophisticated backend.

The new front-end theme, Olivero, was named after Rachel Olivero (1982-2019), a Drupal community member and head of the organizational technology group at the National Federation of the Blind. Olivero is now a stable theme in Drupal 9.3 and will become the default front-end theme with the Drupal 10 release.

Some of the fantastic features of Olivero:

  • Modern design: The design elements have been built to stay relevant for years to come. The color palette gives the theme a shiny, light, and modern look. Elements like drop shadows and heavy colors are used very sparingly. The typography used for the body, headers, and other UI elements is proportionate and resizes with the device. Buttons are intuitive and come in highly contrasting colors that are easy on the eyes. Collapsible first-level menus make navigation easy even on lengthy pages or wider screens. Olivero offers tons of customizations in the theme settings to suit every user's needs.

  • Futuristic functionality: Olivero supports Drupal's out-of-the-box multilingual functionality, including the display of right-to-left (RTL) languages such as Urdu, Arabic, and Hebrew. It is compatible with all the latest browsers without the need to customize code for each of them. The theme will support some of the most useful modules, including Media, Layout Builder, and more. It will also offer and support customizations to secondary navigation. Olivero uses PostCSS to compile CSS and to improve browser support, which also helps a lot when you need further theme customizations.

  • Accessibility: Accessibility has been one of the highest priorities in proposing this theme. Much work has gone into making Olivero compliant with the Web Content Accessibility Guidelines (WCAG) Level AA. The team is continuously working on enhancing the design to get through Drupal's rigorous accessibility gate. The color contrast ratios in the palette used across the theme's design, typography, forms, messages, and buttons enhance its accessibility.

Image by: Shefali Shetty, CC BY-SA 4.0

Project Browser: A module marketplace

Browsing through over 40,000 Drupal modules is not an easy task. In the current setup, if you need to find and install a contributed module, you have to step out of your Drupal site, head over to Drupal.org, search for the module, and then install it. Often, site builders need more advanced technical skills to install a module via Composer (a dependency manager for PHP) on a command-line interface.
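For example, adding a contributed module today typically means running something like the following outside the Drupal UI (the module name is only an example, and drush is a separate command-line tool many site builders also rely on):

$ composer require drupal/admin_toolbar   # download the module and its dependencies
$ drush en admin_toolbar                  # enable the module on the site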

The Project Browser initiative, proposed by Dries during DrupalCon 2021, seeks to make it easier for site builders new to Drupal to browse and install modules with the click of a button. It will empower developers and site builders to discover and experiment with modules of their choice instantly. To ensure real-time data access, the component will connect to the Drupal.org API using a decoupled approach. The team is actively engaged in:

  • Building a marketplace-like browser within Drupal, so you don't have to leave your site looking for modules
  • Creating a powerful UI enabling a streamlined view of projects that is easy to filter and sort
  • Preparing a minimum viable product (MVP) to be shipped as a contributed feature for users to try out, eventually moving it to Drupal core

The initiative is in its early stages now, but a lot of work has happened already. The team aims to have a contributed module Project Browser ready in Drupal 10.

Image by: Shefali Shetty, CC BY-SA 4.0

Final thoughts

I have written before about how Drupal has been adopting continuous innovation and implementing its planned strategic initiatives, as promised. With ease of use and beginner-friendliness among Drupal's top priorities, the community is briskly marching towards building an easier, more beautiful, and more modern digital experience platform.

By the way, DrupalCons are a great place to meet, learn, and collaborate with Drupal community members and contribute to advancing the open source platform. If you're looking to connect, DrupalCon Portland 2022 is coming up soon (April 25-28) in Portland, OR, USA.

Several changes have made Drupal more accessible and easier to use.

Drupal This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

5 open source tips to reduce waste in web design

Thu, 04/21/2022 - 15:00
5 open source tips to reduce waste in web design Tom Greenwood Thu, 04/21/2022 - 03:00

I started my career in product design, when "product" meant a real thing that you could hold in your hand. When I moved into digital design 15 years ago, I was excited to design digital products that added value to people's lives without any environmental impact. They didn't waste energy, didn't have any wasteful packaging and didn't end up as waste in landfill sites at the end of their lives.

Therefore, I was surprised to later learn that digital products can be wasteful. In this article, I explore how applying a zero waste mindset to digital design and development can help you create an internet that's better for people and the planet.

Waste isn't normal

I think it's fair to say that even if we don't like it, most of us accept waste as a normal part of everyday life. However, waste is anything but normal. In nature no resource is wasted and everything has value. The type of waste that we now think of as being so normal is a relatively new concept.

By the 1980s, waste piling up in landfill sites was already a global problem. Daniel Knapp decided that something must be done. He came up with the concept of Total Recycling, in which nothing should ever go to a landfill or incineration. He coined the term Zero Waste as the goal and co-founded a salvaging operation called Urban Ore. It was a real world experiment to demonstrate how all types of waste could be diverted from landfills and reused in the community.

While Knapp's initiative had some success, the global waste problem kept growing and in the mid-2000s a growing number of individuals began to take things into their own hands, trying to live zero waste lifestyles. This concept was popularized by bloggers such as Bea Johnson and Lauren Singer who shared their experiences trying to live without waste and inspired others to follow their lead.

How does this apply to web design?

Several years ago, I embarked on some research to understand whether or not web products have an environmental impact. I was shocked by what I found. When taken as a whole, the internet produces more carbon emissions each year than the global aviation industry, thanks to the huge amount of electricity required to power data centers, telco networks, and billions of end user devices. Not to mention the fact that all of that equipment needs to be manufactured and maintained. The internet is not virtual at all, it is very much physical.

It turns out that despite their basic functionality and appearances, early websites were super efficient, with tiny file sizes and requiring hardly any computing power. As computers got more powerful and internet speeds increased, websites became increasingly bloated, eroding the benefits of advances in computer hardware. As a result, the modern web is no faster than it was 10 years ago, and is far more polluting.

In an article for National Geographic about people living zero waste lifestyles, journalist Stephen Leahy wrote that contrary to his prior assumption, "These are not wannabe hippies, but people embracing a modern minimalist lifestyle. They say it saves them money and time and enriches their lives."

What if we applied a zero waste mindset to digital design? Could it help us create a modern, minimalist web that saves people time and money and enriches their lives? I think it could.

1. Pictures are more than a thousand words

A picture tells a thousand words, but the truth is that a picture uses far more data than 1,000 words of text (a thousand words of plain text is on the order of 6KB, while a typical photographic JPEG easily runs to hundreds of kilobytes), and in turn it uses a lot more energy to store, transmit, and render.

Research by the Nielsen Norman Group found that website visitors completely ignore images that are not relevant to the content, making generic stock photos on websites a literal waste of space and of data. It's better to use images mindfully and only include them in designs where they truly add value.

Even if you are going to include photos in your designs, how you use them can often be wasteful. For example, there is roughly a square law when it comes to image dimensions and file size. If you double the width and height, you almost quadruple the file size. And that's assuming you've written the code to load the correct size of image rather than loading large image files and displaying them to appear small.
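One low-effort way to avoid that waste is to generate correctly sized variants ahead of time. Here's a sketch using ImageMagick's convert command; the filenames, widths, and quality setting are arbitrary:

$ convert hero.jpg -resize 1600x -quality 80 hero-1600.jpg   # for large screens
$ convert hero.jpg -resize 800x -quality 80 hero-800.jpg     # for small screens, roughly a quarter of the data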

You can also find waste within the images, in the form of detail that doesn't need to be there. Removing detail by blurring out parts of an image, using photography with shallow depth of field, photographing objects on plain backgrounds, or using monochrome images are just a few ways to reduce image file sizes. If the detail isn't needed, then it's waste.

Even if you design images efficiently, there's still potential waste in the image files themselves. Using indexed color in your image editing application can strip out unnecessary data from image files with no visual loss of quality.
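For PNG files, one way to do this from the command line is the open source pngquant tool, which converts an image to an indexed palette; the filename is just an example:

$ pngquant 256 --output diagram-indexed.png diagram.png   # reduce the image to a 256-color palette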

2. Choose your file format wisely

You can also use more efficient file formats. For example, WebP image files are typically 30% smaller than JPEGs, and AVIF image files are roughly half the size of JPEGs.
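Converting images is straightforward with the command-line encoders that ship with the open source libwebp and libavif projects, assuming they're installed; the filenames are examples:

$ cwebp -q 80 hero.jpg -o hero.webp    # JPEG to WebP at quality 80
$ avifenc hero.jpg hero.avif           # JPEG to AVIF with default settings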

Vector graphics such as SVG can also be much more efficient alternatives to photography on websites. You can optimize your SVG files by stripping out unnecessary layers in the design files and simplifying vector paths. The size of an SVG file can be reduced as much as 97% simply by spending a few minutes cleaning up the design file.
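Much of that cleanup can be automated. For example, the open source SVGO tool optimizes an SVG from the command line, assuming it's installed; the filename is an example:

$ svgo icon.svg -o icon.min.svg   # strips editor metadata and simplifies paths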

3. Stop autoplaying video

Autoplay videos consume far more data and energy than other content types. New York Times journalist Brian Chen wrote an article about the scourge of autoplay videos on the web, stating that they "demand your attention while burning through your data plan and sucking up your batteries." They waste a user's data plan (and therefore their money), they waste energy, and they slow down web pages. Use video sparingly, and put a play button on it to allow users to opt in.

4. Zero waste fonts

System fonts might not be popular with designers, but they already exist on every user's device so they don't need to be loaded, making them truly zero waste. For example, a travel website might use a system font to deliver an efficient user experience for its users, many of whom may be abroad and using slow, expensive roaming data.

The font-family CSS property provides some generic family names you can use to designate fonts that are already installed on the host system:

  • serif
  • sans-serif
  • cursive
  • system-ui

If you do use web fonts, the easiest place to look for waste is to identify characters in the font file that your website doesn't use. For example, some fonts supply thousands of characters, yet the English language only needs about one hundred. There are a number of font subsetting tools available online that can take any font file and strip out characters not used in your target languages.
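One such tool is pyftsubset from the open source fonttools project. A minimal sketch, assuming the font file is local and you only need the basic Latin range:

$ pyftsubset OpenSans-Regular.ttf \
    --unicodes="U+0020-007E" \
    --output-file=OpenSans-Regular.subset.ttf   # keep only printable ASCII glyphs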

When selecting the font to use, a browser doesn't stop at the first font in your CSS list. Font selection is actually done for each character on the webpage under the assumption that when one font lacks a specific glyph, another font in your list might provide it. If you know you need a font for a set of special characters, add that font only after you've set the main font choice to a system font.

Just like images, you can save more data by using efficient file formats. WOFF2 font files can be about 30% smaller than WOFF files, and as much as 75% smaller than TTF files.
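The conversion itself is a one-liner with Google's open source woff2 tools, assuming woff2_compress is installed:

$ woff2_compress OpenSans-Regular.subset.ttf   # writes OpenSans-Regular.subset.woff2 next to the input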

5. Find the waste in your code

The tool CSSstats.com allows you to visualize what is actually in your CSS files and see how often you duplicate the same styles. Seeing this waste can help you clean it up, and implementing a modular design language with repeatable styles can help you maintain clean, efficient CSS over the long term.

When choosing libraries, frameworks or tracking scripts, you should ask yourself whether they're really necessary, and whether smaller alternatives are available. For example, jQuery might only be 30kb, but it's possible to build an entire web page in less than 30kb. If you can avoid adding it, you should. Likewise, the basic Google Analytics tracking script is 17kb but an alternative like the open source Plausible analytics is less than 1kb and is designed to respect people's privacy.

Some programming languages are also more wasteful than others in terms of the energy efficiency with which they perform tasks. One widely cited academic comparison of programming language energy use found JavaScript to be roughly seven times more energy efficient than PHP for equivalent tasks. You should keep this in mind when deciding which new languages to learn and which technologies to specify for future projects.

Reducing waste on the web is good for everyone

It's true that eliminating waste in our web projects requires a bit of extra attention to detail, but when you do so you can create web experiences that are not only better for the environment, but deliver faster, more accessible user experiences too. Who doesn't want that?

So perhaps you should ask yourself this question from Urban Ore: "If you're not for zero waste, how much waste are you for?"

Achieve zero waste web design with these open source tools and tips.

Image by: Opensource.com

Web development Science This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Linux KDE receives first-ever eco-certification for Okular

Thu, 04/21/2022 - 15:00
Linux KDE receives first-ever eco-certification for Okular Seth Kenlon Thu, 04/21/2022 - 03:00

The open source community KDE recently received the German Blue Angel (Blauer Engel) ecolabel for energy efficiency. The software, Okular, is a universal document viewer designed to work on multiple platforms with a wide variety of file formats.

As a longtime member of the KDE community and a happy Plasma Desktop user, I asked Joseph De Veaugh-Geiss of the KDE Eco group about the ways KDE and open source can help computing be eco-friendly.


Q: KDE has announced that sustainability is a top priority. People don't typically consider software a factor in contributing to physical waste, so what does it mean for an application to be eco-friendly?

Joseph De Veaugh-Geiss: Software can produce waste in many ways. Software that reduces this waste is software that is more sustainable. User autonomy and transparency, the pillars of Free and Open Source Software, are factors that the Blauer Engel ecolabel recognizes as critical for sustainable software.

I can illustrate with some examples.

A computer may be rendered hardly usable, or not usable at all, due to inefficient software design, feature creep, and other forms of software bloat that users may not need or even want. Yet vendors force users to buy newer, more powerful hardware. When updates for a device, like a mobile phone or tablet, are discontinued, most people discard the device as e-waste because continued use would be a security risk. This e-waste can have huge environmental costs.

According to a report in Anthropocene Magazine, the production of a smartphone accounts for 85% to 95% of its annual carbon footprint due to the energy-intensive processes required to mine the metals. Giving users autonomy in how their software runs, what is installed or uninstalled, which devices are supported, and so on is critical for reducing hardware waste.

Q: I imagine the way software interacts with hardware can also be inefficient. Does KDE take this into consideration?

Joseph De Veaugh-Geiss: Software can waste energy, which in turn drives up electricity bills and drains the battery. For example, advertisements or tracking data transmitted in the background are common causes of excess energy use. Users are usually powerless to opt out of such background computations, and in many cases these wasteful processes have nothing to do with the primary functions of the software.

Consider a report from the German Environment Agency, which found that two text editors performing the same task had drastically different energy demands: To get identical end results, one text editor consumed 4 times the energy compared to the other!

In probably every country in the world, every student, official, and everyday user uses a text editor. If you increase software efficiency by 4 times for billions of users worldwide, the numbers quickly add up. Choosing the more energy-efficient text editor would mean nontrivial energy savings, but transparency about software's energy demands is necessary to make such choices.

KDE Eco views eco-friendliness in terms of a range of factors that reduce waste and increase sustainability. The Blue Angel award criteria for software, which is a focus of the Blauer Engel 4 FOSS (free/open source software) project, provides an excellent benchmark for evaluating the eco-friendliness of software.

Q: Is there a benefit to users for their software to be sustainable?

Joseph De Veaugh-Geiss: Both software that conserves energy by reducing unnecessary background processes and software that is more energy-efficient with identical results can lead to lower electricity bills, longer battery usage, extended hardware life, higher software responsiveness, and so on. And you can save money by continuing to use functioning hardware with up-to-date software.

Most important of all, using software that is sustainable may reduce the environmental impact of digitization and contribute to more responsible use of shared resources.

Q: When programming, what things can a developer keep in mind to make their code sustainable?

Joseph De Veaugh-Geiss: I'm not a coder, but measuring energy consumption is an important first step in achieving more sustainable software. Once the numbers are known, developers can drive down the code's energy demands on hardware. This is why KDE Eco is working on setting up a community measurement lab to make measuring energy consumption accessible to FOSS projects.

The SoftAWERE project from the Sustainable Digital Infrastructure Alliance, which KDE Eco has been collaborating with, is looking to make energy consumption measurements part of the CI/CD pipeline. These tools help developers make their code more sustainable.

Q: Have you had to make trade-offs when programming Okular to make it more sustainable? In other words, have you had to sacrifice quality or features for sustainability?

Joseph De Veaugh-Geiss: In terms of the Blue Angel ecolabel, with its emphasis on transparency in energy and resource consumption and user autonomy, Okular was already quite close to compliance.

Most of the work was in measuring the energy and hardware demands when using Okular and analyzing the results—carried out by researchers at Umwelt Campus Birkenfeld—as well as documentation of fulfillment of the award criteria. In some cases, we lacked documentation simply because we in the FOSS community may take many aspects of user autonomy for granted, such as freedom from advertising, uninstallability, or having continuous updates provided free of charge. In this respect, there was no sacrifice in quality or features of the software, and in some cases we now have better documentation after completing the application for eco-certification.

We will see what the future brings, however: In order to remain compliant, the energy demand of Okular must not increase more than 10% compared to the value at the time of application. It is possible this could require trade-offs at a future date. Or not!

Q: The Plasma Desktop isn't generally considered a lightweight desktop, especially when compared to something like LXQt. If an aging computer can't handle the full desktop, can I still benefit from K apps such as Okular?

Joseph De Veaugh-Geiss: Yes, I believe there is a benefit to using Okular and other KDE apps over less efficient alternatives regardless of the desktop.

Q: Why do you think Okular got the attention of the Blue Angel project instead of other KDE applications like Gwenview, Dolphin, Elisa, and so on?

Joseph De Veaugh-Geiss: Everybody needs a PDF and general document viewer! And Okular is multiplatform software, with downloads available for GNU/Linux, Plasma Mobile, Android, and Windows. This made Okular an attractive candidate for a Blue Angel application.

Please keep in mind, however, that we are working on certifying other KDE software in the near future. We already have energy consumption measurements for KMail and Krita, thanks to the work of the Umwelt Campus Birkenfeld, and we are preparing to measure Kate and GCompris in our coming community lab at KDAB (Klaralvdalens Datakonsult AB) Berlin. Moreover, we have begun reaching out to the wider FOSS community regarding measuring and improving energy efficiency and possible Blue Angel eco-certification.

Q: How important is open source to the idea of sustainable computing?

Joseph De Veaugh-Geiss: Free and open source software can promote transparency and give users control over the software they use, rather than companies or device manufacturers. This means users, and their communities, can directly influence the factors that contribute to sustainable software design, whether when using the software or developing it.

Q: What are your future plans for KDE Eco?

Joseph De Veaugh-Geiss: In the coming weeks, we will set up the first community lab at KDAB Berlin for measuring the energy consumption of Free Software. Once the lab is set up, we will have a measure-athon to measure Kate, GCompris, and other Free Software applications. We plan to publish the results, and over time we hope to push more and more developers, FOSS or otherwise, to be transparent about the energy demands of their software products.

With more software measured, we hope to attract developers to help us develop tools to make energy consumption measurements more accessible. For instance, there is a great data analysis tool—OSCAR (Open source Software Consumption Analysis in R)—but it will need maintenance. Perhaps there are other data analysis tools we could develop for this work. Moreover, our long-term vision for the lab is to have an upload portal where developers can upload their software and usage scenarios, and the entire measurement and data analysis process is automated.

We look forward to working with the FOSS community to make these kinds of toolsets a reality!

The open source document viewer is just one element of KDE's initiative to make software more sustainable.

Image by: KDE.org

Linux Science This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How Linux rescues slow computers (and the planet)

Thu, 04/21/2022 - 15:00
How Linux rescues slow computers (and the planet) David Both Thu, 04/21/2022 - 03:00

Mint and Kasen, two of my grandkids, asked me to help them build gaming computers. I am ecstatic that they asked. This gives me a great opportunity to help them learn about technology while being a part of their lives. Both of those things make me happy. There are many ways to approach the ecological impact of computers.

Wait! That's quite a non-sequitur—right? Not really, and this article is all about that.

What happens to old computers?

What happens to old computers (and why) is a big part of this discussion. Start with the typical computer getting replaced after about five years of service. Why?

Online articles, such as this one I found on CHRON, a publication aimed at small businesses, suggest a three-to-five-year lifespan for computers. This is partly based on the alleged fact that computers slow down around that point in their life cycle. This article and others like it also pressure you to get a newer, faster computer within that same time frame. Of course, much of that pressure comes from the computer and chip vendors who need to keep their income streams growing.

The United States Internal Revenue Service reinforces this five-year service life by specifying that time frame for full depreciation of computers.

Let's start with the myth of computer slowdowns. Computers don't slow down—ever. Computers always run at their designed clock speeds. Whether that is 2.8GHz or 4.5GHz, they will always run at that speed when busy. Of course, the clock speeds get intentionally reduced when the computer has little or nothing to do, saving power.
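You can watch this behavior for yourself on most Linux systems; the reported frequency drops while the machine is idle and climbs back to its rated speed under load:

$ watch -n1 "grep 'cpu MHz' /proc/cpuinfo"   # refresh the per-core clock speed every second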

Computers don't slow down because they are old. Computers with Windows installed produce less legitimate work as they grow older because of the massive amount of malware, spyware, adware, and scareware they accumulate over time. Computer users have come to believe that this is normal, and they resign themselves to life with all of this junk dragging down the performance of their computers.


Linux to the rescue

Because I am the known computer geek among my friends and acquaintances, people sometimes gift me their old computers. They no longer want them because they are slow, so they give them to me and ask me to wipe their hard drives before the machines go to the electronics recycling center a few blocks from my house. I always point out that their three-to-five-year-old computers are still good, but they seem intent on spending money rather than learning a new operating system.
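
Wiping a drive is simple from a Linux live USB, by the way. Here is the sort of command I mean; /dev/sdX is only a placeholder, and this is irreversibly destructive, so triple-check the device name with lsblk first:

$ lsblk                            # identify the target drive first
$ sudo shred -v -n 1 -z /dev/sdX   # overwrite the whole drive once, then finish with a pass of zeros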

I have several old computers that were gifted to me. One in particular, a Dell Optiplex 755 with a 2.33 GHz Core 2 Duo processor and 8GB of RAM, is especially interesting. Its BIOS is dated 2010, so it is around 12 years old. It is the oldest computer I have, and I keep it quite busy. I have had it for several years, and it never slows down because I use Linux on it, currently Fedora 35.

If that seems like an exception, here are more examples. I built three computers for myself in 2012, ten years ago, and installed Fedora on all of them. They are all still running with no problems and just as fast as they ever did.

There are no exceptions here, just normal operations for old computers on Linux.

Using Linux will at least double the usable lifetime of a computer, at no cost. This keeps those computers out of the landfill (at worst) and out of the recycling centers (at best) for an additional five to seven years or more.

So long as I can find replacement parts for these computers, I can keep them running and out of any disposal or recycling path. The problem with some computers is finding parts.

Non-standard hardware

Let's talk about non-standard hardware and some of the computers that you can buy from some well-known companies. As I mentioned above, one of my old computers is a Dell. Dell is a respectable company that has been around for a long time. I will never purchase a Dell desktop or tower computer, although I will take them as donations or gifts. I can install Linux, get rid of Windows, and make these old computers useful again. I use them in my home lab as test computers, among other things.

However, Dell uses some non-standard parts that you can't easily replace. When you can find parts (like power supplies and motherboards), they are not cheap. The reason is that those vendors create systems with non-standard power supplies and motherboards that only fit within their own non-standard cases. This is a strategy used to keep revenues up. If you can't find these parts on the open market, you must go to the original manufacturer and pay inflated, if not exorbitant, prices.

As one example, the Dell Optiplex I have uses a motherboard, case, and power supply that do not meet generally accepted standards for physical compatibility. In other words, a Dell motherboard or power supply would not fit in a standard case that I can purchase at the local computer store or Amazon. Those parts would not fit in a gaming case that my grandkids would use. The holes for mounting the motherboard and power supply would not align. The power supply would not fit the space available in the standard case. The PCI card slots and back panel connectors on the motherboard would be in the wrong place for a standard case, and the power supply connectors would not match those on a standard motherboard.

Eventually, one or more of those non-standard parts will fail, and you won't be able to find a replacement at all, or at least not for a reasonable price. At that point, it makes sense to dispose of the old computer and purchase a new one.

Standard builds

Let's explore what using standardized parts can do for building computers, their longevity, and how that applies to the gaming computers that I am helping my grandkids with.

Most motherboards are standardized. They come in standard form factors such as microATX, ATX, and extended ATX. All of these have mounting holes in standard locations. Many of the locations overlap, so the holes for ATX motherboards align with many of the mounting holes used on extended ATX motherboards. This means that you can always use a case with holes drilled for standard motherboard locations, whichever of those motherboards you choose. These motherboards have standard power connectors, which means you can use them with any standard power supply.

For their birthdays, I sent both of my grandkids a gaming computer case with standardized motherboard mounting holes. These holes have standard threads, so they accept the brass standoffs used with any motherboard. The standoffs screw into the case, and they themselves have standard threaded holes that fit standard motherboard mounting screws.

The result of all this is that they can install any standard motherboard in any standard case using standard fasteners with any standard power supply.

Note that memory, processors, and add-in cards are all standardized, but they must be compatible with the motherboard. So memory for an old motherboard may no longer be available. You would need a new motherboard, memory, and processor in such a case. But the rest of the computer is still perfectly good.

As I have told Mint and Kasen, building (or purchasing) a computer with standard parts means never having to buy a whole new computer. The good case I gave them will never need replacement. Components may fail over time, but they only need to replace the defective parts. This continuous renewal of standardized parts will allow those computers to last a lifetime at minimal cost. If one component fails, just replace that part and recycle the defective one.

This also significantly reduces the amount of material you need to recycle or otherwise add to the landfills.

Recycling old computer parts

I am fortunate to live in a place that provides curbside recycling pickup. Although that curbside pickup does not include electronic devices, multiple locations around the area do take electronics for recycling, and I live close to one. I have taken many loads of old, unusable electronics to that recycling center, including my computers' defective parts. But never an entire computer.

I collect those defective parts in old cardboard boxes, sorted by type—electronics in one, metal in another, batteries in a third, and so on. This corresponds to the collection points at the recycling center. When a box or two get full, I take them for recycling.

Some final thoughts

Even after a good deal of research for this article (and for my own edification in the past), it is very difficult to determine where recycled computers and computer parts actually go. The website for our recycling center indicates that the outcome for each type of recycled material depends on its economic value. Computers contain relatively large amounts of valuable metals and rare earth elements, so they do get recycled.

The issue of whether such recycling gets performed in ways that are healthy for the people involved and the planet itself is another story. So far, I have been unable to determine where electronics destined for recycling go from here. I have decided that I need to do my part while working to ensure the rest of the recycling chain gets set up and functions appropriately.

The best option for the planet is to keep computers running as long as possible. Replacing only defective components as they go bad can keep a computer running for years longer than the currently accepted lifespan and significantly reduces the amount of electronic waste that we dump in landfills or that needs recycling.

And, of course, use Linux so your computers won't slow down.

Don't throw away your old computer. Skip the landfill and revive it with Linux.


4 cheat sheets I can't live without

Wed, 04/20/2022 - 21:30
4 cheat sheets I can't live without Amrita Sakthivel Wed, 04/20/2022 - 09:30

As a technical writer working on OpenShift documentation, I use a number of tools in the documentation workflow. I love cheat sheets, as they are handy references that make my life easier and workflow more efficient.

Cheat sheets help you work smarter. Here is my compilation of four cheat sheets that I find useful.

Atom cheat sheet

Atom is a great open source text editor that I use every day in my documentation work. There is a GitHub repository where you can download a PDF version of the README file providing Atom shortcuts for Windows and Linux. I often use Ctrl+F to search for a specific word or sentence in a file. I also use Ctrl+Shift+F to search through all files in an entire project.

Git cheat sheet

I work in a docs-as-code format, and Git is an open source, distributed version control system. My team uses a Git repository, and I contribute through pull requests. I use Git in my terminal. This insightful Git cheat sheet has a list of handy commands that I use every day.
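
To give you an idea of what ends up on that cheat sheet, these are the kinds of commands I reach for in a typical pull-request workflow (the repository URL, file path, and branch names here are only placeholders):

$ git clone git@github.com:example/product-docs.git   # grab a copy of the repository
$ git checkout -b fix-install-typo                    # create a topic branch for the change
$ git add modules/installing.adoc                     # stage the edited file
$ git commit -m "Fix typo in the installation module"
$ git push origin fix-install-typo                    # push the branch, then open the pull request
$ git pull --rebase origin main                       # keep the branch current with the main branch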

Linux cheat sheet

A major portion of my work starts at the Linux terminal on my Fedora workstation. I always keep a Linux cheat sheet open for reference. There are also Linux cheat sheets that focus on users and permissions and firewalls.
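
For example, the users, permissions, and firewall commands I look up most often run along these lines (the file and account names are placeholders, and firewall-cmd assumes a Fedora or RHEL system running firewalld):

$ ls -l report.adoc                   # check a file's owner, group, and permissions
$ chmod 644 report.adoc               # owner read/write, everyone else read-only
$ sudo chown amrita:docs report.adoc  # change the owner and group
$ sudo firewall-cmd --list-all        # show the active firewalld zone and its rules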

OpenShift cheat sheet

This cheat sheet for developers is very handy for learning about OpenShift. It has reminders on how to build, deploy, and manage an application on the OpenShift platform, along with basic commands and examples, and it is easy to understand.
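
To give you a taste of what it covers, a basic build-and-deploy loop with the oc client looks something like this (the cluster URL is a placeholder, and the sample application is the commonly used sclorg/nodejs-ex repository):

$ oc login https://api.example-cluster.com:6443               # authenticate to the cluster
$ oc new-project docs-demo                                    # create a project to work in
$ oc new-app nodejs~https://github.com/sclorg/nodejs-ex.git   # build and deploy from source
$ oc get pods                                                 # watch the build and application pods come up
$ oc logs -f bc/nodejs-ex                                     # follow the build log
$ oc expose service/nodejs-ex                                 # create a route so the app is reachable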

More cheating

If you have any cheat sheets that you are especially fond of, please send them my way! I am always looking for ways to make my workflow easier and more intuitive.

Cheat sheets help you work smarter. Here are my go-to cheat sheets for open source technology.


Get started with NetworkManager on Linux

Wed, 04/20/2022 - 21:25
Get started with NetworkManager on Linux David Both Wed, 04/20/2022 - 09:25

Most current Linux distributions use NetworkManager for creating and managing network connections. That means I need to understand it as a system administrator. In a series of articles, I will share what I've learned to date and why I think NetworkManager is an improvement over past options.

Red Hat introduced NetworkManager in 2004 to simplify and automate network configuration and connections, especially wireless connections. The intent was to relieve users from the task of manually configuring each new wireless network before using it. NetworkManager can manage wired network connections without interface configuration files, although it uses network connection files for wireless connections.

In this article, I'll review what NetworkManager is and how to use it to view network connections and devices for Linux hosts. I'll even solve a couple of problems in the process.

What NetworkManager replaces

NetworkManager is a replacement for previous network management tools. The original interface configuration command, ifconfig, and its interface configuration files are obsolete. You can see this in the ifconfig man pages, which contain a note stating just that.

The ip command replaces the ifconfig command and performs essentially the same tasks. The two commands have coexisted for some time now, allowing sysadmins to use either one, which keeps scripts that depend on ifconfig functional. Coexistence or not, the ifconfig command is obsolete, and NetworkManager has made it so in practice.
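
If you still type ifconfig out of habit, the translation to the ip command is straightforward. For example (using enp4s0, one of the interfaces on the router shown later in this article, as a stand-in for your own interface name):

# ifconfig enp4s0            # the old net-tools way, now deprecated
# ip address show enp4s0     # the iproute2 equivalent for interface addresses
# ip link show               # lists all interfaces, roughly like "ifconfig -a"
# ip route show              # replaces the old "route" command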

It is now time to rewrite those scripts, because using NetworkManager commands makes the most sense.

How NetworkManager works

NetworkManager is run as a systemd service and is enabled by default. NetworkManager works with D-Bus to detect and configure network interfaces as they are plugged into the Linux computer. This plug-and-play management of network interfaces makes plugging into new networks—both wired and wireless—trivially easy for the user. When previously installed network interfaces are detected during Linux startup, they are treated exactly like a device plugged in after the system is already up and running. Treating all devices as plug-and-play in every instance makes handling those devices easier for the operating system, since there is only one code base to deal with both sets of circumstances.
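
Because NetworkManager is an ordinary systemd service, you can check on it with the usual systemctl commands:

# systemctl status NetworkManager      # confirm the service is active
# systemctl is-enabled NetworkManager  # confirm it starts at boot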

The udev daemon creates an entry for each network interface card (NIC) installed in the system in the network rules file. D-Bus signals the presence of a new network device—wired or wireless—to NetworkManager. NetworkManager then listens to the traffic on the D-Bus and responds by creating a configuration for this new device. Such a configuration is, by default, stored only in RAM and is not permanent. It must be created every time the computer is booted.

NetworkManager uses the information from D-Bus to initialize each NIC. It first looks for configuration files that provide a more permanent static configuration. When notified of a new device, NetworkManager checks for the existence of old network interface configuration files (ifcfg-*) in /etc/sysconfig/network-scripts. The ifcfg-rh plugin allows the use of these legacy files for backward compatibility.

Next, NetworkManager looks for its own connection files in the /etc/NetworkManager/system-connections directory. Most distributions, including Fedora, store their network connection files there, using the network's name as the file name.

For example, my System76 Oryx Pro laptop originally used Pop!_OS. I have replaced this with Fedora, which is currently at release 35, and each wireless connection I have made with it has a file in /etc/NetworkManager/system-connections. These maintain a record of the service set identifier (SSID) and wireless passwords for each network. The Dynamic Host Configuration Protocol (DHCP) server in the wireless router provides the rest of the network configuration data for these wireless connections. For security purposes, because these files contain passwords, they are readable and writable only by the root user, just like the /etc/shadow system account file.
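
You can verify both the location and the restrictive permissions yourself. Each connection file should show permissions of -rw------- and be owned by root, and recent NetworkManager versions append a .nmconnection extension to the network name:

# ls -l /etc/NetworkManager/system-connections/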

The /etc/NetworkManager/system-connections directory on that laptop contained files for the wired network as well as each of the wireless networks I connected with. The structure of these files is different from the old ifcfg files, but they are in ASCII plain text format and are readable and easily understandable.

This process is sequence sensitive. The first set of configuration files found is used. If no configuration files are found, NetworkManager generates a configuration using data from a DHCP server. If an interface configuration file does not exist, plugging in a new device or connecting with a new wireless network causes udev to notify NetworkManager of the new device or wireless connection. In Fedora up through release 28, NetworkManager created a new interface configuration file in response. Beginning with Fedora 29, NetworkManager creates only the connection and no longer creates an interface configuration file.

If no configuration files or DHCP server is found, no network connection is possible.

Viewing interface configuration

NetworkManager's command-line interface program, nmcli, provides several options to determine the current state of any network interface hardware installed in the host as well as currently active connections. The nmcli program can manage networking on any host, whether it uses a graphical user interface (GUI) or not, so it can also manage remote hosts over a secure shell (SSH) connection. It works on both wired and wireless connections.

I'll start with some basic information for using the nmcli tool. I'm using a Fedora system I have set up as a router, since an example with multiple network interfaces will be more interesting than a simple workstation host with only a single interface.

I'll start with the easiest command, nmcli with no options. This simple command shows information similar to the now obsolete ifconfig command, including the name and model of the NIC, the media access control (MAC) address and internet protocol (IP) address, and which NIC is configured as the default gateway. It also shows the DNS configuration for each interface.

The nmcli command requires admin privileges. Most distributions recommend that you use sudo, but I just switch to the root user.

$ su -
# nmcli
enp4s0: connected to enp4s0
        "Realtek RTL8111/8168/8411"
        ethernet (r8169), 84:16:F9:04:44:03, hw, mtu 1500
        ip4 default, ip6 default
        inet4 45.20.209.41/29
        route4 0.0.0.0/0
        route4 45.20.209.40/29
        inet6 2600:1700:7c0:860:8616:f9ff:fe04:4403/64
        inet6 fe80::8616:f9ff:fe04:4403/64
        route6 2600:1700:7c0:860::/64
        route6 ::/0

enp1s0: connected to enp1s0
        "Realtek RTL8111/8168/8411"
        ethernet (r8169), 84:16:F9:03:E9:89, hw, mtu 1500
        inet4 192.168.10.1/24
        route4 192.168.10.0/24
        inet6 fe80::8616:f9ff:fe03:e989/64
        route6 fe80::/64

enp2s0: connected to enp2s0
        "Realtek RTL8111/8168/8411"
        ethernet (r8169), 84:16:F9:03:FD:85, hw, mtu 1500
        inet4 192.168.0.254/24
        route4 192.168.0.0/24
        inet6 fe80::8616:f9ff:fe03:fd85/64
        route6 fe80::/64

eno1: unavailable
        "Intel I219-V"
        ethernet (e1000e), 04:D9:F5:1C:D5:C5, hw, mtu 1500

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 192.168.0.52 8.8.8.8 8.8.4.4
        interface: enp4s0

        servers: 192.168.0.52 8.8.8.8
        interface: enp1s0

        servers: 192.168.0.52 8.8.8.8
        interface: enp2s0

Use the command nmcli device show to get complete information about known devices and nmcli connection show to get an overview of active connection profiles. Consult nmcli(1) and nmcli-examples(7) manual pages for complete usage details. You can also issue the help command, nmcli -h, as the root user to view the basic top-level nmcli commands:

# nmcli -h
Usage: nmcli [OPTIONS] OBJECT { COMMAND | help }

OPTIONS
  -a, --ask                                ask for missing parameters
  -c, --colors auto|yes|no                 whether to use colors in output
  -e, --escape yes|no                      escape columns separators in values
  -f, --fields <field,...>|all|common      specify fields to output
  -g, --get-values <field,...>|all|common  shortcut for -m tabular -t -f
  -h, --help                               print this help
  -m, --mode tabular|multiline             output mode
  -o, --overview                           overview mode
  -p, --pretty                             pretty output
  -s, --show-secrets                       allow displaying passwords
  -t, --terse                              terse output
  -v, --version                            show program version
  -w, --wait <seconds>                     set timeout waiting for finishing operations

OBJECT
  g[eneral]       NetworkManager's general status and operations
  n[etworking]    overall networking control
  r[adio]         NetworkManager radio switches
  c[onnection]    NetworkManager's connections
  d[evice]        devices managed by NetworkManager
  a[gent]         NetworkManager secret agent or polkit agent
  m[onitor]       monitor NetworkManager changes

Note that the objects can be spelled out or abbreviated down to the first character. These objects are all unique, so only the first character is required to specify any single object.

Try nmcli g to view the general status.

# nmcli g
STATE      CONNECTIVITY  WIFI-HW  WIFI     WWAN-HW  WWAN    
connected  full          enabled  enabled  enabled  enabled

That output does not show very much. I also know that the host, in this case, has no WiFi hardware, so this is a misleading result. You should not use the nmcli g command for that reason. Better object commands are c[onnection] and d[evice], which are the ones I use most frequently.

# nmcli c
NAME         UUID                                  TYPE      DEVICE
enp4s0       b325fd44-30b3-c744-3fc9-e154b78e8c82  ethernet  enp4s0
enp1s0       c0ab6b8c-0eac-a1b4-1c47-efe4b2d1191f  ethernet  enp1s0
enp2s0       8c6fd7b1-ab62-a383-5b96-46e083e04bb1  ethernet  enp2s0
enp0s20f0u7  0f5427bb-f8b1-5d51-8f74-ac246b0b00c5  ethernet  --    
enp1s0       abf4c85b-57cc-4484-4fa9-b4a71689c359  ethernet  --    
 
# nmcli d
DEVICE  TYPE      STATE        CONNECTION
enp4s0  ethernet  connected    enp4s0    
enp1s0  ethernet  connected    enp1s0    
enp2s0  ethernet  connected    enp2s0    
eno1    ethernet  unavailable  --        
lo      loopback  unmanaged    --        

There is a lot of really interesting information here. Notice that the last two entries from the c object command have no entry in the device column. This could mean that the connections are not active, that the devices do not exist, or that there are one or more configuration errors.

The additional information we get using the d object command does not even show the enp0s20f0u7 device. It also shows device eno1 (a motherboard device), which was not displayed using the c object command.

Your output should look more like this, though the device name will be different; it depends on the specific location on the PCI bus that the NIC is connected to.

# nmcli c
NAME                UUID                                  TYPE      DEVICE
Wired connection 1  6e6f63b9-6d9e-3d13-a3cd-d54b4ca2c3d2  ethernet  enp0s3
# nmcli d
DEVICE  TYPE      STATE      CONNECTION        
enp0s3  ethernet  connected  Wired connection 1
lo      loopback  unmanaged  --                

It seems that I have a couple of anomalies to explore. First, I want to know what device enp0s20f0u7 is in the connection list. Since NetworkManager does not recognize this device in the device list, there is probably a network configuration file in /etc/sysconfig/network-scripts even though no such hardware device exists on the host. I checked that directory, found the interface configuration file, and displayed the contents.

# ls -l
total 20
-rw-r--r-- 1 root root 352 Jan  2  2021 ifcfg-eno1
-rw-r--r-- 1 root root 419 Jan  5  2021 ifcfg-enp0s20f0u7
-rw-r--r-- 1 root root 381 Jan 11  2021 ifcfg-enp1s0
-rw-r--r-- 1 root root 507 Jul 27  2020 ifcfg-enp2s0
-rw-r--r-- 1 root root 453 Jul 27  2020 ifcfg-enp4s0

# cat ifcfg-enp0s20f0u7
# Interface configuration file for ifcfg-enp0s20f0u7
# This is a USB Gb Ethernet dongle
# This interface is for the wireless routers
# Correct as of 20210105
TYPE="Ethernet"
BOOTPROTO="static"
NM_CONTROLLED="yes"
DEFROUTE="no"
NAME=enp0s20f0u7
UUID="fa2117dd-6c7a-44e0-9c9d-9c662716a352"
ONBOOT="yes"
HWADDR=8c:ae:4c:ff:8b:3a
IPADDR=192.168.10.1
PREFIX=24
DNS1=192.168.0.52
DNS2=8.8.8.8

After looking at this file, I recalled that I had used a USB Gigabit dongle for a while because the motherboard NIC installed on that host had apparently failed. That was a quick fix, and I later installed a new NIC on the PCIe motherboard bus, so I could remove this interface configuration file. I did not delete it, however; I moved it to the /root directory in case I ever need it again.

Notice the comments I used to ensure that my future self or another system administrator would have some understanding of why this file exists.

The second anomaly is why the entry for enp1s0 appears twice. This can only occur when the NIC name is specified in more than one interface configuration file. So I tried the following steps, and sure enough, enp1s0 erroneously appears in the ifcfg-eno1 configuration file as well as the ifcfg-enp1s0 file.

# grep enp1s0 *
ifcfg-eno1:NAME=enp1s0
ifcfg-enp1s0:# Interface configuration file for enp1s0 / 192.168.10.1
ifcfg-enp1s0:NAME=enp1s0

# cat ifcfg-eno1
## Interface configuration file for eno1 / 192.168.10.1
## This interface is for the wireless routers
## Correct as of 20200727
TYPE="Ethernet"
BOOTPROTO="static"
NM_CONTROLLED="yes"
DEFROUTE="no"
NAME=enp1s0
ONBOOT="yes"
HWADDR=04:d9:f5:1c:d5:c5
IPADDR=192.168.10.1
PREFIX=24
DNS1=192.168.0.52
DNS2=8.8.8.8

I changed the NAME entry to NAME=eno1 and restarted NetworkManager, since changes to the interface configuration files do not take effect until NetworkManager restarts. The device and connection results now look like this. I am still not using the onboard NIC, which is probably fine now that I have removed the wrong name from the ifcfg-eno1 interface configuration file. Putting that NIC back into service will require downtime for that router.

# systemctl restart NetworkManager
# nmcli d
DEVICE  TYPE      STATE        CONNECTION
enp4s0  ethernet  connected    enp4s0    
enp1s0  ethernet  connected    enp1s0    
enp2s0  ethernet  connected    enp2s0    
eno1    ethernet  unavailable  --        
lo      loopback  unmanaged    --        
# nmcli c
NAME    UUID                                  TYPE      DEVICE
enp4s0  b325fd44-30b3-c744-3fc9-e154b78e8c82  ethernet  enp4s0
enp1s0  c0ab6b8c-0eac-a1b4-1c47-efe4b2d1191f  ethernet  enp1s0
enp2s0  8c6fd7b1-ab62-a383-5b96-46e083e04bb1  ethernet  enp2s0
eno1    abf4c85b-57cc-4484-4fa9-b4a71689c359  ethernet  --  

Another option is to show only the active connections. This is a good option with clean results, but it can also mask other problems if you use it exclusively.

# nmcli connection show --active
NAME    UUID                                  TYPE      DEVICE
enp4s0  b325fd44-30b3-c744-3fc9-e154b78e8c82  ethernet  enp4s0
enp1s0  c0ab6b8c-0eac-a1b4-1c47-efe4b2d1191f  ethernet  enp1s0
enp2s0  8c6fd7b1-ab62-a383-5b96-46e083e04bb1  ethernet  enp2s0

Having changed the device name in the ifcfg-eno1 file to the correct one, I suspect that the motherboard NIC, eno1, will work again. I will experiment with that the next time I have a maintenance session on that host.

Isn't that more interesting than a host with a single NIC? And I found some problems in the process.

Using NetworkManager tools to manage networking is covered in the Red Hat Enterprise Linux (RHEL) 8 document "Configuring and Managing Networking."

Final thoughts

I am a fan of the "if it ain't broke, don't fix it" philosophy. However, even the simplest use of NetworkManager from the command line, viewing the current state of the network devices and connections, has shown me two anomalies in my previous configurations that I had missed. I am now a fan of NetworkManager. The older tools were good, but NetworkManager is better; the additional information it provides is invaluable.

In part 2 of this series, I will discuss managing network interfaces.

Learn what NetworkManager is and how to use it to view network connections and devices for Linux hosts.


7 guides for developing applications on the cloud with Quarkus

Tue, 04/19/2022 - 15:00
7 guides for developing applications on the cloud with Quarkus Daniel Oh Tue, 04/19/2022 - 03:00

Which programming language comes to your mind first for business applications development on the cloud?

If you answered Java, I suggest you experience the benefits of Quarkus.

Bringing Java to the cloud

Of course, the language you think of first depends on how many years you've been working with application development and which industry you're working for. For example, if you are a novice internet-of-things (IoT) edge application developer, you might prefer to use C/C++ or Python to develop code across the cloud and the edge.

On the other hand, if you have more experience fulfilling business requirements on multiple infrastructures, from physical servers to the cloud, Java was more likely your first thought. More than 15 million Java developers all over the world are still struggling both to improve existing business applications and to write new code for common use cases such as web, mobile, cloud, IoT edge, and AI/ML.

The biggest challenge for Java developers is continuing to evolve their Java skillsets as business applications keep moving toward the cloud. For example, developers need to optimize existing and new business applications on the cloud for better developer experiences, higher performance, and easier cloud deployment. Enhancing these applications in Java is much more efficient than starting over with new programming languages (e.g., Python, Go, PHP, and JavaScript) to implement the use cases above.

Java was designed almost 25 years ago for high network throughput and a dynamic, mutable architecture. Ironically, those benefits turned into a significant roadblock to bringing Java applications into cloud environments, especially on Kubernetes with the Linux container technology stack.

This article provides multiple resources to help Java developers overcome these challenges and even make existing business applications more cloud friendly by using a new Kubernetes-native Java stack, Quarkus.

Getting started with Quarkus

In case you haven't tried to scaffold a Java project using Quarkus, here are some quickstarts you can use to launch your application development.

Quarkus enables developers to compile a fast-jar that runs on the Java Virtual Machine (JVM) or a native executable built with GraalVM. Both packaging options improve Java application performance by shortening startup time, improving response time, and reducing the memory footprint. The following quickstarts showcase how to build a native executable using Quarkus that optimizes a containerized application to run on a Kubernetes cluster, for both microservices and serverless functions.
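
As a rough sketch of that workflow, assuming a Maven-based project and the Quarkus CLI installed (command names and flags can vary slightly between Quarkus versions):

$ quarkus create app org.acme:getting-started   # scaffold a new project with the Quarkus CLI
$ cd getting-started
$ ./mvnw quarkus:dev                            # live-coding development mode on the JVM
$ ./mvnw package                                # package the default fast-jar
$ ./mvnw package -Dnative                       # build a native executable with GraalVM or Mandrel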

Helm charts are also one of the preferred ways to standardize the application runtime with regard to build method, Git repository, deployment strategy, and application health check. Quarkus enables Java developers to use a Helm chart to deploy an application in JVM mode and as a native executable. Read the following article to learn how to deploy an application to Kubernetes from scratch using a Quarkus Helm chart.

For IoT edge device development, this article teaches developers how to scale IoT application development by processing reactive data streams on Linux systems using Quarkus as the Java stack.

Last but not least, developers are always looking for better ways to accelerate the development loop in terms of compiling, building, deploying, and testing while code changes locally. However, they also want to expand these experiences to the Kubernetes environment with containerizing applications, remote debugging, remote development, and more. Read the following article to find out how Quarkus solves these challenges for developers in both local and remote Kubernetes clusters.

Conclusion

These articles can teach you how Quarkus enables developers to optimize Java applications for cloud deployment in multiple use cases while also accelerating the development process. For advanced serverless development practices, you can get started with the eBook A guide to Java serverless functions. You can also visit the Quarkus cloud deployment guides.

These resources teach you how to optimize Java applications for the cloud with Quarkus.

(Image by Mapbox Uncharted ERG, CC BY 3.0 US)


3 open source tools for people with learning difficulties

Mon, 04/18/2022 - 15:00
3 open source tools for people with learning difficulties Amar Gandhi Mon, 04/18/2022 - 03:00

Disabilities significantly impact people's lives. As someone with dyspraxia and dyslexia, I can tell you that is true. One thing that mitigates my difficulties is the technology I use, such as a screen-reader and task manager. I've set up an ecosystem of sorts that helps me manage a variety of difficulties that I believe could be useful to you whether or not you have dyspraxia or dyslexia. If you love good software and want to improve how you work, then maybe my workflow will be helpful to you, too.

Nextcloud

(Screenshot by Amar Gandhi, CC BY-SA 4.0)

Nextcloud was the first solution I found out about, from a YouTube channel called The Linux Experiment. Nextcloud is a productivity suite you run on your own server. I set mine up on a Raspberry Pi, but you can find preconfigured versions on Linode, Vultr, or DigitalOcean. Nextcloud can replace Microsoft Office 365 and Google Apps (Docs, Drive, and so on) while also being encrypted, private, and entirely under your control.

Nextcloud, on its own, is basically a file manager and text editor on the Internet. However, because it's structured around plug-ins, it has what is essentially an app store (except that all the apps are free). You can install an office suite (powered by Collabora Office), task manager, contacts, calendar, notes (similar to Notion), podcast application, music player, video conferencing, chat, and much more.

One of the standout things about Nextcloud is its dashboard, where you can see all of your information at once. The dashboard reminds me a little of the Windows 8 start menu, which many people liked. I think Nextcloud's version is more aesthetically pleasing than the old Windows 8 start menu. This is important because it allows me to see my information at a glance. The dashboard lets me take in a lot of data at once and then decide my next course of action.

You can use your Nextcloud environment on any device because its APIs and applications are available on every platform. You can access it from anywhere and store all of your files in one place.

One way that it can work with any operating system is over WebDAV, a technology that uses HTTP and HTTPS to connect to a remote server over standard Internet ports (80 and 443). This means you can add data from Nextcloud to any appropriate application on any operating system, such as a calendaring app to manage your day or a file manager to view files saved on your Nextcloud server.
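
As a concrete example of what that looks like on Linux, here is a rough sketch of mounting Nextcloud files over WebDAV with davfs2 (cloud.example.com and the user name alice are placeholders for your own server and account):

$ sudo dnf install davfs2
$ sudo mkdir -p /mnt/nextcloud
$ sudo mount -t davfs https://cloud.example.com/remote.php/dav/files/alice/ /mnt/nextcloud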


You can also use Nextcloud as a Progressive Web Application, meaning you can install your Nextcloud website as an application on most operating systems (except for Safari on iOS, which does not support web applications). Many operating systems, particularly Windows, Chrome OS, and Linux, treat web applications as native applications. The result is that you can have notifications of new activity on your Nextcloud just as if it was a local desktop application. It also means that Nextcloud's task manager and notes app can follow you everywhere, regardless of what device you're using.

Some operating systems offer integration between your phone and your desktop, allowing you to view Android apps and notifications on your Windows device and even being able to respond to messages and calls on your desktop. For me, the difficulty with this is that my brain gets accustomed to one device, and I forget how notifications and applications work on another device. Nextcloud, however, allows you to have location and SMS notifications between Nextcloud and your Android device. The impact is that you can do all your personal work on one single web application that doesn't change.

This also prevents me from being overwhelmed with multiple tabs. Modern web browsers can have upwards of a hundred tabs, but it's difficult sometimes to remember what each tab is for. With Nextcloud, you can access many apps within one tab.

Photoprism

(Screenshot by Amar Gandhi, CC BY-SA 4.0)

Photoprism is an open source photo gallery and storage repository that relies on Google's TensorFlow technology, which is also used in the Google Photos application. I use Photoprism to store my photos because it tells me the date, place, time, and device the photos were taken on at a glance.

Photoprism is accessible as a Progressive Web Application, which means you can use it on nearly every device. As with Nextcloud, it's also possible to access it using a WebDAV client, even if your device doesn't allow web applications. This enables most devices to treat your Photoprism instance as a native application, so you can upload and download photos directly to and from your device. The interface is the same regardless of which device you use it on, allowing those with learning difficulties to develop muscle memory for each of the applications mentioned here, which makes them far easier to use.

Photoprism is available as a preconfigured image on DigitalOcean, or you can self-host it (primarily as a container application, so be sure to read up on containers before attempting it).
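
If you do want to try self-hosting, a minimal single-container run looks something like this. This is only a sketch under assumptions: the project's own documentation recommends its docker-compose setup, the password and photo path below are placeholders, and I use Podman here although Docker works the same way:

$ podman run -d --name photoprism \
    -p 2342:2342 \
    -e PHOTOPRISM_ADMIN_PASSWORD="change-me-please" \
    -v ~/Pictures:/photoprism/originals \
    docker.io/photoprism/photoprism:latest
# Then open http://localhost:2342 and log in as the admin user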

OpenDyslexic Font

(Screenshot by Amar Gandhi, CC BY-SA 4.0)

OpenDyslexic font is an open source font that uses evenly spaced letters and an italic typeface. It aims to make it easier for people with dyslexia to recognize the direction of letters such as "a" and "d" by giving each letter a weighted bottom.

Of course, whether OpenDyslexic improves readability for you depends entirely on your own perception. It doesn't work for everyone, but there are many open source fonts out there, and it can pay dividends to find a font that works well for you.

Open source accessibility

Making applications that work well for users with learning disabilities, physical disabilities, or just users who have preferences a little different from another user makes open source better for everyone. The ability to customize applications is one of the great strengths of open source. If you're a developer making more options possible, keep up the great work. If you're a user benefitting from all the choices you have in open source applications, let us know about what you use.

Image by:

Opensource.com


What Linux users and packagers need to know about Podman 4.0 on Fedora

Mon, 04/18/2022 - 15:00
What Linux users and packagers need to know about Podman 4.0 on Fedora Lokesh Mandvekar Mon, 04/18/2022 - 03:00

The newly released Podman 4.0 features a complete rewrite of the network stack based on Netavark and Aardvark, which will function alongside the existing Container Networking Interface (CNI) stack.

Netavark is a Rust-based tool for configuring networking for Linux containers that serves as a replacement for the CNI plugins (containernetworking-plugins on Fedora). Aardvark-dns is now the authoritative DNS server for container records. The new stack also brings changes to distribution packaging and to repository availability for Fedora 35.

For Fedora users

Podman v4 is available as an official Fedora package on Fedora 36 and Rawhide. Both Netavark and Aardvark-dns are available as official Fedora packages on Fedora 35 and newer versions and form the default network stack for new installations of Podman 4.0.

On Fedora 36 and newer, fresh installations of Podman v4 will automatically install Aardvark-dns along with Netavark.

To install Podman v4:

$ sudo dnf install podman

To update Podman from an older version to v4:

$ sudo dnf update podman

Because Podman v4 features some breaking changes from Podman v3, Fedora 35 users cannot install Podman v4 using the default repositories. However, if you're eager to give it a try, you can use a Copr repository instead:

$ sudo dnf copr enable rhcontainerbot/podman4

# install or update per your needs
$ sudo dnf install podman

After installation, if you would like to migrate all your containers to use Netavark, you must set network_backend = "netavark" under the [network] section in your containers.conf, typically located at /usr/share/containers/containers.conf.
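
For reference, the relevant snippet looks like this once the backend is set (only the network_backend line changes; you may prefer to copy the file to /etc/containers/containers.conf and edit the copy rather than the version shipped under /usr/share):

[network]
# Switch from the CNI plugins to the new Rust-based Netavark backend
network_backend = "netavark"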

Testing the latest development version

If you would like to test the latest unreleased upstream code, try the podman-next Copr:

$ sudo dnf copr enable rhcontainerbot/podman-next

$ sudo dnf install podman

CAUTION: The podman-next Copr provides the latest unreleased sources of Podman, Netavark, and Aardvark-dns as RPM packages. These will override the versions supplied by the official Fedora packages.

For Fedora packagers

The Fedora packaging sources for Podman are available in Fedora's repository for package maintenance. The main Podman package no longer explicitly depends on containernetworking-plugins. The network stack dependencies are now handled in the containers-common package, which allows for a single point of dependency maintenance for Podman and Buildah.

- containers-common
Requires: container-network-stack
Recommends: netavark

- netavark
Provides: container-network-stack = 2

- containernetworking-plugins
Provides: container-network-stack = 1

This configuration ensures that:

  • New installations of Podman will always install Netavark by default.
  • The containernetworking-plugins package will not conflict with Netavark, and users can install them together.
Listing bundled dependencies

If you need to list the bundled dependencies in your packaging sources, you can process the go.mod file in the upstream source. For example, Fedora's packaging source uses:

$ awk '{print "Provides: bundled(golang("$1")) = "$2}' go.mod | \
sort | uniq | sed -e 's/-/_/g' -e '/bundled(golang())/d' -e '/bundled(golang(go\
|module\|replace\|require))/d'

Netavark and Aardvark-dns

The .tar vendored sources for Netavark and Aardvark-dns are attached as upstream release artifacts. You can then create a Cargo config file that points Cargo to the vendor directory:

tar xvf %{SOURCE}
mkdir -p .cargo
cat >.cargo/config << EOF
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
EOF

The Fedora packaging sources for Netavark and Aardvark-dns are also available in the Fedora Project's repository.

The Fedora packaged versions of the Rust crates that Netavark and Aardvark-dns depend on are frequently out of date (for example, rtnetlink, sha2, zbus, and zvariant) at the time of initial package creation. As a result, Netavark and Aardvark-dns are built using the dependencies vendored upstream, found in the vendor subdirectory.

The netavark binary is installed to /usr/libexec/podman/netavark, while the aardvark-dns binary is installed to /usr/libexec/podman/aardvark-dns.

The netavark package has a Recommends on the aardvark-dns package. The aardvark-dns package will be installed by default with Netavark, but Netavark will be functional without it.

Listing bundled dependencies

If you need to list the bundled dependencies in your packaging sources, you can run the cargo tree command in the upstream source. For example, Fedora's packaging source uses:

$ cargo tree --prefix none |  \
awk '{print "Provides: bundled(crate("$1")) = "$2}' | \
sort | uniq

To learn more

I hope you found these updates helpful. If you have any questions, please feel free to open a discussion on GitHub, or contact me or the other Podman maintainers through Slack, IRC, Matrix, or Discord. Better still, we'd love for you to join our community as a contributor!

New Podman features offer better support for containers and improved performance.

(Image by Máirín Duffy, CC BY-SA 4.0)

