opensource.com


Listen to music on Linux with Rhythmbox

Jim Hall | Sat, 07/16/2022 - 15:00

It's hard for me to work in total silence. I need some kind of background noise, preferably some familiar music. My music-listening needs are pretty simple: I just need a music player that plays my library of MP3 music and streams from a few websites I like to listen to.


I've tried a variety of music players on Linux, but I keep coming back to Rhythmbox. Rhythmbox is a music-playing application for GNOME. If your distribution uses GNOME, it probably also includes Rhythmbox. It's simple, it plays my local music library, and it streams from the internet radio websites I like.

Listen to streaming music on Linux

Rhythmbox supports listening to music from several streaming services. If you have a Last.fm or Libre.fm account, you can click the tab on the left to log in. Or, if you want to listen to streaming radio stations, click the Radio tab on the left to stream from one of the pre-configured internet radio websites. I usually like to listen to trance music while I'm writing code, and HBR1 Tranceponder is one of my favorite internet radio stations:

Streaming HBR1 Tranceponder in Rhythmbox (image: Jim Hall, license: CC BY SA)

Listen to my music library on Linux

I've collected a large MP3 music library over the years. Since the MP3 patents expired in the US several years ago, the format is now unencumbered and plays well with Linux.

I keep my 20-gigabyte MP3 music library outside my home directory, in /usr/local/music. To import music into Rhythmbox, click the Import button, select the /usr/local/music directory (or wherever you've saved your music library), and let Rhythmbox identify the MP3 music collection. When it's done, click the Import listed tracks button to complete the import process.

Rhythmbox starts with an empty music library. Click the Import button to add music to your library. (Jim Hall, CC BY SA)

After Rhythmbox identifies the new music files, you can add them to your library (Jim Hall, CC BY SA)

Rhythmbox plays my music collection and organizes songs by genre, artist, and album so I can quickly find the music I want to listen to.

Listening to my music library in Rhythmbox (Jim Hall, CC BY-SA)

The beat goes on

I like Rhythmbox as my music player on Linux because it's simple and stays out of my way. And listening to music helps me tune out everyday noise, making my day go by just a bit faster.

Here's how I like to listen to streaming music and MP3 playlists with Rhythmbox on GNOME with Linux.



3 open source GUI disk usage analyzers for Linux

Don Watkins | Fri, 07/15/2022 - 15:00

Several great options for checking disk usage on your Linux system have a graphical interface. Sometimes a visual representation of disk utilization is easier to understand, and newer users may not be as familiar with the various Linux commands that display storage information. I'm a person who comprehends visual representations more easily than a printout on the command line.

Here are several excellent GUI-based tools to help you understand how your storage capacity is used.

GNOME Disk Usage Analyzer

My Pop!_OS system relies on the GNOME Disk Usage Analyzer, which appears in the menu simply as "Disk Usage Analyzer."

The GNOME Disk Usage Analyzer is also known as Baobab. It scans folders and devices, then reports the disk space used by each item. The graphical representation below is a report on my home directory. I can drill down into each directory by clicking on that item to learn more about the details of the files it contains.

(Don Watkins, CC BY-SA 4.0)

I clicked on my Downloads directory to display how much space files in that directory are consuming on my system.

(Don Watkins, CC BY-SA 4.0)

GNOME Disk Usage Analyzer is licensed with GPL 2.0. It is under continuous development; the latest release was in September 2021.
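Because the application's codename is Baobab, you can also launch it from a terminal, optionally pointing it at the directory you want to scan:

$ baobab /home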

Filelight

There is another graphical option made for the KDE desktop. It is called Filelight, and it visualizes your disk usage as a set of concentric, segmented rings. Initially released in 2004, the project has been under continual development. Its latest release was in December 2021, and the source code is available on GitHub under the GNU Free Documentation License.

Here is a snapshot of my Linux laptop using Filelight.

(Don Watkins, CC BY-SA 4.0)

QDirStat

A third graphical option to consider is QDirStat. It is licensed under GPL v2.0 and can be installed on most Linux distributions.

According to its developers, "QDirStat is a graphical application to show where your disk space has gone and to help you to clean it up." QDirStat is available in packages for Debian, Ubuntu, Fedora, Arch, Manjaro, and SUSE.

(Don Watkins, CC BY-SA 4.0)


I easily installed QDirStat from the command line. It has an intuitive interface and shows what percentage of your file system each item consumes.
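Installation is typically a single command, assuming your distribution packages it under the name qdirstat (which is how it appears in the distributions listed above). For example:

$ sudo dnf install qdirstat    # Fedora
$ sudo apt install qdirstat    # Debian and Ubuntu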

The terminal

Of course, if you don't enjoy graphical applications or need text output for a script, there are commands that analyze disk usage, too. The du and ncdu commands are easy to use and provide a different view (but the same information) of your file system.
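For example, du can summarize a single directory, and ncdu can open an interactive browser of your home directory (the size shown here is just an illustration):

$ du -sh ~/Downloads
4.1G    /home/don/Downloads
$ ncdu ~/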

Wrap up

Today's storage devices are immense, but it is still necessary to be aware of how that capacity is used on your system. Whether you prefer command-line utilities or GUI tools, there are plenty of options available for Linux. Don't let storage space issues get you down—start using these tools today!

For people who prefer visual representations, these GUI-based tools help you understand how your storage capacity is used.



How one European bank embraces open source

Ben Rometsch | Thu, 07/14/2022 - 15:00

I sat down to talk about open source and development with Jindrich Kubat, Head of Development at the center of expertise (COE) at Komerční banka (KB) in the Czech Republic.

Ben Rometsch: What is your official role at KB?

Jindrich Kubat: My official role is Head of Development, COE (center of expertise) at Komerční banka.

Ben: What exactly is your area of responsibility and day-to-day?

Jindrich: As the Development COE lead, I consider the expertise, structure, and capabilities needed to make software development successful across hundreds of developers. We have roughly 600 people in the broader development team, but I'm not directly responsible for managing them. I'm responsible for defining how they will do their job: development best practices, standards unification, and so on.

Ben: What’s your main focus today at KB?

Jindrich: Our transformation project. We have the current bank, which is built on typical legacy banking systems that are huge, distributed monolithic architecture systems. That is where most of our developers (around 400) are working day-to-day as it generates the bulk of the revenue for the bank.

Alongside the legacy applications, we decided to completely modernize to something that we call "Digital Hub" which is replacing the bank's current infrastructure. The front-end applications are native mobile and web applications.

The Digital Hub and new front-end applications are built entirely on microservices so we have to redesign a lot of things and introduce new guidelines for people. All of this together is what we call the "New Digital Bank". It's a massive project!


Ben: Does this transformation stop at some point?

Jindrich: We think about the transformation in two main parts:

  1. "The New Digital Bank" initiative is planned for 5 years and we are about 2.5 years into it already. It started small with a few teams that were ready to adopt the most modern development approaches and it’s growing every year. Today we have about 180 engineers split among 50 teams. Those teams have been hired externally or transferred over from the legacy systems. They are dedicated 100% to “The New Bank” effort.
  2. The other side is culture transformation. We call this “Agile 2.0” which hits the whole team (both legacy systems and new systems teams). Adopting Agile and DevOps is all about adopting a new way of working. This shift is something that we are driving with our legacy system teams as well. While the building will come to an end at some point, the hope is that the new culture will take hold and be the new way of working at KB.

[ Read next: Agile adoption: 6 strategic steps ]

Ben: Beyond agile, have you adopted any new approaches?

Jindrich: Yes, there are three new approaches that we adopted:

  1. Moving to OKRs which aligns well to Agile.
  2. We adopted a collaborative chat application for all meetings, which has helped with remote work.
  3. We have functional "tribes" containing people and skills that each tribe needs to develop and run its own projects or its own applications. This aligns well to the microservice architecture we're moving towards.

Ben: What about tooling, languages and infrastructure? How do you manage that across so many people and teams?

Jindrich: As the COE, we are mostly responsible for the unification of how people work. We are pretty rigorous in this. We developed our own KB framework, which we call Speed, and it is divided into three parts:

  1. First is a Java SDK, built on top of Spring Boot, which is a framework for building cloud applications.
  2. Then CI and CD Pipelines are used to automate deployment. This is built on Jenkins and Argo CD.
  3. Kubernetes is used where we are deploying and running our applications.

We built our own cloud infrastructure, so everything needs to be on-premise today. If a team needs something, they need to ask for it and it is reviewed by my team. If we have something that solves their problem, then we ask them to use that. If not, I’m open to adopting a new technology.

We started with the Speed framework two or three years ago and it is still evolving. The first version was built for monolithic applications and monolithic workloads. We are on version 4 now. The microservice approach was implemented in version 3.

The disadvantage is that we have to follow some product updates and life cycles which is really painful for us. Kubernetes, for example, has three major versions per year.

Going forward we want to make it more independent of these cycles and automate upgrades in the future. Interestingly, our parent company adopted OpenShift, which we are keeping an eye on. We’ll see if we unify the frameworks or not in the future.

Ben: It sounds like you are using a lot of open source technologies in the new platform. Have you always done that?

Jindrich Kubat: No, we used a lot of the typical products like Oracle and Sun Microsystems. For the new digital bank we decided to adopt more open source technologies like Kafka and Flagsmith. This is great because they are free, but we are responsible for keeping it up and running. This is a completely new mindset for teams.

Ben: Did you have to fight to use those products internally?

Jindrich: No, they sort of just accepted that because they know the reasons. If we show them the contract with Oracle, how expensive it is, there’s a big business case to adopt open source. Even if you have to have a few more people to manage the infrastructure, it has a positive impact on our costs.

Ben: What about feature flags? Was that something that you knew you needed before you started?

Jindrich: That's a good question because, you know, even in the current bank (the legacy systems), people are using feature flags, but they've developed their own systems. They're very simple systems that manage a configuration file. And it was just because the legacy system was only deployed three times per year. So you can imagine how difficult it was to keep all of the promises from other teams. So they were really just there to keep the enterprise release management and testing on time and coordinated. It was easy to switch on or switch off some features that weren't ready for deployment.

But for the New Digital Bank we have microservices and continuous integration. We deploy on a daily basis to non-production environments. Interestingly, we only have two non-production environments. One of them is only for testing during the build and the other is the final non-production environment (staging).

Ben: Was it a conscious decision to have fewer environments?

Jindrich: Yes, this is an approach I brought to the bank where we completely redesigned some environments, and how we deploy them. The goal is to get to production as fast as possible and to keep production and non-production as close as possible.

Ben: The "holy grail" is to have just one environment— production. Surely that's not possible in your business?

Jindrich: I have some experience with that, where you just test everything in production. But in banking, that's impossible. A lot of our developers can't even access the production environment. There are regulations. For example, one regulation says that you can't have a testing account in production because you are operating with real money. Everything must be "real" in production.

Ben: What do you do to mitigate that?

Jindrich: You keep the number of non-production environments as small as possible.

Ben: When you were selecting a feature flagging system, did you consider SaaS services or was that something that you immediately ruled out?

Jindrich: Yeah, we decided to evaluate three systems. Flagsmith, LaunchDarkly, and building it ourselves. We did three “proof of concepts”. First was with LaunchDarkly, then Flagsmith and then we looked at our own homebuilt system. We decided on Flagsmith not just because of the system’s flexibility, but also the great support. The fact that you guys are open source and the great documentation also helped in our decision.

Ben: How are things going today with feature flags?

Jindrich: We have been using feature flags for one year in our non-production environments which has been working great. As for the production environment, it took a while from a security perspective to implement feature flags in our production environment. The reason it took so long is because the person originally responsible for bringing feature flags to KB left. Then the security team changed their criteria for evaluation. Finally on the third time, we got it into production after passing all of the penetration testing and requirements!

Ben: Making changes at a bank is not for the faint of heart!

Jindrich: Yes, because of the security standards the entire banking industry is really sensitive to this. Now we are up and running, focusing on the permission models, and logging everything so it is auditable.

Ben: Has it changed your workflow?

Jindrich: Yes! We decided to start with the frontend, especially mobile teams. People have been waiting for this capability because they’ve used feature flags in previous roles. We have people on the team pushing to adopt because without a feature flag system, everything has to wait until it is fully finished. Now they can deploy to production and more teams are onboarding to The New Bank. For those teams in a microservice set-up, it is crucial for teams to be more independent.

So far, we have about 85 engineers working with Flagsmith in production. That is almost half of The New Digital Bank already!

Ben: Are you using flags on server-side as well or just the frontend?

Jindrich: So far just on the frontend, but we are getting ready to roll that out with the backend systems that leverage microservices. That will be very soon!

Ben: Are there any ways you want to use Flagsmith in the future?

Jindrich: Yes. We're looking at integrating the Flagsmith API into our bug tracker, being able to drive things upstream, and be able to keep the QA teams aligned with the product owners.

My interview with the Head of Development at Komerční banka reveals how the bank is harnessing open source technologies.



5 ways to learn C programming on Linux

Alan Smithee | Thu, 07/14/2022 - 15:00

There are many theories about why the C programming language has endured for as long as it has. Maybe it's the austerity of its syntax or the simplicity of its vocabulary. Or maybe it's that C is often seen as a utilitarian language, something that's rugged and ready to be used as a building material for something that needs no platform because it's going to be its own foundation. C is clearly a powerful language, and I think its longevity has a little something to do with the way it serves as a springboard for other popular technologies. Here are five of my favorite technologies that utilize and rely upon C, and how they can each help you learn more about C yourself.

[ Download the eBook: A guide to tips and tricks for C programming ]

1. GObject and GTK

C is not an object-oriented programming language. It has no class type. Some folks use C++ for object-oriented programming, but others stick with C along with the GObject libraries. The GObject subsystem provides a class structure for C, and the GTK project famously provides widgets accessible through C. Without GTK, there would be no GIMP (for which GTK was developed), GNOME, or hundreds of other popular open source applications.

Learn more

GObject and GTK are excellent ways to start using C for GUI programming. They're well-equipped to get you programming graphical applications using C because they do so much of the "heavy lifting" for you. The classes and data types are defined, the widgets have been made, and all you have to do is put everything together.
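To get a feel for what that looks like in practice, here is a minimal sketch of a GTK 3 window in C. The application ID org.example.hello and the file name hello-gtk.c are just placeholders:

/* Minimal GTK 3 window in C (a sketch, not a full application).
   Compile with something like:
   gcc hello-gtk.c $(pkg-config --cflags --libs gtk+-3.0) -o hello-gtk */
#include <gtk/gtk.h>

static void on_activate(GtkApplication *app, gpointer user_data)
{
    GtkWidget *window = gtk_application_window_new(app);

    gtk_window_set_title(GTK_WINDOW(window), "Hello");
    gtk_window_set_default_size(GTK_WINDOW(window), 200, 100);
    gtk_widget_show_all(window);   /* show the window and everything in it */
}

int main(int argc, char **argv)
{
    /* "org.example.hello" is a placeholder application ID */
    GtkApplication *app = gtk_application_new("org.example.hello", G_APPLICATION_FLAGS_NONE);
    int status;

    g_signal_connect(app, "activate", G_CALLBACK(on_activate), NULL);
    status = g_application_run(G_APPLICATION(app), argc, argv);
    g_object_unref(app);
    return status;
}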

2. Ncurses

If GTK is more than you need, you might decide a terminal user interface (TUI) is more your speed. The ncurses library creates "widgets" in a terminal, creating a kind of graphical application that gets drawn over your terminal window. You can control the interface with your arrow keys, selecting buttons and elements much the same way you might use a GUI application without a mouse.

Learn more

Get started by writing a guessing game in C using the ncurses library as your display.
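If you want to see the library in action before committing to a project, a minimal ncurses program looks something like this sketch (link it against the library, for example gcc hello-curses.c -lncurses):

/* Minimal ncurses example: draw text in the terminal and wait for a key. */
#include <ncurses.h>

int main(void)
{
    initscr();                 /* start curses mode and take over the terminal */
    printw("Hello from ncurses! Press any key to quit.");
    refresh();                 /* push the text to the actual screen */
    getch();                   /* wait for a single keypress */
    endwin();                  /* restore the terminal to its normal mode */
    return 0;
}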

3. Lua and Moonscript

Lua is a scripting language with access to C libraries through a built-in C API. It's a tiny, fast, and simple language with about 30 functions and just a handful of built-in libraries. You can get started with Lua for system automation, game modding and scripting, game development with a frontend like LÖVE, or general application development (like the Howl text editor) using GTK.

Learn more

The nice thing about Lua is that you can start out with it to learn the basic concepts of programming, and then start exploring its C API when you feel brave enough to interface directly with the foundational language. If, on the other hand, you never grow out of Lua, that's OK too. There's a wealth of extra libraries for Lua to make it a great choice for all manner of development.
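When you do reach the point of exploring that C API, the basic embedding pattern is short. This sketch assumes a system-installed Lua; the exact linker flag varies by distribution (for example -llua, -llua5.4, or -llua5.3):

/* Embed the Lua interpreter in a C program and run a line of Lua. */
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void)
{
    lua_State *L = luaL_newstate();   /* create a fresh Lua state */
    luaL_openlibs(L);                 /* load Lua's standard libraries */
    luaL_dostring(L, "print('hello from Lua, embedded in C')");
    lua_close(L);                     /* free the state when done */
    return 0;
}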

4. Cython

Lua isn't the only language that interfaces with C. Cython is a compiler and language designed to make writing C extensions for Python as easy as writing Python code. Essentially, you can write Python and end up with C. The simplest possible example is a one-line file, saved as hello.pyx:

print("hello world")

Create a setup.py script:

from setuptools import setup
from Cython.Build import cythonize

setup(
    ext_modules = cythonize("hello.pyx")
)

Run the setup script:

$ python3 ./setup.py build_ext --inplace

And you end up with a hello.c and hello.cpython-39-x86_64-linux-gnu.so file in the same directory.
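Assuming the build succeeded, you can then import the compiled module like any other Python module, and the module-level print statement runs at import time:

$ python3 -c "import hello"
hello world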

Learn more

The Cython language is a superset of Python with support for C functions and data types. It isn't likely to directly help you learn C, but it opens up new possibilities for the Python developer looking to learn and integrate C code into Python.

5. FreeDOS

The best way to learn more about C is to write code in C, and there's nothing more exciting than writing code you can actually use. The FreeDOS project is an open source implementation of DOS, the predecessor to Windows. You may have already used FreeDOS, either as a handy open source method of running a BIOS updater, or maybe in an emulator to play a classic computer game. You can do a lot more with FreeDOS than that, though. It makes an ideal platform to learn C with a collection of tools that encourage you to write your own commands and simple (or not-so-simple, if you prefer) applications. Of course you can write C code on any OS, but there's a simplicity to FreeDOS that you might find refreshing. The sky's the limit, but even at ground level, you can do some amazingly fun things with C.

Download the eBook

You can learn more about C in our new eBook, and more about C on FreeDOS in our eBook. These are collections of programming articles to help you learn C and to demonstrate how you can implement C in useful ways.

Download our new eBook for tips and tricks for C programming on Linux and FreeDOS.


A guide to productivity management in open source projects

Thabang Mashologu | Wed, 07/13/2022 - 15:00

Open source is one of the most important technology trends of our time. It’s the lifeblood of the digital economy and the preeminent way that software-based innovation happens today. In fact, it’s estimated that over 90% of software released today contains open source libraries.

There's no doubt the open source model is effective and impactful. But is there still room for improvement? When comparing the broader software industry’s processes to that of open source communities, one big gap stands out: productivity management.

By and large, open source project leads and maintainers have been slow to adopt modern productivity and project management practices and tools commonly embraced by startups and enterprises to drive the efficiency and predictability of software development processes. It’s time we examine how the application of these approaches and capabilities can improve the management of open source projects for the better.

Understanding productivity in open source software development

The open source model, at its heart, is community-driven. There is no single definition of success for different communities, so a one-size-fits-all approach to measuring success does not exist. And what we have traditionally thought of as productivity measures for software development, like commit velocity, the number of pull requests approved and merged, and even the lines of code delivered, only tell part of the story.

Open source projects are people-powered. We need to take a holistic and humanistic approach to measuring productivity that goes beyond traditional measures. I think this new approach should focus on the fact that great open source is about communication and coordination among a diverse community of contributors. The level of inclusivity, openness, and transparency within communities impacts how people feel about their participation, resulting in more productive teams.

These and other dimensions of what contributes to productivity on open source teams can be understood and measured with the SPACE framework, which was developed based on learnings from the proprietary world and research conducted by GitHub, the University of Victoria in Canada, and Microsoft. I believe that the SPACE framework has the potential to provide a balanced view of what is happening in open source projects, which would help to drive and optimize collaboration and participation among project team members.

A more accurate productivity framework

The SPACE framework acronym stands for:

  • Satisfaction and well-being
  • Performance
  • Activity
  • Communication and collaboration
  • Efficiency and flow

Satisfaction and well-being refer to how fulfilled developers feel with the team, their tools, and the environment, as well as how healthy and happy they are. Happiness is somewhat underrated as a factor in the success of teams. Still, there is strong evidence of a direct correlation between the way people feel and their productivity. In the open source world, surveying contributors, committers, and maintainers about their preferences and priorities, about what is being done and how, is essential to understanding their attitudes and opinions.

Performance in this context is about evaluating productivity in terms of the outcomes of processes instead of output. Team-level examples are code-review velocity (which captures the speed of reviews) and story points shipped. More holistic measures focus on quality and reliability. For example, was the code written in a way that ensures it will reliably do what it is supposed to do? Are there a lot of bugs in the software? Is industry adoption of the software growing?

Open source activity focuses on measuring design and development and CI/CD metrics, like build, test, deployments, releases, and infrastructure utilization. Example metrics for open source projects are the number of pull requests, commits, code reviews completed, build releases, and project documents created.

Communication and collaboration capture how people and teams work together, communicate, and coordinate efforts with high transparency and awareness within and between teams. Metrics in this area focus on the vibrancy of forums, as measured by the number of posts, messages, questions asked and answered, and project meetings held.

Finally, efficiency and flow refer to the ability to complete work and progress towards it with minimal interruptions and delays. At the individual developer level, this is all about getting into a flow that allows complex tasks to be completed with minimal distractions, interruptions, or context switching. At the project team level, this is about optimizing flow to minimize the delays and handoffs that take place in the steps needed to take software from an idea or feature request to being written into code. Metrics are built around process delays, handoffs, time on task, and the ease of project contributions and integrations.

Applying the SPACE framework to open source teams

Here are some sample metrics to illustrate how the SPACE framework could be used for an open source project.

Satisfaction and well-being
  • Contributor satisfaction
  • Community sentiment
  • Community growth & diversity
Performance
  • Code review velocity
  • Story points shipped
  • Absence of bugs
  • Industry adoption
Activity
  • Number of pull requests
  • Number of commits
  • Number of code reviews
  • Number of builds
  • Number of releases
  • Number of docs created
Communication and collaboration
  • Forum posts
  • Messages
  • Questions asked & answered
  • Meetings
Efficiency and flow
  • Code review timing
  • Process delays & handoffs
  • Ease of contributions/integration

Tools for managing open source projects must be fit for purpose

There is an opportunity to leverage the tools and approaches that startups and high-growth organizations use to understand and improve open source development efficiency, all while putting open source's core tenets, like openness and transparency, into practice.

Tools used by open source teams should enable maintainers and contributors to be productive and successful, while allowing the projects to be open and welcoming to everyone, including developers who may work in multiple organizations and even competing companies. It is also critical to provide an excellent onboarding experience for new contributors and accelerate their time-to-understanding and time-to-contribution.

Tools for managing open source projects should transparently manage data and accurately reflect project progress based on where the collaboration happens: in the codebase and repositories. Open source teams should be able to see real-time updates based on updates to issues and pull requests. And, project leads and maintainers should have the flexibility to decide whether access to the project should be completely public or if it should be limited to trusted individuals for issues or information of a more sensitive nature.

Ideally, tools should allow self-governed project teams to streamline coordination, processes, and workflows and eliminate repetitive tasks through automation. This reduces human friction and empowers maintainers and contributors to focus on what really matters: contributing to the ecosystem or community and delivering releases faster and more reliably.

The tools teams use should also support collaboration from people wherever they are. Since open source teams work in a remote and asynchronous world, tools should be able to integrate everyone’s contributions wherever and whenever they occur. These efforts should be enabled by great documentation stored in a central and easily accessible place. And finally, the tools should enable continuous improvement based on the types of frameworks and measures of productivity outlined above.

Features that allow for increased transparency are especially important for open source projects. Tools should help keep community members aligned and working towards a common goal with a project roadmap that shows work is in flight, progress updates, and predicted end dates.

Conclusion

Open source projects are a benefit to us all, and as such, it benefits everyone to make the processes that exist within these projects as productive as possible.

By leveraging concepts like the SPACE framework and modern tools, we can ditch the spreadsheets and manual ways of tracking, measuring, and improving productivity. We can adapt approaches that power software development in the proprietary world and leverage modern tools that can help increase the quality, reliability, and predictability of open source software releases. Open source is far too important to leave to anything less.

Enhance productivity by applying the SPACE framework to open source teams.



How I create music playlists on Linux

Rikard Grossma… | Wed, 07/13/2022 - 15:00

I recently wrote a C program in Linux to create a smaller random selection of MP3 files from my extensive MP3 library. The program goes through a directory containing my MP3 library, and then creates a directory with a random, smaller selection of songs. I then copy the MP3 files to my smartphone to listen to them on the go.

Sweden is a sparsely populated country with many rural areas where you don't have full cell phone coverage. That's one reason for having MP3 files on a smartphone. Another reason is that I don't always have the money for a streaming service, so I like to have my own copies of the songs I enjoy.

You can download my application from its Git repository. I wrote it for Linux specifically in part because it's easy to find well-tested file I/O routines on Linux. Many years ago, I tried writing the same program on Windows using proprietary C libraries, and I got stuck trying to get the file copying routine to work. Linux gives the user easy and direct access to the file system.

In the spirit of open source, it didn't take much searching before I found file I/O code for Linux to inspire me. I also found some code for allocating memory which inspired me. I wrote the code for random number generation.

The program works as described here:

  1. It asks for the source and destination directory.
  2. It asks for the number of files in the directory of MP3 files.
  3. It asks for the percentage (from 1.0 to 88.0 percent) of your collection that you wish to copy. You can also enter a number like 12.5%, if you have a collection of 1000 files and wish to copy 125 files from your collection rather than 120 files. I put the cap at 88% because copying more than 88% of your library would mostly generate a collection similar to your base collection. Of course, the code is open source, so you can freely modify it to your liking.
  4. It allocates memory using pointers and malloc. Memory is required for several actions, including the list of strings representing the files in your music collection. There is also a list to hold the randomly generated numbers.
  5. It generates a list of random numbers in the range of all the files (for example, 1 to 1000, if the collection has 1000 files).
  6. It copies the files.

Some of these parts are simpler than others, but the code is only about 100 lines:

/* The header names and one middle section of this listing were garbled by the
   original page formatting; they are reconstructed here from the steps the
   article describes. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <dirent.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#define BUF_SIZE 4096     /* use buffer of 4096 bytes */
#define OUTPUT_MODE 0700  /* protect output file */
#define MAX_STR_LEN 256

int main(void)
{
  DIR *d;
  struct dirent *dir;
  char strTemp[256], srcFile[256], dstFile[256], srcDir[256], dstDir[256];
  char **ptrFileLst;
  char buffer[BUF_SIZE];
  int nrOfStrs=-1, srcFileDesc, dstFileDesc, readByteCount, writeByteCount, numFiles;
  int indPtrFileAcc, q;
  float nrFilesCopy;
  //vars for generatingRandNumList
  int i, k, curRanNum, curLstInd, numFound, numsToGen, largNumRange;
  int *numLst;
  float procFilesCopy;

  printf("Enter name of source Directory\n");
  scanf("%s", srcDir);
  printf("Enter name of destination Directory\n");
  scanf("%s", dstDir);
  printf("How many files does the directory with mp3 files contain?\n");
  scanf("%d", &numFiles);
  printf("What percent of the files do you wish to make a random selection of\n");
  printf("enter a number between 1 and 88\n");
  scanf("%f", &procFilesCopy);

  //allocate memory for filesList, list of random numbers
  ptrFileLst = (char**) malloc(numFiles * sizeof(char*));
  for (i=0; i<numFiles; i++){
    ptrFileLst[i] = (char*) malloc(MAX_STR_LEN * sizeof(char));
  }
  nrFilesCopy = (procFilesCopy/100) * numFiles;
  numsToGen = (int) nrFilesCopy;
  largNumRange = numFiles;
  numLst = (int*) malloc(numsToGen * sizeof(int));

  //generate a list of unique random numbers in the range of the collection
  srand(time(NULL));
  curLstInd = -1;
  for (i=0; i<numsToGen; i++){
    do {
      curRanNum = rand() % largNumRange;
      numFound = 0;
      for (k=0; k<=curLstInd; k++){
        if (numLst[k] == curRanNum) numFound = 1;
      }
    } while (numFound);
    curLstInd++;
    numLst[curLstInd] = curRanNum;
  }

  //read the file names in the source directory
  d = opendir(srcDir);
  if (d){
    while ((dir = readdir(d)) != NULL){
      strcpy(strTemp, dir->d_name);
      if(strTemp[0]!='.'){
        nrOfStrs++;
        strcpy(ptrFileLst[nrOfStrs], strTemp);
      }
    }
    closedir(d);
  }

  //copy the randomly selected files
  for(q=0; q<=curLstInd; q++){
    indPtrFileAcc = numLst[q];
    strcpy(srcFile, srcDir);
    strcat(srcFile, "/");
    strcat(srcFile, ptrFileLst[indPtrFileAcc]);
    strcpy(dstFile, dstDir);
    strcat(dstFile, "/");
    strcat(dstFile, ptrFileLst[indPtrFileAcc]);
    srcFileDesc = open(srcFile, O_RDONLY);
    dstFileDesc = creat(dstFile, OUTPUT_MODE);
    while(1){
      readByteCount = read(srcFileDesc, buffer, BUF_SIZE);
      if(readByteCount<=0) break;
      writeByteCount = write(dstFileDesc, buffer, readByteCount);
      if(writeByteCount<=0) exit(4);
    }
    //close the files
    close(srcFileDesc);
    close(dstFileDesc);
  }
}

This code is possibly the most complex:

while(1){
    readByteCount = read(srcFileDesc, buffer, BUF_SIZE);
    if(readByteCount<=0) break;
    writeByteCount = write(dstFileDesc, buffer, readByteCount);
    if(writeByteCount<=0) exit(4);
}

More Linux resources Linux commands cheat sheet Advanced Linux commands cheat sheet Free online course: RHEL technical overview Linux networking cheat sheet SELinux cheat sheet Linux common commands cheat sheet What are Linux containers? Our latest Linux articles

This reads a chunk of bytes from the source file into the character buffer. The first parameter to the read function is the file descriptor (srcFileDesc). The second parameter is a pointer to the character buffer, declared previously in the program. The last parameter is the maximum number of bytes to read, the size of the buffer (BUF_SIZE, 4096 bytes).

The call returns the number of bytes actually read (readByteCount). The first if clause breaks out of the loop if a number of 0 or less is returned.

If the number of bytes read is 0, then the end of the file has been reached, the copy of that file is done, and the loop breaks so the next file can be written. If the number of bytes read is less than 0, then an error has occurred and the program exits.

After a chunk is read, it is written out. The write function takes three arguments: the file descriptor to write to, the character buffer, and the number of bytes to write (the number just read). The function returns the number of bytes written.

If 0 or fewer bytes are written, then a write error has occurred, so the second if clause exits the program.

The while loop reads and copies the file, up to 4096 bytes at a time, until the file is copied. When the copying is done, you can copy the directory of randomly selected MP3 files to your smartphone.

The copy routine is fairly efficient because it uses the Linux file system calls directly.

Improving the code

This program is simple, and it could be improved in terms of its user interface and flexibility. For instance, you could implement a function that calculates the number of files in the source directory so you don't have to enter it manually (see the sketch below). You could add options so you can pass the percentage and path non-interactively. But the code does what I need it to do, and it's a demonstration of the simple efficiency of the C programming language.
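As a sketch of that first improvement, a small helper function (hypothetical, not part of the program above) could count the non-hidden entries in the source directory using the same dirent calls the program already relies on:

/* Count the non-hidden entries in a directory; returns -1 on error. */
#include <dirent.h>

int countFiles(const char *path)
{
    DIR *d = opendir(path);
    struct dirent *entry;
    int count = 0;

    if (d == NULL)
        return -1;                      /* directory could not be opened */
    while ((entry = readdir(d)) != NULL) {
        if (entry->d_name[0] != '.')    /* skip hidden entries, "." and ".." */
            count++;
    }
    closedir(d);
    return count;
}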

Use this C program I made on Linux to listen to your favorite songs on the go.



OpenWrt, an open source alternative to firmware for home routers

Stephan Avenwedde | Tue, 07/12/2022 - 15:00

If you're reading this article from home, you are probably connected through an LTE/5G/DSL/WiFi router. Such devices are usually responsible for routing packets between your local devices (smartphone, PC, TV, and so on) and providing access to the world wide web through a built-in modem. Your router at home most likely has a web-based interface for configuration purposes. Such interfaces are often oversimplified because they are made for casual users.

If you want more configuration options but don't want to spend money on a professional device, you should take a look at an alternative firmware such as OpenWrt.

OpenWrt features

OpenWrt is a Linux-based, open source operating system targeting embedded network devices. It is mainly used as a replacement for the original firmware on home routers of all kinds. OpenWrt comes with all the useful features a good router should have, like a DNS server (dnsmasq), WiFi access point and client functionality, and the PPP protocol for modem functionality. Unlike with the standard firmware, everything is fully configurable.

LuCI Web Interface

OpenWrt can be configured remotely by command line (SSH) or using LuCI, a GUI configuration interface. LuCI is a lightweight, extensible web GUI written in Lua, which enables an exact configuration of your device. Besides configuration, LuCI provides a lot of additional information like real time graphs, system logs, and network diagnostics.
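If you prefer the command line, the SSH route is just a login to the router's LAN address (by default, OpenWrt answers on 192.168.1.1):

$ ssh root@192.168.1.1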

Stephan Avenwedde, CC BY-SA

There are some optional extensions available for LuCI to add even further configuration choices.

Writeable file system

Another highlight is the writeable filesystem. While the stock firmware is usually read-only, OpenWrt comes with a writeable filesystem thanks to a clever solution that combines OverlayFS with SquashFS and JFFS2 filesystems to allow installation of packages to enhance functionality. Find more information about the file system architecture in the OpenWrt documentation.

Extensions

OpenWrt has an associated package manager, opkg, which allows you to install additional services. Some examples are an FTP server, a DLNA media server, an OpenVPN server, a Samba server to enable file sharing, or Asterisk (software to control telephone calls). Of course, some extensions require appropriate resources from the underlying hardware.
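Installing an extension follows the usual two-step package-manager pattern. For example, to add the FTP server mentioned later in this article (exact package names can vary between OpenWrt releases):

$ opkg update
$ opkg install vsftpd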

Motivation

You might wonder why you should try to replace a router manufacture's firmware, risking irreparable damage to your device and loss of warranty. If your device works the way you want, then you probably shouldn’t. Never touch a running system! But if you want to enhance functionality, or if your device is lacking configuration options, then you should check whether OpenWrt could be a remedy.

In my case, I wanted a travel router that I can place in an appropriate position when I'm on a campsite in order to get a good connection to the local WiFi access point. The router should connect itself as an ordinary client and broadcast its own access point for my devices. This allows me to configure all my devices to connect with the router's access point, and I only have to change the router's client connection when I'm somewhere else. Moreover, on some campsites you only get an access code for one single device, a limitation this setup lets me work around.

As my travel router, I choose the TP-Link TL-WR902AC for the following reasons:

  • Small
  • Two Wifi antennas
  • 5V power supply (USB)
  • Low power consumption
  • Cost effective (you get it for around $30)

To get an idea of the size, here it is next to a Raspberry Pi4:

Stephan Avenwedde, CC BY-SA 4.0

Even though the router has all the hardware capabilities I need, I found out relatively quickly that the default firmware didn't let me configure it the way I wanted. The router is mainly intended as a WiFi access point that repeats an existing WiFi network or connects itself to the web over the onboard Ethernet interface. The default firmware is very limited for these use cases.

Fortunately, the router is capable of running OpenWrt, so I decided to replace the original firmware with it.

Installation

If your LTE/5G/DSL/WiFi router meets the minimum requirements, chances are high that it's possible to run OpenWrt on it. As the next step, look in the hardware table and check whether your device is listed as compatible, and which firmware package you have to choose. The page for the TP-Link TL-WR902AC also includes the installation instructions, which describe how to flash the internal memory.

The process of flashing the firmware can vary between different devices, so I won't go into detail on this. In a nutshell, I had to set up a TFTP server on a network interface with a certain IP address, rename the OpenWrt firmware file, and then boot up the device while holding the reset button.

Configuration

Once flashing has completed successfully, your device should boot up with the new firmware. It may take a bit longer to boot now, as OpenWrt comes with many more features compared to the default firmware.

OpenWrt acts as a DHCP server, so in order to begin with configuration, make a direct Ethernet connection between your PC and the router, and configure your PC’s Ethernet adapter as a DHCP client.

On Fedora Linux, to activate the DHCP client mode for your network adapter, first you have to find out the connection UUID by running:

$ nmcli connection show
NAME          UUID         TYPE      DEVICE
Wired Conn 1  7a96b...27a  ethernet  ens33
virbr0        360a0...673  bridge   virbr0
testwifi      2e865...ee8  wifi     --
virbr0        bd487...227  bridge   --
Wired Conn 2  16b23...7ba  ethernet --

Pick the UUID for the connection you want to modify and then run:

$ nmcli connection modify <UUID> ipv4.method auto

You can find more information about these commands in the Fedora Networking Wiki.

After you have a connection to your router, open a web browser and navigate to http://openwrt/. You should now see LuCI’s login manager:

Stephan Avenwedde, CC BY-SA 4.0

Use root as the username, and leave the password field blank.

Configuring Wifi and routing

To configure your Wifi antennas, click on the Network menu and select Wireless.

Stephan Avenwedde, CC BY-SA 4.0

On my device, the antenna radio0 on top operates in 2.4 GHz mode and is connected to the local access point called MOBILE-INTERNET. The antenna radio1 below operates at 5 GHz and has an associated access point with the SSID OpenWrt_AV. With a click of the Edit button, you can open the device configuration to decide whether the device belongs to the LAN or WWAN network. In my case, the access point OpenWrt_AV belongs to the LAN network and the client connection MOBILE-INTERNET belongs to the WWAN network.

Stephan Avenwedde, CC BY-SA 4.0

Configured networks are listed under Network, in the Interfaces panel.

Stephan Avenwedde, CC BY-SA 4.0


In order to get the functionality I want, network traffic must be routed between the LAN and the WWAN network. The routing can be configured in the Firewall section of the Network panel. I didn’t change anything here because, by default, the traffic is routed between the networks, and incoming packets (from WWAN to LAN) have to pass the firewall.

So all you need to know is whether an interface belongs to LAN or (W)WAN. This concept makes it relatively easy to configure, especially for beginners. You can find more information in OpenWrt’s basic networking guide.

Captive portals

Public Wifi access points are often protected by a captive portal where you have to enter an access code or similar. Usually, such portals show up when you are first connected to the access point and try to open an arbitrary web page. This mechanism is realized by the access point's DNS server.

By default, OpenWrt has a security feature activated that prevents connected clients from a DNS rebinding attack. OpenWrt’s rebind protection also prevents captive portals from being forwarded to clients, so you must disable rebind protection so you can reach captive portals. This option is in the DHCP and DNS panel of the Network menu.

Stephan Avenwedde, CC BY-SA 4.0

Try OpenWrt

Thanks to an upgrade to OpenWrt, I got a flexible travel router based on commodity hardware. OpenWrt makes your router fully configurable and extensible and, thanks to the well-made web GUI, it's also appropriate for beginners. There are even a few select routers that ship with OpenWrt already installed. You are also able to enhance your router's functionality with lots of available packages. For example, I'm using the vsftpd FTP server to host some movies and TV series on a connected USB stick. Take a look at the project's homepage, where you can find many reasons to switch to OpenWrt.




7 kinds of garbage collection for Java

Jayashree Hutt… | Tue, 07/12/2022 - 15:00

An application written in a programming language like C or C++ requires you to program the destruction of objects in memory when they're no longer needed. The more your application grows, the greater the probability that you'll overlook releasing unused objects. This leads to a memory leak, the system memory eventually gets used up, and at some point there's no further memory to allocate. The result is a situation where the application fails with an OutOfMemoryError. But in the case of Java, Garbage Collection (GC) happens automatically during application execution, which alleviates the task of manual deallocation and avoids possible memory leaks.

Garbage Collection isn't a single task. The Java Virtual Machine (JVM) has seven different kinds of Garbage Collection, and it's useful to understand each one's purpose and strength.

1. Serial GC

A primitive implementation of GC using just a single thread. When Garbage Collection happens, it pauses the application (commonly known as a "stop the world" event). This is suitable for applications that can withstand small pauses. Garbage Collection has a small footprint, so this is the preferred GC type for embedded applications. This Garbage Collection style can be enabled at runtime:

$ java -XX:+UseSerialGC

2. Parallel GC

Like Serial GC, this also uses a "stop the world" method. That means that while GC is happening, application threads are paused. But in this case, there are multiple threads performing GC operation. This type of GC is suitable for applications with medium to large data sets running in a multithreaded and multiprocessor environment.

This is the default GC in the JVM, and is also known as the Throughput Collector. Various GC parameters, like throughput, pause time, number of threads, and footprint, can be tuned with suitable JVM flags (a combined example follows this list):

  • Number of threads: -XX:ParallelGCThreads=
  • Pause time: -XX:MaxGCPauseMillis=
  • Throughput (time spent for GC compared to actual application execution): -XX:GCTimeRatio=
  • Maximum heap footprint: -Xmx
  • Parallel GC can be explicitly enabled: java -XX:+UseParallelGC. With this option, minor GC in the young generation is done with multiple threads, but GC and compaction is done with a single thread in the old generation.
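As a combined, hypothetical example, the following invocation enables Parallel GC with four GC threads, a 200 ms pause-time target, and a 2 GB maximum heap (myapp.jar stands in for your own application):

$ java -XX:+UseParallelGC -XX:ParallelGCThreads=4 -XX:MaxGCPauseMillis=200 -Xmx2g -jar myapp.jar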

There's also a version of Parallel GC called Parallel Old GC, which uses multiple threads for both young and old generations:

$ java -XX:+UseParallelOldGC

3. Concurrent Mark Sweep (CMS)

Concurrent Mark Sweep (CMS) garbage collection runs alongside the application. It uses multiple threads for both minor and major GC. Compaction of live objects isn't performed in CMS GC after deleting unused objects, so the pause time is less than in other methods. Because this GC runs concurrently with the application, it slows the application's response time. It is suitable for applications with low pause-time requirements. This GC was deprecated in Java 9 and completely removed in Java 14. If you're still using a Java version that has it, though, you can enable it with:

$ java -XX:+UseConcMarkSweepGC

In the case of CMS GC, the application is paused twice. It's paused first when it marks a live object that's directly reachable. This pause is known as the initial-mark. It's paused a second time at the end of the CMS GC phase, to account for the objects that were missed during the concurrent cycle, when application threads updated the objects after CMS GC were completed. This is known as the remark phase.

4. G1 (Garbage First) GC


Garbage first (G1) was meant to replace CMS. G1 GC is parallel, concurrent, and incrementally compacting, with low pause-time. G1 uses a different memory layout than CMS, dividing the heap memory into equal sized regions. G1 triggers a global mark phase with multiple threads. After the mark phase is complete, G1 knows which region might be mostly empty and chooses that region for a sweep/deletion phase first.

In the case of G1, an object that's more than half a region size is considered a "humongous object." These objects are placed in the Old generation, in a region appropriately called the humongous region. To enable G1:

$ java -XX:+UseG1GC

5. Epsilon GC

This GC was introduced in JDK 11 and is a no-op (do nothing) GC. Epsilon just manages memory allocation. It doesn't do any actual memory reclamation. Epsilon is intended only for use when you know the exact memory footprint of your application and know that it is garbage-collection free.

$ java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC

6. Shenandoah

Shenandoah was introduced in JDK 12 and is a CPU-intensive GC. It performs compaction, deletes unused objects, and releases free space to the OS immediately. All of this happens in parallel with the application thread itself. To enable Shenandoah:

$ java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC

7. ZGC

ZGC is designed for applications that have low latency requirements and use large heaps. ZGC allows a Java application to continue running while it performs all garbage collection operations. ZGC was introduced in JDK 11u and improved in JDK 12. Both Shenandoah and ZGC have been moved out of the experimental stage as of JDK 15. To enable ZGC:

$ java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC

Flexible garbage collection

Java provides flexibility for memory management. It's useful to get familiar with the different methods available so you can choose what's best for the application you're developing or running.

Learn about the choices you have in Java for memory management.



An open conversation about open societies

Bryan Behrenshausen | Mon, 07/11/2022 - 15:00

Throughout the course of human history, why have some societies endured and evolved while others have struggled and disappeared? According to author Johan Norberg, being "open" might have something to do with it.


Norberg is the author of Open: The Story of Human Progress, a book several members of the Open Organization community found so compelling that we decided to publish a four-part series of reviews on it.

Happily, we were recently able to sit down with the author and continue our discussion. We wondered exactly what "being open" is in the context of global governance and international relations today. And how might we locate guidelines and approaches that will move everyone toward a greater good for the entire global community?

We recorded our conversation, are delighted to share it, and hope you find it as insightful as we did.

Check out the articles below to read the series.

Watch our interview with Johan Norberg, author of Open: The Story of Human Progress.


Read the series:

  • Open exchange, open doors, open minds: A recipe for global progress
  • Making the case for openness as the engine of human progress
  • 4 questions about the essence of openness
  • The path to an open world begins with inclusivity

Ron McFarland has been working in Japan for over 40 years, and he's spent more than 30 of them in international sales, sales management training, and expanding sales worldwide. He's worked in or been to more than 80 countries. Over the most recent 17 years, Ron had established distributors in the United States and throughout Europe for a Tokyo-headquartered, Japanese hardware cutting tool manufacturer. More recently, he's begun giving seminars in English and Japanese to people interested in his overseas travels and expanding business overseas. You can find him on LinkedIn.

2 Comments Register or Login to post a comment. Ron McFarland | July 11, 2022

In my article "The path to an open world begins with inclusivity," I mention six steps to promote inclusivity in societies, namely, 1-Recognition, 2-Respect, 3-Understanding, 4-Tolerance, 5-Optimism, and 6-Patience. In this discussion, two other concerns were mentioned:

The issue of overcoming fear of others is another concern that should be addressed.

Furthermore, the issue of values also came up. Based on our personal values, there are societies or even communities that we don’t want to be a part of. This could be included in those steps I write about.

Bryan Behrenshausen | July 11, 2022

I really enjoyed this conversation and hope we can do it again some time!

Why Agile coaches need internal cooperation

Mon, 07/11/2022 - 15:00
Why Agile coaches need internal cooperation Kelsea Zhang Mon, 07/11/2022 - 03:00 3 readers like this 3 readers like this

If you're an Agile coach, you probably seek to inspire and empower others as an external member of your team or department. However, many Agile coaches overlook the importance of internal cooperation. That's not necessarily a term you are familiar with, so allow me to explain.

What is internal cooperation?

As an Agile coach, you don't work alone. You try to find a partner in the team you're taking care of. This partner is expected to:

  • Undertake all or most of the Agile transformation in the future.
  • Find all possible opportunities for systematic improvement and team optimization.
  • Be self-motivated.
  • Not be managed by you; you delegate your enthusiasm and vision to them.

Of course, maybe you don't need such a person because, theoretically speaking, everyone in the team is your ideal candidate, and everyone is self-driven. Or maybe your whole team will magically become what you want it to be overnight.

Reality check: most of the time, you need a partner, an inside agent. Somebody to keep the spirit of Agile alive, whether you're there to encourage it or not.

More DevOps resources What is DevOps? The ultimate DevOps hiring guide DevOps monitoring tools guide A guide to implementing DevSecOps Download the DevOps glossary eBook: Ansible for DevOps Latest DevOps articles

Internal cooperation is required

Getting buy-in from the team you are coaching isn't a luxury; it's a requirement. If you're the only Agile practitioner on your team, then your team isn't Agile! So how do you cultivate this internal cooperation?

Clarify responsibility

Being Agile is supposed to be a team effort. The beneficiary is the team itself, but the team must also bear the burden of transformation. An Agile coach is meant to be inspiring and empowering, but the change doesn't happen in just one person. That's why teams must learn to consider and solve problems on their own. A team must have its own engine (your Agile partner is such an engine) rather than relying on the external force of the Agile coach. It's the engines that want to solve problems, and with the help of Agile coaches, their abilities and ways of thinking can be enriched and improved.

It's best to have an engine from the beginning, but that's not always possible. The earlier, the better, so look for allies from the start.

Know the team

When you find a partner, you gain someone who understands the team's situation better than you do. A good partner knows the team from the inside and communicates with it on a level you cannot. No matter how good you are as an Agile coach, you must recognize that an excellent Agile partner has a unique advantage in "localization."

The best approach is not for the Agile coach to make a customized implementation plan for the team and then leave the team responsible for execution. In my opinion, with the support of the Agile coach, the Agile partner should work with the team to make plans that best fit its needs. Next, try to implement those plans with frequent feedback, and keep adjusting them as needed.

You continue to observe progress, watch for moments when team members falter in Agile principles, and give them support at the right times. Sometimes, when something goes wrong, the right move is to stay silent, let the team hit a wall, and let it learn from its setbacks. Other times, stepping in to provide guidance is the right thing to do.

[ Related read: Agile adoption: 6 strategic steps for IT leaders ]

Is an Agile coach still necessary?

In a word: Absolutely!

Agile is a team effort. Everyone must collaborate to find processes that work. Solutions are often sparked by the collision of ideas between the Agile coach and the partner. The partner can then see exactly how an Agile theory is applied in daily work, and comes to understand the essence of Agile theories through those solutions.

As an Agile coach, you must have a solid theoretical foundation and the ability to apply that theory to specific scenarios. On the surface, you take charge of the theory while your Agile partner is responsible for the practice. However, an Agile coach must not be an armchair strategist, and teams aren't supposed to assume that the Agile coach is a theorist. In fact, an Agile coach must consciously let go of the practice part so the Agile partner can take over.

The significance of accompanying a team is not supposed to be pushing the team to move passively toward the Agile coach's vision. The amount of guidance required from you will fluctuate over time, but it shouldn't and can't last forever.

Find an Agile partner

How do you find your Agile partner? First of all, observe the team you are coaching and notice anyone who is in charge of continuous improvement, whether it's their defined job role or not. That person is your Agile partner.

If there's nobody like that yet, you must cultivate one. Be sure to choose someone with a good sense of project management. I have observed that team leaders or project managers who perform well in the traditional development model may not be good candidates in the Agile environment. In an Agile management model, you must have an open mind, a sense of continuous pursuit of excellence, a flexible approach, extensive knowledge, and strong self-motivation.

Be Agile together

Don't be shy about bringing on a partner to help you with your work and communication. Instead, find willing partners, and work together to make your organization an Agile one.

This article is translated from Xu Dongwei's Blog and is republished with permission.

An Agile coach is only as successful as their Agile partner. Here's how to foster internal cooperation and create an Agile team.

Image by:

Image by Mapbox Uncharted ERG, CC-BY 3.0 US

DevOps Agile Careers What to read next 5 agile mistakes I've made and how to solve them This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Meet Free Software Foundation Executive Director Zoë Kooyman

Fri, 07/08/2022 - 15:00
Meet Free Software Foundation Executive Director Zoë Kooyman Seth Kenlon Fri, 07/08/2022 - 03:00 3 readers like this 3 readers like this

The Free Software Foundation (FSF) started promoting the idea of sharing code way back in 1985, and since then it's defended the rights of computer users and developers. The FSF says that the terms "open" and "closed" are not effective words when classifying software, and instead considers programs either freedom-respecting ("free" or "libre") or freedom-trampling ("non-free" or "proprietary"). Whatever terminology you use, the imperative is that computers must belong, part and parcel, to the users, and not to the corporations that own the software the computers run. This is why the GNU Project, the Linux kernel, Freedesktop.org, and so many other open source projects are so important.

Recently, the FSF has acquired a new executive director, Zoë Kooyman. I met Zoë in 2019 at an All Things Open conference. She wasn't yet the executive director of the FSF at that time, of course, but was managing their growing list of major events, including LibrePlanet. I was captivated by her energy and sincerity as she introduced me to a seemingly nonstop roster of people creating the freedom-respecting software I used on a daily basis. I had stumbled into an FSF meetup and ended up hanging out with the people who were actively defining the way I lived my digital life. These were the people who ensured that I had what Zoë Kooyman and the FSF calls the four essential freedoms:

  • The freedom to run the program as you wish, for any purpose (freedom 0).
  • The freedom to study how the program works and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help others (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

When I heard about Zoë's appointment as executive director, I emailed her for an interview and she was kind enough to take some time out of her very busy schedule to have a chat.

Seth Kenlon: You're the executive director of the FSF! How did you get here?

Zoë Kooyman: In my working life, I started out as an event organizer, traveling the world while producing some of the world's biggest music shows. Working with so many different cultures in ever-changing locations is exciting, as is making all the different elements of production come together, whether that's the show, technique, or the other live elements. It's a juggling game to have everything fall into place at the right moment. I spent a lot of time living and working in different countries, and learning a lot about organization and communication thanks to this work. I also studied, and worked with different forms of media, how they are experienced, and their relationship with society.

It was in university that I first learned about copyleft, and about how we can use existing structures to our benefit and drive change. It was also then that media (as well as the Internet, and software) landscapes started changing rapidly, with encroachments on freedom as a consequence. Moving to the US changed things for me. In the US, I developed a much stronger sense of urgency for matters of social responsibility, and so I decided to act on it. I was thankful to John Sullivan (the FSF executive director at that time) for hiring me based on what I knew about free software and my organizing experience, and for allowing me to bring the two together.

Seth: How did you get into Free Software?

Zoë: We tend to expect technical people to be the main people affected by free software, but free software is a movement to defend freedom for anyone using a computer. Actually, software freedom affects members of marginalized communities who are unable to have regular access to a computer. Software shapes their lives as well.

What the concept of copyleft, as well as the GNU Project, has achieved is exceptional. To truly observe the direction society was heading in, and say, "It doesn't have to be this way. We can take matters into our own hands." That changed my outlook on life early on. I started working on the idea of using already existing materials and reintroducing them to different subcultures. In the entertainment industry you see this all the time, the inspiration from and building on other people's work, and the result is a reflection of the time we live in, as well as a nod to history. True progression cannot happen without that freedom.

As a commentary on copyright for film, I spent time working with the National Film Institute in the Netherlands to create a compilation of "orphaned footage" that was shown at a large scale dance event for thousands of young people in an area with a 170m panoramic screen and a live DJ playing to it. They have continued to play it regularly at events like the Dutch Museumnacht.

Not being a technical person, I expressed these ideas culturally, but over the years, I was confronted with the ideas of free software more and more, and I realized that with the continued integration of software into our lives (and sometimes our bodies), the fight for free software is becoming more relevant every day. In a world where proprietary software prevails, our society will progress in a way that favors profit and the progression of the few over the freedom of many. Without free software, there are so many aspects of life, so many important social causes that cannot truly succeed.

More great content Free online course: RHEL technical overview Learn advanced Linux commands Download cheat sheets Find an open source alternative Explore open source resources

Seth: When did you start with the FSF?

Zoë: Early 2019, one week before the last in-person edition of LibrePlanet.

Seth: What attracted you to the Executive Director role?

Zoë: The FSF is just one organization that is trying to move the needle towards a more equitable, more collaborative, and more software-literate society, but it has been at the core of the movement for a long time. Society is changing rapidly, and most people are not being properly prepared to deal with the digital building blocks of today's society, i.e., software. This is all incredibly important work, and there are not enough people doing it. It is important to have an organization that can handle the different challenges that lie ahead.

The executive director role is, in a way, merely a facilitating role for the staff and the community to be able to make significant changes toward free software. I believe it is vitally important that we continue to spread the free software message, and with the team we have at the FSF, I believe we can make a real difference. I believe I can use the lessons of working with so many different cultures and people, and of organizing really challenging projects globally, to help get the best out of all of us. The support I received from staff, management, the community, and the board convinced me it was a good decision to take this on.

Seth: What do you see as the biggest challenges in software freedom today? What should the FSF's role be in addressing those challenges?

Zoë: As software has integrated itself more and more into the basic fabric of society, it's also become more invisible. Software is now so widespread, and we've been conditioned to overlook it. We focus on what a program can do, not how it does it, let alone if it respects you as a user. And in the meantime, software is proliferating more rapidly than ever before. If people don't understand the fabric out of which a program is made, and all they do, all day, is use these programs, then how can we even begin to explain to them that they are being treated unjustly?

The FSF's role is to bring every conversation back to this logic of user freedom, to remind us that the tools we use are not benign. Education and government adoption are important focus areas for that reason. If we get people to focus on the issue of software freedom in those areas, we will truly make a difference. Education will help make sure future generations have a chance at freedom, and free software in government is about protecting citizens from unjust influences through proprietary software (maintaining digital sovereignty).

We can show people that today's society is teaching us a faulty lesson: that it is normal to be subjected to encroachments on your freedoms for reasons "too complex to understand." If you want convenience, connection, or just to do your job, you need to trust these organizations and abide by their will. That is not true. We have an entire community of people who believe we can have a society that doesn't ask you to surrender your freedoms to function in it. And we have this legal framework that supports our ideas. People of all backgrounds and skill levels join our conversations daily, more and more people care about their freedom, and everyone has their own reasons. We learn new things every day about how we can protect ourselves and others, and I look forward to a freer future.

Find out what the Free Software Foundation (FSF) is all about.

Image by:

Photo by Rob Tiller, CC BY-SA 4.0

What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Check disk usage in Linux

Thu, 07/07/2022 - 15:00
Check disk usage in Linux Don Watkins Thu, 07/07/2022 - 03:00 1 reader likes this 1 reader likes this

Knowing how much of your disk is being used by your files is an important consideration, no matter how much storage you have. My laptop has a relatively small 250GB NVME drive. That's okay most of the time, but I began to explore gaming on Linux a couple of years ago. Installing Steam and a few games can make storage management more critical.

The du command

The easiest way to examine how much of your disk your files are using is the du command. This command-line utility estimates file space usage. Like all Linux tools, du is very powerful, but knowing how to use it for your particular needs is helpful. I always consult the man page for any utility. This specific tool has several switches to give you the best possible snapshot of your files and how much space they consume on your system.

There are many options for the du command. Here are some of the common ones:

  • -a - write counts for all files, not just directories
  • --apparent-size - print apparent sizes rather than disk usage
  • -h - print sizes in human-readable format
  • -b - print sizes in bytes
  • -c - produce a grand total
  • -k - count sizes in 1K blocks
  • -m - count sizes in megabytes

Be sure to check the du man page for a complete listing.

More Linux resources Linux commands cheat sheet Advanced Linux commands cheat sheet Free online course: RHEL technical overview Linux networking cheat sheet SELinux cheat sheet Linux common commands cheat sheet What are Linux containers? Our latest Linux articles

Display all files

The first option you could choose is du -a. It provides a readout of all files on your system and the directories they are stored in. This command lets me know I've got 11,555,168 one-kilobyte blocks (roughly 11GB) stored in my home directory. Using du -a provides a quick recursive look at my storage system. What if I want a more meaningful number, and I want to drill down into the directories to see where the big files are on my system?

I think there are some big files in my Downloads directory, so I enter du -a /home/don/Downloads to get a good look at that Downloads directory.

$ du -a ~/Downloads
4923    ./UNIX_Driver_5-0/UNIX Driver 50
4923    ./UNIX_Driver_5-0
20     ./epel-release-latest-9.noarch.rpm
12    ./rpmfusion-free-release-9.noarch.rpm
2256    ./PZO9297 000 Cover.pdf
8    ./pc.md
2644    ./geckodriver-v0.31.0-linux64.tar.gz
466468  

The numbers on the far left are the file sizes in 1K blocks. I want something more helpful, so I add the switch for human-readable format and run du -ah ~/Downloads. The result, such as the 456M total, is a much more useful number format for me.

$ du -ah ~/Downloads
4.9M    ./UNIX_Driver_5-0/UNIX Driver 50
4.9M    ./UNIX_Driver_5-0
20K    ./epel-release-latest-9.noarch.rpm
12K    ./rpmfusion-free-release-9.noarch.rpm
2.2M    ./PZO9297 000 Cover.pdf
8.0K    ./pc.md
2.6M    ./geckodriver-v0.31.0-linux64.tar.gz
456M    .

As with most Linux commands, you can combine options. To look at your Downloads directory in a human-readable format, use the du -ah ~/Downloads command.

[ Read also: 5 Linux commands to check free disk space ]

Grand total

The -c option provides a grand total for disk usage at the last line. I can use du -ch /home/don to display every file and directory in my home directory. There is a lot of information, and I really just want what is at the end, so I will pipe the disk usage command to tail. The command is du -ch /home/don | tail.

Image by:

(Don Watkins, CC BY-SA 4.0)
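For example, to print only the summary line for a single directory (your totals will differ, of course):

$ du -ch ~/Downloads | tail -n 1
456M    total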

The ncdu command

Another option for Linux users interested in what is stored on their drive is the ncdu command. The command stands for NCurses Disk Usage. Depending on your Linux distribution, you may need to download and install it.

On Linux Mint, Elementary, Pop_OS!, and other Debian-based distributions:

$ sudo apt install ncdu

On Fedora, Mageia, and CentOS:

$ sudo dnf install ncdu

On Arch, Manjaro, and similar:

$ sudo pacman -S ncdu

Once installed, you can use ncdu to analyze your filesystem. Here is a sample output after issuing ncdu inside my home directory. The man page for ncdu states that "ncdu (NCurses Disk Usage) is a curses-based version of the well-known du, and provides a fast way to see what directories are using your disk space."

Image by:

(Don Watkins, CC BY-SA 4.0)

I can use the arrow keys to navigate up and down and press the Enter key to enter a directory. An interesting note is that du reported total disk usage in my home directory as 12GB, and ncdu reports that I have total disk usage of 11GB. You can find more information in the ncdu man page.

You can explore a particular directory by pointing ncdu to that directory. For example, ncdu /home/don/Downloads.

Image by:

(Don Watkins, CC BY-SA 4.0)

Press the ? key to display the Help menu.

Image by:

(Don Watkins, CC BY-SA 4.0)

Wrap up

The du and ncdu commands provide two different views of the same information, making it easy to keep track of what's stored on your computer.

If you're not comfortable in the terminal or just looking for yet another view of this kind of information, check out the GNOME Disk Usage Analyzer. You can easily install and use it if it's not already on your system. Check your distribution for baobab and install it if you'd like to experiment.
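On Fedora, for instance, the analyzer is available as the baobab package (most other distributions use the same package name):

$ sudo dnf install baobab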

The du and ncdu commands provide two different views of the same information, making it easy to keep track of what's stored on your computer.

Linux Command line What to read next Replace du with dust on Linux Check used disk space on Linux with du Check free disk space in Linux with ncdu 5 Linux commands to check free disk space This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 3 Comments Register or Login to post a comment. Greg Pittman | July 7, 2022

Your second example doesn't show the right command. It should have been:

du -ah ~/Downloads

I never use du without the -h option. Otherwise, the results are too cryptic to understand.

Don Watkins | July 7, 2022

Good catch Greg! I'll mention that to the editors.

In reply to by Greg Pittman

Robert Harker | July 12, 2022

The -h flag is new to me. I like it. Try adding this alias to your .bashrc file:
alias du='du -h'

I learned the -s flag for summary rather than the -c flag. Easier to remember.

Some du commands I use:

du -sh *
Summarize disk usage for directories and files in the current directory.

du -sh * | sort -n
Sort the size of the directories or files in the current directory.
-n sorts based on numerical value not alphanumeric. -rn reverse the sort order.

du -sh * | sort -n | tail -5
Show the 5 largest directories or files in the current directory.
This is just a summary of the top level files and directories in the current directory.

find . -type f | xargs -d '\n' du -sh | sort -n | tail -5
Find the 5 largest files in the current directory.
Useful for finding unexpected huge files you were not aware of.
The -d '\n' argument to xargs tells xargs to break arguments on newlines. It avoids problems with filenames with spaces in them.

du was one of the first UNIX commands I learned 45 years ago. An oldie but goodie.

3 steps to create an awesome UX in a CLI application

Wed, 07/06/2022 - 15:00
3 steps to create an awesome UX in a CLI application Noaa Barki Wed, 07/06/2022 - 03:00 Register or Login to like Register or Login to like

As I was sitting in a meeting room, speaking with one of my teammates, our manager walked in with the rest of the dev team. The door slammed shut and our manager revealed that he had a big announcement. What unfolded before our eyes was the next project we were going to develop—an open source CLI (command line interface) application.

In this article, I'd like to share what I learned during our development process, and specifically what I wish I had known before we began developing Datree's CLI. Perhaps the next person can use these tips to create a great CLI application faster.

My name is Noaa Barki. I've been a full-stack developer for over six years, and I'll let you in on a little secret—I have a superpower: My interest and expertise are evenly split between back-end and front-end development. I simply cannot choose one without the other.

So when my manager revealed the news about our future CLI, the back-end developer-me got very excited because I'd never created a CLI (or any open source project, for that matter). A few seconds later, the front-end developer-me started to wonder, How can I build an awesome user experience in a CLI application?

Since Datree CLI helps engineers prevent Kubernetes misconfigurations, my users are primarily DevOps and engineers, so I interviewed all my DevOps friends and searched online about the general DevOps persona.

More great content Free online course: RHEL technical overview Learn advanced Linux commands Download cheat sheets Find an open source alternative Explore open source resources

Here are the steps I came up with:

  1. Design the commands
  2. Design the UI
  3. Provide backward compatibility
Step 1: Design the commands

Once you have completed the strategic process, it's time to design the commands. I think about CLI applications like magic boxes—they hold great features that work in a magical way, but only if you know how to use them. That means a CLI application must be intuitive and easy to use.

My top six principles for CLI commands

Here are my top six principles and best practices for designing and developing CLI commands:

1. Input flag vs. arguments

Use arguments for required fields, and for everything else, use flags. Take, for example, the datree test command, which prints the policy results, and say that you want to enable the user to print the output into a specific file. If you use datree test {pattern} {output-file}, it is difficult to understand from reading the executed command which argument is the pattern and which argument is the file path.

For example, this occurs with the following command: datree test **/* **.YAML. However, if you use datree test {pattern} -output {output-path}, it becomes much clearer.

Note: Reports show that most users find flags to be clearer.

2. Enum-style vs. Boolean flags

It's preferable to use an enum-style flag over a Boolean-style flag, because with Boolean flags you (as a developer and as a user) need to think about all the combinations of the presence or absence of each flag in the command. An enum-style flag is a flag that takes a value. Enum-style flags also make it much easier to implement tab completion.
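To make the contrast concrete, here is a hypothetical export command (the tool and flag names are invented purely for illustration):

# Boolean flags: you must reason about every combination of present/absent flags
$ mytool export --json --no-color --quiet

# Enum-style flags: each flag carries an explicit value
$ mytool export --format=json --color=never --verbosity=quiet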

3. Use familiar language

Remember that a CLI is built more for humans than machines. Pick real-world language for your commands and descriptions.

4. Naming conventions

Use CLI commands that are named in a SINGLE form and VERB-NOUN format. This allows the command to be read like an imperative or request, for example: Computer, start app!

Minimize the total number of commands you use, and don't rush to introduce new verbs for new commands. This makes it easier for users to remember command names.

5. Prompts

Provide a bypass to the prompt option. The user cannot script the command if prompting is required to complete it. To avoid frustrating users, a simple --output flag can be a valuable solution to allow the user to parse the output and script the CLI.

6. Command descriptions

The root command should list all the commands with their descriptions. Provide a command description to all commands (or do not offer descriptions at all), choose the screen width you want it to fit into (generally an 80-character width), and begin with a lowercase character. Also, don't end with a period to avoid unclear line breaks or lost periods.

Step 2: Design the UI

Now you have a solid definition for your users. You have also planned and designed your commands and outputs. Next it's time to think about making the CLI application aesthetic, accessible, and easy to learn.

If you think about it, almost every app must deal with UX (user experience) challenges during the users' onboarding and journey. The how part of UX for web applications is much more obvious because you have many component libraries (such as material-UI and bootstrap) that make it easier to adopt standard style guides and functionality flows. But what about CLI applications? Are there any design conventions for CLI interfaces? How can you create an aesthetic design of the CLI functionality that is also accessible? Is there any way to make the CLI UI as friendly as a GUI?

Top three UI and UX best practices for CLI applications

1. Use colors

Colors are a great way to attract your user's eyes and help them read commands and outputs much faster. The most recommended font colors are magenta, cyan, blue, green, and gray, but don't forget that background colors can provide more variety. I encourage you to use yellow and red colors but remember that these are typically saved for errors and warnings.

2. Input-output consistency

Be consistent with inputs and outputs across the application; this encourages usability and allows the user to learn how to interact with new commands quickly.

3. Ordering arguments

Choose an argument's position based on how it correlates with the command's action. Consider NestJS's generate command nest generate {schematic} {name}, which needs schematic and name as arguments. Notice that the action generate refers directly to the schematic, not name, so it makes more sense for schematic to be the first arg.

Step 3: Provide backward compatibility

Now that you know how to create a great CLI application, don't forget to keep your users in mind, especially those who script your CLI. Any change in a command's output may break users' existing scripts, so avoid modifying the output whenever you can.

Wrap up

Creating a new CLI is exciting and challenging, and doing so with a helpful and easy UX adds to the challenge. My experience shows that three key factors go into a successful UX for a CLI project:

  1. Design the commands
  2. Design the UI
  3. Provide backward compatibility

Each of these phases has its own components that support the logic and make the lives of your users easier.

I hope these concepts are useful and that you have the opportunity to apply them in your next project.

Here is what I've learned to be the key factors that go into a successful user experience for a CLI project.

Image by:

Image by Mapbox Uncharted ERG, CC-BY 3.0 US

Command line DevOps Art and design What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Why I love Tig for visualizing my Git workflows

Tue, 07/05/2022 - 15:00
Why I love Tig for visualizing my Git workflows Sumantro Mukherjee Tue, 07/05/2022 - 03:00 Register or Login to like Register or Login to like

If you find navigating your Git repositories frustratingly complex, have I got the tool for you. Meet Tig.

Tig is an ncurses-based text-mode interface for Git that allows you to browse changes in a Git repository. It also acts as a pager for the output of various Git commands. I use this tool to give me a good idea of what’s been changed in which commit by whom, the latest commit merged, and so much more. Try it for yourself, starting with this brief tutorial.

Installing Tig

On Linux, you can install Tig using your package manager. For instance, on Fedora and Mageia:

$ sudo dnf install tig

On Debian, Linux Mint, Elementary, Pop_OS, and other Debian-based distributions:

$ sudo apt install tig

On macOS, use MacPorts or Homebrew. Tig’s complete installation guide can be found in the Tig Manual.

Using Tig

Tig provides an interactive view of common Git output. For instance, with Git you can view all refs with the command git show-ref:

$ git show-ref
98b108... refs/heads/master
6dae95... refs/remotes/origin/1010-internal-share-partition-format-reflexion
84e1f8... refs/remotes/origin/1015-add-libretro-openlara
e62c7c... refs/remotes/origin/1016-add-support-for-retroarch-project-cd
1c29a8... refs/remotes/origin/1066-add-libretro-mess
ffd3f53... refs/remotes/origin/1155-automatically-generate-assets-for-external-installers
ab4d14... refs/remotes/origin/1160-release-on-bare-metal-servers
28baa9... refs/remotes/origin/1180-ipega-pg-9118
8dff1d... refs/remotes/origin/1181-add-libretro-dosbox-core-s
81a7fe... refs/remotes/origin/1189-allow-manual-build-on-master
[...]

With Tig, you can get that information and much more in a scrollable list, plus keyboard shortcuts to open additional views with details about each ref.

Image by:

(Sumantro Mukherjee, CC BY-SA 4.0)
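Tig also ships with subcommands that open directly into a specific view. A few that I find handy (any file in your repository works with blame; README.md is just an example):

$ tig                 # browse the commit history
$ tig status          # an interactive take on git status
$ tig blame README.md # annotate a file, line by line
$ tig refs            # browse branches and tags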

More on Git What is Git? Git cheat sheet Markdown cheat sheet New Git articles

Pager mode

Tig enters pager mode when input is provided to stdin (standard input). When the show subcommand is specified and the --stdin option is given, stdin is assumed to be a list of commit IDs, which is forwarded to git-show:

$ git rev-list --author=sumantrom HEAD | tig show --stdin

Log and diff views

When you're in Tig's log view, you can press the d key on your keyboard to display diffs. This displays the files changed in the commit and the lines that were removed and added.

Interactive Git data

Tig is an excellent addition to Git. It makes it easy to review your Git repository by encouraging you to explore the logs without having to construct long and sometimes complex queries.

Add Tig to your Git toolkit today!

Tig is an excellent tool for reviewing your Git repository by encouraging you to explore the logs without having to construct long and sometimes complex queries.

Image by:

opensource.com

Git What to read next How to use Tig to browse Git logs 6 best practices for managing Git repos This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Manage your files in your Linux terminal with ranger

Mon, 07/04/2022 - 15:00
Manage your files in your Linux terminal with ranger Sumantro Mukherjee Mon, 07/04/2022 - 03:00 Register or Login to like Register or Login to like

The most basic way to look at your files and folders is to use the commands ls and ll. But sometimes, I want to see not just the file metadata but also the contents of a file at a glance. For that, I use ranger.

If you love working out of your console and using Vim or Vi, and you don’t want to leave your terminal for any reason, ranger is your new best friend. Ranger is a minimal file manager that allows you not only to navigate through the files but also to preview them. Ranger comes bundled with rifle, a file executor that can efficiently choose programs that work with a given file type.

Installing ranger on Linux

Ranger can be installed in Fedora or any RPM-based distro by running

$ sudo dnf install ranger

Ranger is also available for other distros and macOS.
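On Debian-based distributions and on macOS with Homebrew, the package is typically also named ranger (check your package manager if the name differs):

$ sudo apt install ranger
$ brew install ranger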

Using ranger for the first time

As a user, you can start ranger by simply typing ranger in your favorite terminal. The arrow keys handle navigation. This screenshot is an excellent example of how I can preview the code of the config.example file stored in Kernel-tests.

Image by:

(Sumantro Mukherjee, CC BY-SA 4.0)

Picking any file and hitting F4 opens up your default editor and lets you edit the files right away!

What about images and videos?

Using rifle with ranger lets you quickly find the program associated with a given file. Hovering over an image and then trying to open it is very simple; just hit Enter. Here’s how that looks:

Image by:

(Sumantro Mukherjee, CC BY-SA 4.0)

Hitting i on an image file will give the user all the EXIF data. Hitting Shift+Enter will open the PDF file.

Image by:

(Sumantro Mukherjee, CC BY-SA 4.0)

The same key combo will open and start playing videos in the system's default video player that supports the codec. The example below is an mp4 video, which plays just fine on VLC.

Image by:

(Sumantro Mukherjee, CC BY-SA 4.0)

More Linux resources Linux commands cheat sheet Advanced Linux commands cheat sheet Free online course: RHEL technical overview Linux networking cheat sheet SELinux cheat sheet Linux common commands cheat sheet What are Linux containers? Our latest Linux articles

File ops

The following key bindings work well unless otherwise configured by the Vim user.

j: Move down
k: Move up
h: Move to parent directory
gg: Go to the top of the list
i: Preview file
r: Open file
zh: View hidden files
cw: Rename current file
yy: Yank (copy) file
dd: Cut file
pp: Paste file
u: Undo
z: Change settings
dD: Delete file

Console commands

Sometimes I have a folder that contains screenshots of a particular software when I am drafting articles. Selecting or marking files by hitting Space and then typing :bulkrename helps me move all the weird timestamps to, for example, lorax1, lorax2 , and so on. An example is below:

Image by:

(Sumantro Mukherjee, CC BY-SA 4.0)

Other useful console commands include the following (a short combined example follows the list):

:openwith: Open a select file with a program of your choice
:touch FILENAME: Create a file
:mkdir FILENAME: Create a directory
:shell : Run a command in shell
:delete: Delete files
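For example, a quick way to tidy a cluttered directory from inside ranger is to combine two of these commands (the directory name here is arbitrary):

:mkdir screenshots
:shell mv *.png screenshots/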

Will it work in tty2/3/4?

As someone who works in quality assurance (QA), I've found that searching for logs and reading them has never been easier. Even when my GNOME Display Manager crashes, I can switch over to tty2, log in with my username and password, start ranger with superuser permission, and I'm all set to explore!

Ranger is a great tool for working with files without ever having to leave the terminal. Ranger is minimal and customizable, so give it a go!

Try this lightweight open source tool to preview files without leaving the terminal.

Linux What to read next How I use the attr command with my Linux filesystem This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Why I switched from Apple Music to Jellyfin and Raspberry Pi

Fri, 07/01/2022 - 15:00
Why I switched from Apple Music to Jellyfin and Raspberry Pi DJ Billings Fri, 07/01/2022 - 03:00 Register or Login to like Register or Login to like

One day earlier this year, I looked up a song in my Mac's music library that's been there since 2001. I received an error message, "This song is not currently available in your country or region." I thought this might be just a glitch on my iPhone, so I tried the desktop app. No go. I opened up my media drive, and there was the music file. To check if it played, I hit the spacebar, and it began to play immediately. Hrmph. I have the file, I thought. Why won't the Music app play it?

Image by:

(DJ Billings, CC BY-SA 4.0)

After some digging, I found other users with similar issues. To sum up, it seems that Apple decided that it owned some of my songs, even though I ripped this particular song to an MP3 from my own CD in the late 1990s.

To be clear, I'm not an Apple Music subscriber. I'm referring to the free "music" app that used to be called iTunes. I gave Apple Music a go when it first launched but quickly abandoned it. They decided to replace my previously owned songs with their DRM versions. In fact, I believe that's where my messed-up music troubles began. Since then, I've been bombarded with pushy Apple notifications trying to steer me back into becoming an Apple Music subscriber.

The sales notifications were annoying, but this suddenly unplayable song was unacceptable. I knew there had to be a better way to manage my music, one that put me in control of the music and movie files I already owned.

More great content Free online course: RHEL technical overview Learn advanced Linux commands Download cheat sheets Find an open source alternative Explore open source resources

Searching for a new open source media solution

After this incident, I naturally took to social media to air my grievances. I also made a short list of needs I had for what I thought was the ideal solution:

  • It needs to be open source and run on Linux.
  • I want to run it on my own server, if possible.
  • It should be free (as in beer) if possible.
  • I want the ability to control how the media is organized.
  • I want to be able to watch my movies on my TV as well as listen to music.
  • It should work from home (WiFi) and over the internet.
  • It should be cross-platform accessible (Linux, Mac OS, Windows, Android, iOS).

A tall order, I know. I wasn't sure I'd get everything I wanted, but I thought aiming for the stars was better than settling for something quick and easy. A few people suggested Jellyfin, so I decided to check it out, but without much optimism considering the amount of rabbit holes I'd already been down.

What I discovered was unbelievable. Jellyfin fulfilled every item on my list. Better still, I found that I could use it with my Raspberry Pi. I jumped onboard the Jellyfin train and haven't looked back.

Raspberry Pi and Jellyfin are the perfect combination

I will describe what I did, but this is not intended to be a complete tutorial. Believe me when I say that if I can do it, so can you.

Raspberry Pi 4

I used a Raspberry Pi 4 Model B with 4GB of RAM. The SD card is 128GB, which is more than I need. The Pi 4 has WiFi but it's connected to my router using ethernet, so there's less lag.

One of the things I love about the Raspberry Pi is the ability to swap out the entire OS and storage by slipping in a new SD card. You can switch back in a few seconds if the OS doesn't suit you.

Western Digital Elements 2 TB external SSD

Since all of my media won't fit on a 128GB SD card, an external drive was essential. I also like having my media on a drive separate from my OS. I previously used a 2TB external HD from Seagate that worked fine. I was trying to keep my budget low, but I also wanted an SSD, one with a small footprint this time. The Western Digital drive is tiny, fast, and perfect. To work with the Raspberry Pi, I had to format the drive as exFAT and add a package to help the Pi mount it.
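On Raspberry Pi OS and other Debian-based systems, exFAT support usually comes from a couple of extra packages (on newer releases, exfatprogs replaces exfat-utils):

$ sudo apt install exfat-fuse exfat-utils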

Jellyfin

I can't say enough good things about Jellyfin. It ticks all the boxes for me. It's open source, 100% free, has no central server, data collection, or tracking. It also plays all of the music, movies, and TV shows I have on my drive.

There are clients for just about every platform, or you can listen or view in your web browser. Currently, I'm listening to my music on the app for Debian and Ubuntu and it works great.

Image by:

(DJ Billings, CC BY-SA 4.0)

Setting up Jellyfin

Many people, more brilliant than I, have created detailed instructions on Jellyfin's setup, so I would rather point to their work. Plus, Jellyfin has excellent documentation. But I'll lay out the basics, so you know what to expect if you want to do this yourself.

Command-line

First, you'll need to be confident using the terminal to write commands or be willing to learn. I encourage trying it because I've become highly skilled and confident in Bash just by doing this project.

File organization

It's a good idea to have your media files well-organized before you start. Changing things later is possible, but you'll have fewer issues with Jellyfin recognizing your files if they're categorized well.

Jellyfin uses the MusicBrainz and AudioDb databases to recognize your files and I've found very few errors. Seeing the covers for movies and music populate after it finds your catalog is very satisfying. I've had to upload my artwork a few times, but it's an easy process. You can also replace the empty or generic category images with your own art.

Users

You can add users and adjust their level of control. For example, in my family, I'm the only one with the ability to delete music. There are also parental controls available.

Process and resources

Here's the general process and some of the resources I used to set up my Raspberry Pi media server using Jellyfin:

  1. Install the OS of your choice on your Pi.

  2. Install Jellyfin on your Pi.

  3. If you're using a big external drive for storage, format it so that it uses a file system usable by your Pi, but also convenient for you. I've found exFAT to be the easiest file system to use across all the major platforms.

  4. Configure the firewall on your Pi so that other computers can access the Jellyfin library (see the example after this list).

  5. On your personal computer install a Jellyfin Media Player.
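For the firewall step, opening Jellyfin's default HTTP port (8096) is usually enough. The exact command depends on which firewall your Pi's OS uses; with ufw, for example:

$ sudo ufw allow 8096/tcp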

Breaking away

Whenever someone finds an open source solution, an angel gets its wings. The irony is that I was pushed into finding a non-proprietary solution by one of the biggest closed source companies on the planet. What I love most about the system I've created is that I am in control of all aspects of it, good and bad.

Jellyfin fulfills everything on my media library wishlist, making it the ideal open source alternative to Apple Music and other proprietary software tools.

Image by:

WOCinTech Chat. Modified by Opensource.com. CC BY-SA 4.0

Raspberry Pi Audio and music Alternatives What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Package a new Python module in 4 steps

Thu, 06/30/2022 - 15:00
Package a new Python module in 4 steps Sumantro Mukherjee Thu, 06/30/2022 - 03:00 Register or Login to like Register or Login to like

When you install an application, you're usually installing a package that contains the executable code for an application and important files such as documentation, icons, and so on. On Linux, applications are commonly packaged as RPM or DEB files, and users install them with the dnf or apt commands, depending on the Linux distribution. However, new Python modules are released virtually every day, so you could easily encounter a module that hasn't yet been packaged. And that's exactly why the pyp2rpm command exists.

Recently, I tried to install a module called python-concentration. It didn't go well:

$ sudo dnf install python-concentration
Updating Subscription Management repositories.
Last metadata expiration check: 1:23:32 ago on Sat 11 Jun 2022 06:37:25.
No match for argument: python-concentration
Error: Unable to find a match: python-concentration

It’s a PyPi package, but it's not yet available as an RPM package. The good news is that you can build an RPM yourself with a relatively simple process using pyp2rpm.

You'll need two directories to get started:

$ mkdir rpmbuild
$ cd rpmbuild && mkdir SPECS

You'll also need to install pyp2rpm:

$ sudo dnf install pyp2rpm

More Python resources What is an IDE? Cheat sheet: Python 3.7 for beginners Top Python GUI frameworks Download: 7 essential PyPI libraries Red Hat Developers Latest Python articles

1. Generate the spec file

The foundation of any RPM package is a file called the spec file. This file contains all the information about how to build the package, which dependencies it needs, the version of the application it provides, what files it installs, and more. When pointed to a Python module, pyp2rpm generates a spec file for it, which you can use to build an RPM.

Using python-concentration as an arbitrary example, here's how to generate a spec file:

$ pyp2rpm concentration > ~/rpmbuild/SPECS/concentration.spec

And here's the file it generates:

# Created by pyp2rpm-3.3.8
%global pypi_name concentration
%global pypi_version 1.1.5

Name:           python-%{pypi_name}
Version:        %{pypi_version}
Release:        1%{?dist}
Summary:        Get work done when you need to, goof off when you don't

License:        None
URL:            None
Source0:        %{pypi_source}
BuildArch:      noarch

BuildRequires:  python3-devel
BuildRequires:  python3dist(setuptools)

%description
Concentration [![PyPI version]( [![Test Status]( [![Lint Status]( [![codecov](

%package -n     python3-%{pypi_name}
Summary:        %{summary}
%{?python_provide:%python_provide python3-%{pypi_name}}

Requires:       (python3dist(hug) >= 2.6.1 with python3dist(hug) < 3~~)
Requires:       python3dist(setuptools)
%description -n python3-%{pypi_name}
Concentration [![PyPI version]( [![Test Status]( [![Lint Status]( [![codecov](


%prep
%autosetup -n %{pypi_name}-%{pypi_version}

%build
%py3_build

%install
%py3_install

%files -n python3-%{pypi_name}
%license LICENSE
%doc README.md
%{_bindir}/concentration
%{python3_sitelib}/%{pypi_name}
%{python3_sitelib}/%{pypi_name}-%{pypi_version}-py%{python3_version}.egg-info

%changelog
*  - 1.1.5-1
- Initial package.

2. Run rpmlint

To ensure that the spec file is up to standards, run the rpmlint command on the file:

$ rpmlint ~/rpmbuild/SPECS/concentration.spec
error: bad date in %changelog: - 1.1.5-1
0 packages and 1 specfiles checked; 0 errors, 0 warnings.

It seems the changelog entry requires a date.

%changelog
* Sat Jun 11 2022 Tux <tux@example.com> - 1.1.5-1

Try rpmlint again:

$ rpmlint ~/rpmbuild/SPECS/concentration.spec
0 packages and 1 specfiles checked; 0 errors, 0 warnings.

Success!

3. Download the source code

To build an RPM package, you must download the code you're packaging up. The easy way to do this is to parse your spec file to find the source code's location on the Internet.

First, install the spectool command with dnf:

$ sudo dnf install spectool

Then use it to download the source code:

$ cd ~/rpmbuild
$ spectool -g -R SPECS/concentration.spec
Downloading: https://files.pythonhosted.org/...concentration-1.1.5.tar.gz
   6.0 KiB / 6.0 KiB    [=====================================]
Downloaded: concentration-1.1.5.tar.gz

This creates a SOURCES directory and places the source code archive into it.

4. Build the source package

Now you have a valid spec file, so it's time to build the source package with the rpmbuild command. If you don't have rpmbuild yet, install the rpm-build package with dnf (or accept your terminal's offer to install that package when you attempt to use the rpmbuild command).

$ cd ~/rpmbuild
$ rpmbuild -bs SPECS/concentration.spec

The -bs option stands for build source. This option gives you an src.rpm file, an all-purpose package that must be rebuilt for a specific architecture.

Build an installable RPM for your system:

$ rpmbuild --rebuild SRPMS/python-concentration-1.1.5-1.el9.src.rpm
error: Failed build dependencies:
        python3-devel is needed by python-concentration-1.1.5-1.el9.noarch

It looks like this package requires the development libraries of Python. Install them to continue with the build. This time the build succeeds and renders a lot more output (which I abbreviate here for clarity):

$ sudo dnf install python3-devel -y
$ rpmbuild --rebuild SRPMS/python-concentration-1.1.5-1.el9.src.rpm
[...]
Executing(--clean): /bin/sh -e /var/tmp/rpm-tmp.TYA7l2
+ umask 022
+ cd /home/bogus/rpmbuild/BUILD
+ rm -rf concentration-1.1.5
+ RPM_EC=0
++ jobs -p
+ exit 0

Your RPM package has been built in the RPMS subdirectory. Install it as usual with dnf:

$ sudo dnf install RPMS/noarch/python3-concentration*rpm

Why not just use PyPi?

It's not absolutely necessary to make a Python module into an RPM. Installing a module with PyPi is also acceptable, but PyPi adds another package manager to your personal list of things to check and update. When you install an RPM using dnf, you have a complete listing of what you've installed on your system. Thanks to pyp2rpm, the process is quick, easy, and automatable.
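Once you've installed the RPM you built, the module is tracked like any other package, so you can query it with the usual RPM tools:

$ rpm -qi python3-concentration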

The pyp2rpm command makes it possible to create an RPM package and automate the process.

Image by:

WOCinTech Chat. Modified by Opensource.com. CC BY-SA 4.0

Python What to read next How to install pip to manage PyPI packages easily This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

ABCs of FreeDOS: 26 commands I use all the time

Wed, 06/29/2022 - 15:00
ABCs of FreeDOS: 26 commands I use all the time Jim Hall Wed, 06/29/2022 - 03:00 Register or Login to like Register or Login to like

One of my family's first computers ran a command-line operating system called DOS, the "Disk Operating System." I grew up with DOS, and learned to leverage the command line to make my work easier. And so did a lot of other people. We loved DOS so much that in 1994, we created the FreeDOS Project. Today on June 29, we celebrate 28 years of FreeDOS.

If you're new to FreeDOS, you may be confused about how to use the different command line programs that come with it. Let's get started with 26 of my favorite FreeDOS commands. To learn more, add the /? option after most commands to get more information:

C:\>attrib /?
ATTRIB v2.1 - Displays or changes file attributes.
Copyright (c) 1998-2003, licensed under GPL2.

Syntax: ATTRIB { options | [path][file] | /@[list] }

Options:

  +H Sets the Hidden attribute.     -H  Clears the Hidden attribute.
  +S Sets the System attribute.     -S  Clears the System attribute.
  +R Sets the Read-only attribute.  -R  Clears the Read-only attribute.
  +A Sets the Archive attribute.    -A  Clears the Archive attribute.

  /S Process files in all directories in the specified path(es).
  /D Process directory names for arguments with wildcards.
  /@ Process files, listed in the specified file [or in stdin].

Examples:

  attrib file -rhs
  attrib +a -r dir1 dir2*.dat /s
  attrib -hs/sd /@list.txt *.*

A is for ATTRIB

The ATTRIB program displays or changes a file's attributes. An attribute can be one of four values: Hidden (H), System (S), Read-only (R), and Archive (A).

Files marked as Hidden don't display in a directory listing. For example, suppose you want to "hide" a file called SECRET.TXT so no one would know it was there. First, you can show the attributes on that file to see its current settings:

C:\FILES>attrib secret.txt
[----A] SECRET.TXT

To hide this file, turn on the Hidden attribute by using the plus (+) operator, like this:

C:\FILES>attrib +h secret.txt
[----A] -> [-H--A] SECRET.TXT
C:\FILES>dir
 Volume in drive C is FREEDOS2022
 Volume Serial Number is 333D-0B18

 Directory of C:\FILES

.                   <DIR>  05-27-2022  9:22p
..                  <DIR>  05-27-2022  9:22p
         0 file(s)              0 bytes
         2 dir(s)     279,560,192 bytes free

Another common way to use ATTRIB is by manipulating the Read-only attribute, so you don't accidentally overwrite an important file. Suppose you want to protect the SECRET.TXT file so you can't delete or change it. Turn on the Read-only attribute like this, with the +R modifier:

C:\FILES>attrib +r secret.txt
[----A] -> [---RA] SECRET.TXT
C:\FILES>del secret.txt
C:\FILES\SECRET.TXT: Permission denied
no file removed.

[ Related read: How I use the attr command with my Linux filesystem ]

More great content Free online course: RHEL technical overview Learn advanced Linux commands Download cheat sheets Find an open source alternative Explore open source resources

B is for BEEP

If you need to add a little pizzazz to a batch file, you can use the BEEP command to get the user's attention. BEEP doesn't display anything to the screen, but simply generates a classic “beep” tone.

Note that BEEP uses the PC's built-in speaker to make the “beep” sound. If you boot FreeDOS using a virtual machine, check that your system is set up to correctly emulate the PC speaker. Otherwise, you will not hear anything.
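For example, a batch file might use BEEP to signal that a long-running job has finished (a minimal sketch; the paths and file names are hypothetical):

@ECHO OFF
REM Copy files, then notify the user with a beep
COPY C:\FILES\*.* D:\BACKUP
ECHO All files copied!
BEEP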

C is for CD

Like Linux, FreeDOS supports directories, which allow you to organize your files in a way that makes sense to you. For example, you might keep all of your files in a directory called FILES, and you might have other directories inside FILES for certain kinds of files, such as DOCS for word processor files, or SPRDSHT for spreadsheet files.

You can navigate into a directory using the CD (change directory) command. The CHDIR command does the same thing as CD, if you prefer that syntax.

To change into a new directory, use the CD command with the destination directory:

C:\>cd files
C:\FILES>cd sprdsht
C:\FILES\SPRDSHT>dir
Volume in drive C is FREEDOS2022
Volume Serial Number is 333D-0B18
 
Directory of C:\FILES\SPRDSHT
 
. <DIR> 05-27-2022 9:59p
.. <DIR> 05-27-2022 9:59p
FIB WKS 2,093 05-27-2022 10:07p
LAB1 WKS 2,087 05-27-2022 10:10p
MIS100 WKS 2,232 05-27-2022 10:05p
3 file(s) 6,412 bytes
2 dir(s) 279,527,424 bytes free

You don't have to navigate one directory at a time. You can instead provide the full directory path you want to change to, with one CD command:

C:\>cd \files\sprdsht
C:\FILES\SPRDSHT>dir
Volume in drive C is FREEDOS2022
Volume Serial Number is 333D-0B18
 
Directory of C:\FILES\SPRDSHT
 
.  <DIR> 05-27-2022 9:59p
.. <DIR> 05-27-2022 9:59p
FIB WKS 2,093 05-27-2022 10:07p
LAB1 WKS 2,087 05-27-2022 10:10p
MIS100 WKS 2,232 05-27-2022 10:05p
3 file(s) 6,412 bytes
2 dir(s) 279,527,424 bytes free

D is for DELTREE

If you need to delete a single file, you can use the DEL command. To remove an empty directory, you can use the RMDIR or RD commands. But what if you want to delete a directory that has lots of files and subdirectories inside it?

A directory with other directories inside it is called a directory tree. You can delete an entire directory tree with the DELTREE command. For example, to delete your FILES directory, including all the files and directories it contains, type this command:

C:\>deltree files

    [DEFAULT-BUILD v1.02g] of DELTREE.  The "ROOT-SAFETY-CHECK" is enabled.

Delete directory "C:\FILES"
and all its subdirectories?

[Y] [N] [Q], [ENTER] ?  Y

==> Deleting "C:\FILES" ...

You can easily and quickly wipe out a lot of work with a single DELTREE command, so the FreeDOS DELTREE prompts you to ask if this is really something you want to do. Use this command carefully.

E is for EDIT

If you need to edit a text file on FreeDOS, the EDIT program lets you do that quickly and easily. For example, to start editing a file called HELLO.TXT, type EDIT HELLO.TXT. If the HELLO.TXT file already exists, EDIT opens the file for editing. If HELLO.TXT doesn't exist yet, then EDIT starts a new file for you.

Image by:

(Jim Hall, CC BY-SA 4.0)

FreeDOS EDIT uses a friendly interface that should be easy for most people to use. Use the menus to access the various features of EDIT, including saving a file, opening a new file, or exiting the editor. To access the menus, tap the Alt key on your keyboard, then use the arrow keys to get around and Enter to select an action.

Image by:

(Jim Hall, CC BY-SA 4.0)

F is for FIND

If you need to find text in a file, the FIND command does the job. Similar to fgrep on Linux, FIND prints lines that contain a string. For example, to check the MENUDEFAULT entry in the FDCONFIG.SYS file, use FIND like this:

C:\>find "MENUDEFAULT" fdconfig.sys

---------------- FDCONFIG.SYS
MENUDEFAULT=2,5

If you aren't sure if the string you want to find uses uppercase or lowercase letters, add the /I option to ignore the letter case:

C:\>find /i "menudefault" fdconfig.sys
---------------- FDCONFIG.SYS
MENUDEFAULT=2,5

[ Download the cheat sheet: Linux find command ]

G is for GRAPHICS

If you want to capture a screen, you might use the PrtScr (Print Screen) key on your keyboard to print the text on your monitor directly to a printer. However, this only works for plain text. If you want to print graphic screens, you need to load the GRAPHICS program.

GRAPHICS supports different printer types, including HP PCL printers, Epson dot matrix printers, and PostScript-compatible printers. For example, if you have an HP laser printer connected to your computer, you can load support for that printer by typing this command:

C:\>graphics hpdefault
Running in MS GRAPHICS compatibility mode...
Using HPPCL type for type hpdefault
  If you think this is not correct, mail me (see help text).
Printing black as white and white as black
which internally uses /I of this GRAPHICS.
You can use the following command directly instead of
GRAPHICS [your options] in the future:
LH GRAPH-HP /I
Note that GRAPH-HP allows extra options:
  /E economy mode, /1 use LPT1, /2 use LPT2, /3 use LPT3,
  /R for random instead of ordered dither
  /C for 300dpi instead of 600dpi
Driver to make 'shift PrtScr' key work
even in CGA, EGA, VGA, MCGA graphics
modes loaded, in HP PCL mode.

H is for HELP

If you're new to FreeDOS, you can get hints on how to use the different commands by typing HELP. This brings up the FreeDOS Help system, with documentation on all the commands:

Image by:

(Jim Hall, CC BY-SA 4.0)

[ Read also: The only Linux command you need to know ]

I is for IF

You can add conditional statements to your command line or batch file using the IF statement. IF makes a simple test, then executes a single command. For example, to print the result "It's there" if a certain file exists, you can type:

C:\>if exist kernel.sys echo It's there
It's there

If you want to test the opposite, use the NOT keyword before the test. For example, to print "Not equal" if two strings are not the same value, type this:

C:\>if not "a"=="b" echo Not equal
Not equal

J is for JOIN

Early DOS versions were fairly simple; the first version of DOS didn't even support directories. To provide backwards compatibility for older programs that don't understand directories, we have the JOIN program as a neat workaround. JOIN replaces a path with a drive letter, so you can put an old program in its own subdirectory, but access it using a single drive letter.

Let's say you had an old application called VC that doesn't understand directories. To keep working with VC, you can "join" its path to a drive letter. For example:

JOIN V: D:\VC

FreeDOS implements JOIN as SWSUBST, which also combines features from the similar SUBST command. To join the D:\VC path to a new drive letter called V:, type:

C:\>swsubst v: d:\vc
C:\>dir v:
Volume in drive V is DATA
Volume Serial Number is 212C-1DF8

Directory of V:\

. <DIR> 02-21-2022 10:35p
.. <DIR> 02-21-2022 10:35p
VC COM 27,520 07-14-2019 4:48p

1 file(s) 27,520 bytes
2 dir(s) 48,306,176 bytes free

K is for KEYB

DOS assumes a US keyboard layout by default. If your keyboard is different, you can use the KEYB command to load a new keyboard language layout. For example, to load a German keyboard layout, type:

C:\>keyb gr
FreeDOS KEYB 2.01 - (c) Aitor Santamaría Merino - GNU GPL 2.0
Keyboard layout : C:\FREEDOS\BIN\KEYBOARD.SYS:GR [858] (3)

L is for LABEL

FreeDOS names each floppy drive and hard drive with a label. These labels provide a handy way to identify what a disk might contain. The LABEL command was immensely useful when you needed to store files across a number of different floppy disks, where you might label one floppy "Data" and another "Games."

To assign a new label to a drive, or to change the existing label on a drive, use LABEL like this:

D:\>label d: data
D:\>dir /w
Volume in drive D is DATA
Volume Serial Number is 212C-1DF8

Directory of D:\

[123] [ABILITY] [ASEASY] [GAMES2] [QUATTRO]
[SRC] [TEMP] [THE] [VC] [WORD]
[WS400] EDLIN16.EXE EDLIN32.EXE MYENV.BAT
3 file(s) 113,910 bytes
11 dir(s) 48,306,176 bytes free

M is for MEM

Running programs and loading drivers takes memory. To see how much memory your system has, and how much memory is free for running DOS programs, use the MEM command:

C:\>mem

Memory Type Total Used Free
---------------- -------- -------- --------
Conventional 639K 11K 628K
Upper 104K 18K 86K
Reserved 281K 281K 0K
Extended (XMS) 15,224K 537K 14,687K
---------------- -------- -------- --------
Total memory 16,248K 847K 15,401K
 
Total under 1 MB 743K 29K 714K
 
Total Expanded (EMS) 8,576K (8,781,824 bytes)
Free Expanded (EMS) 8,192K (8,388,608 bytes)
 
Largest executable program size 628K (643,104 bytes)
Largest free upper memory block 84K ( 85,728 bytes)
FreeDOS is resident in the high memory area.

N is for NANSI

If you want to add a little color to the FreeDOS command line, you can use ANSI escape sequences. These sequences are so named because each starts with the Esc character (ASCII 27) followed by a special sequence of characters, as defined by the American National Standards Institute, or ANSI.

FreeDOS supports ANSI escape sequences through the NANSI.SYS driver. With NANSI loaded, your FreeDOS console interprets the ANSI escape sequences, such as setting the text colors.
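
For example, assuming NANSI.SYS is loaded (typically with a DEVICE= line in FDCONFIG.SYS), one easy way to send an escape sequence is through the PROMPT command, because the $e code produces the Esc character. This sketch sets bright yellow text on a blue background, then displays the usual path prompt:

C:\>prompt $e[1;33;44m$p$g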

Image by:

(Jim Hall, CC BY-SA 4.0)

O is for oZone

FreeDOS is a command line operating system, but some folks prefer to use a graphical user interface instead. That's why FreeDOS 1.3 includes several graphical desktops. One desktop I like is called oZone, which provides a sleek, modern-looking interface.

Image by:

(Jim Hall, CC BY-SA 4.0)

Note that oZone has a few annoying bugs, and could use some love from a developer out there. If you're interested in making oZone even better, feel free to download the source code.

P is for PROMPT

The standard FreeDOS command-line prompt tells you where you are in the filesystem. When you first boot FreeDOS, your prompt looks like C:\>, which means the "\" (root) directory on the "C:" drive. The ">" character indicates where you can type a command.

If you prefer different information on your prompt, use the PROMPT command to change it. You can represent various values with a special code preceded by $, such as $D for the date and $T for the time. For example, you can make your FreeDOS command line look like a Linux prompt with the $$ instruction, which prints a single dollar sign:

C:\>prompt $$
$

Type PROMPT /? to see a list of all special codes.
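
For instance, here is a sketch of a prompt that shows the date, then the current drive and path, followed by a greater-than sign ($p and $g are the standard codes for the path and the > character):

C:\>prompt $d $p$g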

Q is for QBASIC

FreeDOS doesn't actually have QBASIC. That was a proprietary BASIC programming environment for MS-DOS. Instead, we provide several open source compilers, including some for BASIC programming.

The FreeBASIC Compiler should compile most QBASIC programs out there. Here's a simple "guess the number" example:

dim number as integer
dim guess as integer
randomize timer
number = int( 10 * rnd() ) + 1
print "Guess the number from 1 to 10:"
do
  input guess
  if guess < number then print "Too low"
  if guess > number then print "Too high"
loop while guess <> number
print "That's right!"

Use the FBC command to compile the program with FreeBASIC:

C:\DEVEL\FBC>fbc guess.bas

Here's a quick demonstration of that simple game:

C:\DEVEL\FBC>guess
Guess the number from 1 to 10:
? 5
Too high
? 3
Too low
? 4
That's right!

[ Read next: Learn Fortran by writing a "guess the number" game ]

R is for REM

Comments are great when writing programs; they help us understand what the program is supposed to do. You can do the same in batch files using the REM command. Anything after REM on a line is ignored in a batch file.

REM this is a comment

S is for SET

The FreeDOS command line uses a set of variables called environment variables that let you customize your system. You can set these variables with the SET command. For example, use the DIRCMD variable to control how DIR arranges directory listings. To set the DIRCMD variable, use the SET command:

SET DIRCMD=/O:GNE

This tells DIR to order (O) the output by grouping (G) directories first, then sorting the results by name (N) and extension (E).
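
Running SET with no arguments lists the environment variables currently defined. The exact output depends on your system; it looks something like this sketch (the values here are only examples):

C:\>set
PATH=C:\FREEDOS\BIN
DIRCMD=/O:GNE
TEMP=C:\TEMP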

T is for TYPE

The TYPE command is one of the most-often used DOS commands. TYPE displays the contents of a file, similar to cat on Linux.

C:\DEVEL>type hello.c
#include <stdio.h>

int
main()
{
puts("Hello world");
return 0;
}

U is for UNZIP

On Linux, you may be familiar with the standard Unix archive command: tar. There's a version of tar on FreeDOS too (and a bunch of other popular archive programs), but the de facto standard archive tools on DOS are ZIP and UNZIP. Both are installed in FreeDOS 1.3 by default.

Let's say I had a zip archive of some files. If I want to extract the entire Zip file, I could just use the UNZIP command and provide the Zip file as a command-line option. That extracts the archive starting at my current working directory. Unless I'm restoring a previous version of something, I usually don't want to overwrite my current files. In that case, I will want to extract the archive to a new directory. You can specify the destination path with the -d ("destination") command-line option:

D:\SRC>unzip monkeys.zip -d monkeys.new
Warning: TZ environment variable not found, cannot use UTC times!!
Archive: monkeys.zip
creating: monkeys.new/monkeys/
inflating: monkeys.new/monkeys/banana.c
inflating: monkeys.new/monkeys/banana.obj
inflating: monkeys.new/monkeys/banana.exe
creating: monkeys.new/monkeys/putimg/
inflating: monkeys.new/monkeys/putimg/putimg.c
inflating: monkeys.new/monkeys/putimg/putimg.obj
inflating: monkeys.new/monkeys/putimg/putimg.exe

To learn more about the ZIP and UNZIP commands, read How to archive files on FreeDOS.

V is for VER

In the old days of DOS, the VER command reported the DOS distribution you were running, such as "MS-DOS 5.0." With FreeDOS, the VER command gives you additional details, such as the version of the FreeDOS Shell:

C:\DEVEL>ver
FreeCom version 0.85a - WATCOMC - XMS_Swap [Jul 10 2021 19:28:06]

If you also want to see the FreeDOS kernel version and the DOS compatibility level, add the /R option:

C:\DEVEL>ver /r

FreeCom version 0.85a - WATCOMC - XMS_Swap [Jul 10 2021 19:28:06]

DOS version 7.10
FreeDOS kernel 2043 (build 2043 OEM:0xfd) [compiled May 14 2021]

W is for WHICH

The FreeDOS command line can run programs from a list of different directories, identified in a PATH variable. You can use the WHICH command to identify exactly where a program is located. Just type WHICH plus the name of the program you want to locate:

C:\>which xcopy
xcopy   C:\FREEDOS\BIN\XCOPY.EXE

X is for XCOPY

The COPY command copies only files from one place to another. If you want to extend the copy to include any directories, use the XCOPY command instead. I usually add the /E option to include all subdirectories, including empty ones, so I can copy the entire directory tree. This makes an effective backup of any project I am working on:

D:\SRC>xcopy /e monkeys monkeys.bak
Does MONKEYS.BAK specify a file name
or directory name on the target (File/Directory)? d
Copying D:\SRC\MONKEYS\PUTIMG\PUTIMG.C
Copying D:\SRC\MONKEYS\PUTIMG\PUTIMG.OBJ
Copying D:\SRC\MONKEYS\PUTIMG\PUTIMG.EXE
Copying D:\SRC\MONKEYS\BANANA.C
Copying D:\SRC\MONKEYS\BANANA.OBJ
Copying D:\SRC\MONKEYS\BANANA.EXE
6 file(s) copied

Y is for Yellow

This isn't a command, but interesting trivia about how DOS displays colors. If you've looked carefully at FreeDOS, you've probably noticed that text only comes in a limited range of colors—sixteen text colors, and eight background colors.

The IBM 5153 color display presented color to the user by lighting up tiny red, green, and blue phosphor dots at different brightness levels to create a palette of 16 text colors and 8 background colors. Early PCs could only display the background color in the "normal intensity" level; only text colors could use bright colors.

If you look at the text colors, you have black, blue, green, cyan, red, magenta, orange, and white. The "bright" versions of these colors are bright black (a dull gray), bright blue, bright green, bright cyan, bright red, bright magenta, yellow, and bright white. The "bright" version of orange is actually yellow. There is no "bright orange."

If you want to learn more about text colors, read our article about Why FreeDOS has 16 colors.

Z is for ZIP

You can use ZIP at the DOS command line to create archives of files and directories. This is a handy way to make a backup copy of your work or to release a "package" to use in a future FreeDOS distribution. For example, let's say I wanted to make a backup of my project source code, which contains these source files:

D:\SRC>zip -9r monkeys.zip monkeys
zip warning: TZ environment variable not found, cannot use UTC times!!
adding: monkeys/ (stored 0%)
adding: monkeys/banana.c (deflated 66%)
adding: monkeys/banana.obj (deflated 26%)
adding: monkeys/banana.exe (deflated 34%)
adding: monkeys/putimg/ (stored 0%)
adding: monkeys/putimg/putimg.c (deflated 62%)
adding: monkeys/putimg/putimg.obj (deflated 29%)
adding: monkeys/putimg/putimg.exe (deflated 34%)

ZIP sports a ton of command-line options to do different things, but the options I use most are -r to process directories and subdirectories recursively, and -9 to provide the maximum compression possible. ZIP and UNZIP use a Unix-like command line, so you can combine options behind the dash: -9r gives maximum compression and includes subdirectories in the Zip file.

For more details about how to use the ZIP and UNZIP commands, read How to archive files on FreeDOS.

New FreeDOS guides

Ready for the next step in your FreeDOS journey? Check out our new eBooks and start experimenting with FreeDOS now!

A guide to using FreeDOS

An advanced guide to FreeDOS internals

On its 28th anniversary, I'm pleased to share my top 26 favorite FreeDOS commands.

Image by:

Jim Hall, CC BY-SA 4.0.

FreeDOS What to read next How a college student founded a free and open source operating system This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Linux su vs sudo: what's the difference?

Tue, 06/28/2022 - 15:00
Linux su vs sudo: what's the difference? David Both Tue, 06/28/2022 - 03:00 1 reader likes this 1 reader likes this

Both the su and the sudo commands allow users to perform system administration tasks that are not permitted for non-privileged users—that is, everyone but the root user. Some people prefer the sudo command: For example, Seth Kenlon recently published "5 reasons to use sudo on Linux", in which he extols its many virtues.

I, on the other hand, am partial to the su command and prefer it to sudo for most of the system administration work I do. In this article, I compare the two commands and explain why I prefer su over sudo but still use both.

Historical perspective of sysadmins

The su and sudo commands were designed for a different world. Early Unix computers required full-time system administrators, and they used the root account as their only administrative account. In this ancient world, the person entrusted with the root password would log in as root on a teletype machine or CRT terminal such as the DEC VT100, then perform the administrative tasks necessary to manage the Unix computer.

The root user would also have a non-root account for non-root activities such as writing documents and managing their personal email. There were usually many non-root user accounts on those computers, and none of those users needed total root access. A user might need to run one or two commands as root, but very infrequently. Many sysadmins log in as root to work as root and log out of their root sessions when finished. Some days require staying logged in as root all day long. Most sysadmins rarely use sudo because it requires typing more than necessary to run essential commands.

These tools both provide escalated privileges, but the way they do so is significantly different. This difference is due to the distinct use cases for which they were originally intended.

sudo

The original intent of sudo was to enable the root user to delegate to one or two non-root users access to one or two specific privileged commands they need regularly. The sudo command gives non-root users temporary access to the elevated privileges needed to perform tasks such as adding and deleting users, deleting files that belong to other users, installing new software, and generally any task required to administer a modern Linux host.

Allowing the users access to a frequently used command or two that requires elevated privileges saves the sysadmin a lot of requests from users and eliminates the wait time. The sudo command does not switch the user account to become root; most non-root users should never have full root access. In most cases, sudo lets a user issue one or two commands then allows the privilege escalation to expire. During this brief time interval, usually configured to be 5 minutes, the user may perform any necessary administrative tasks that require elevated privileges. Users who need to continue working with elevated privileges but are not ready to issue another task-related command can run the sudo -v command to revalidate the credentials and extend the time for another 5 minutes.

Using the sudo command does have the side effect of generating log entries of commands used by non-root users, along with their IDs. The logs can facilitate a problem-related postmortem to determine when users need more training. (You thought I was going to say something like "assign blame," didn't you?)

su

The su command is intended to allow a non-root user to elevate their privilege level to that of root—in fact, the non-root user becomes the root user. The only requirement is that the user know the root password. There are no limits on this because the user is now logged in as root.

No time limit is placed on the privilege escalation provided by the su command. The user can work as root for as long as necessary without needing to re-authenticate. When finished, the user can issue the exit command to revert from root back to their own non-root account.

Controversy and change

There has been some recent disagreement about the uses of su versus sudo.

Real [Sysadmins] don't use sudo. —Paul Venezia

Venezia contends in his InfoWorld article that sudo is used as an unnecessary prop for many people who act as sysadmins. He does not spend much time defending or explaining this position; he just states it as a fact. And I agree with him—for sysadmins. We don't need the training wheels to do our jobs. In fact, they get in the way.

However,

The times they are a-changin'. —Bob Dylan

Dylan was correct, although he was not singing about computers. The way computers are administered has changed significantly since the advent of the one-person, one-computer era. In many environments, the user of a computer is also its administrator. This makes it necessary to provide some access to the powers of root for those users.

Some modern distributions, such as Ubuntu and its derivatives, are configured to use the sudo command exclusively for privileged tasks. In those distros, it is impossible to log in directly as the root user or even to su to root, so the sudo command is required to allow non-root users any access to root privileges. In this environment, all system administrative tasks are performed using sudo.

This configuration works by locking the root account and adding the regular user account(s) to a privileged administrators group (the sudo group on Ubuntu, analogous to the wheel group on Fedora and its relatives). This configuration can be circumvented easily. Try a little experiment on any Ubuntu host or VM. Let me stipulate the setup here so you can reproduce it if you wish: I installed Ubuntu 16.04 LTS in a VM using VirtualBox, and during the installation I created a non-root user, student, with a simple password for this experiment.

Log in as the user student and open a terminal session. Look at the entry for root in the /etc/shadow file, where the encrypted passwords are stored.

student@ubuntu1:~$ cat /etc/shadow
cat: /etc/shadow: Permission denied

Permission is denied, so we cannot look at the /etc/shadow file. This is common to all distributions to prevent non-privileged users from seeing and accessing the encrypted passwords, which would make it possible to use common hacking tools to crack those passwords.

Now let's try to su - to root.

student@ubuntu1:~$ su -
Password: <Enter root password – but there isn't one>
su: Authentication failure

This fails because the root account has no password and is locked out. Use the sudo command to look at the /etc/shadow file.

student@ubuntu1:~$ sudo cat /etc/shadow
[sudo] password for student: <enter the student password>
root:!:17595:0:99999:7:::
<snip>
student:$6$tUB/y2dt$A5ML1UEdcL4tsGMiq3KOwfMkbtk3WecMroKN/:17597:0:99999:7:::
<snip>

I have truncated the results to show only the entry for the root and student users. I have also shortened the encrypted password so the entry will fit on a single line. The fields are separated by colons (:) and the second field is the password. Notice that the password field for root is a bang, known to the rest of the world as an exclamation point (!). This indicates that the account is locked and that it cannot be used.

Now all you need to do to use the root account as a proper sysadmin is to set up a password for the root account.

student@ubuntu1:~$ sudo su -
[sudo] password for student: <Enter password for student>
root@ubuntu1:~# passwd root
Enter new UNIX password: <Enter new root password>
Retype new UNIX password: <Re-enter new root password>
passwd: password updated successfully
root@ubuntu1:~#

Now you can log in directly on a console as root or su directly to root instead of using sudo for each command. Of course, you could just use sudo su - every time you want to log in as root, but why bother?

Please do not misunderstand me. Distributions like Ubuntu and their up- and downstream relatives are perfectly fine, and I have used several of them over the years. When using Ubuntu and related distros, one of the first things I do is set a root password so that I can log in directly as root. Other distributions, like Fedora and its relatives, now provide some interesting choices during installation. The first Fedora release where I noticed this was Fedora 34, which I have installed many times while writing an upcoming book.

One of those installation options can be found on the page to set the root password. The new option allows the user to choose "Lock root account" in the way an Ubuntu root account is locked. There is also an option on this page that allows remote SSH login to this host as root using a password, but that only works when the root account is unlocked. The second option is on the page that allows the creation of a non-root user account. One of the options on this page is "Make this user administrator." When this option is checked, the user ID is added to a special group called the wheel group, which authorizes members of that group to use the sudo command. Fedora 36 even mentions the wheel group in the description of that checkbox.

More than one non-root user can be set as an administrator. Anyone designated as an administrator using this method can use the sudo command to perform all administrative tasks on a Linux computer. The installer only allows the creation of one non-root user, so other new users can be added to the wheel group when they are created later. Existing users can be added to the wheel group by the root user or another administrator, either by editing the group file directly with a text editor or by using the usermod command.
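
For example (the user name student here is only illustrative), a root or administrator session can append an existing account to the wheel group like this:

# usermod -aG wheel student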

In most cases, today's administrators need to do only a few essential tasks, such as adding a new printer, installing updates or new software, or deleting software that is no longer needed. The GUI tools for these tasks require a root or administrative password and will accept the password from a user designated as an administrator.

How I use su and sudo on Linux

I use both su and sudo. They each have an important place in my sysadmin toolbox.

I can't lock the root account because I need to use it to run my Ansible playbooks and the rsbu Bash program I wrote to perform backups. Both of these need to be run as root, and so do several other administrative Bash scripts I have written. I use the su command to switch users to the root user so I can perform these and many other common tasks. Elevating my privileges to root using su is especially helpful when performing problem determination and resolution. I really don't want a sudo session timing out on me while I am in the middle of my thought process.

I use the sudo command for tasks that need root privilege when a non-root user needs to perform them. I set the non-root account up in the sudoers file with access to only those one or two commands needed to complete the tasks. I also use sudo myself when I need to run only one or two quick commands with escalated privileges.
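
As a rough sketch of what such a sudoers entry might look like (edit the file with visudo; the user name and command path here are only examples), this line lets one account run a single package-update command as root and nothing else:

student ALL=(root) /usr/bin/dnf update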

Conclusions

The tools you use don't matter nearly as much as getting the job done. What difference does it make if you use vim or Emacs, systemd or SystemV, RPM or DEB, sudo or su? The bottom line here is that you should use the tools with which you are most comfortable and that work best for you. One of the greatest strengths of Linux and open source is that there are usually many options available for each task we need to accomplish.

Both su and sudo have strengths, and both can be secure when applied properly for their intended use cases. I choose to use both su and sudo mostly in their historical roles because that works for me. I prefer su for most of my own work because it works best for me and my workflow.

Share how you prefer to work in the comments!

This article is taken from Chapter 19 of my book The Linux Philosophy for Sysadmins (Apress, 2018) and is republished with permission.

A comparison of Linux commands for escalating privileges for non-root users.

Image by:

Opensource.com

Linux Sysadmin What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Why organizations need site reliability engineers

Tue, 06/28/2022 - 15:00
Why organizations need site reliability engineers Robert Kimani Tue, 06/28/2022 - 03:00 Register or Login to like Register or Login to like

In this final article of my series about best practices for effective site reliability engineering (SRE), I cover some practical applications of site reliability engineering.

There are some significant differences between software engineering and systems engineering.

Software engineering
  • Focuses on software development and engineering only.
  • Involves writing code to create useful functionality.
  • Time is spent on developing repeatable and reusable software that can be easily extended.
  • Has problem-solving orientation.
  • Software engineering aids the SRE.
Systems engineering
  • Focuses on the whole system including software, hardware and any associated technologies.
  • Time is spent on building, analyzing, and managing solutions.
  • Deals with defining characteristics of a system and feeds requirements to software engineering.
  • Has systems-thinking orientation.
  • Systems engineering enables SRE.

The site reliability engineer (SRE) utilizes both software engineering and systems engineering skills, and in so doing adds value to an organization.

Because the SRE team runs production systems, an SRE is well placed to produce the most impactful tools for managing and automating manual processes. Software can be built faster when an SRE is involved, because most of the time the SRE creates software for their own use. And because most of an SRE's tasks are automated, which entails a lot of coding, the role brings a healthy mix of development and operations, which is great for site reliability.

Finally, an SRE enables an organization to scale rapidly and automatically, whether it's scaling up or scaling down.

SRE and DevSecOps

An SRE helps build effective end-to-end monitoring systems by utilizing logs, metrics, and traces. An SRE also enables fast, effective, and reliable rollbacks and automatically scales infrastructure up or down as needed. These capabilities are especially valuable during a security breach.

With the advent of cloud and container-based architectures, data processing pipelines have become a prominent component in IT architectures. An SRE helps configure the most restrictive access to data processing pipelines.

[ Download now: A guide to implementing DevSecOps ]

Finally, an SRE helps develop tools and procedures to handle incidents. While most of these incidents focus on IT operations and reliability, the same tools and procedures can easily be extended to security. For example, DevSecOps deals with integrating development, security, and operations, with a heavy emphasis on automation. It's a field where development, security, and operations teams work together to support and maintain an organization's applications and infrastructure.

Designing SRE and pre-production computing environments

A pre-production or non-production environment is an environment used by an SRE to develop, deploy, and test.

The non-production environment is the testing ground for automation. But it's not just application code that requires a non-production environment. Any associated automated processes, primarily the ones that an SRE develops, require a pre-production environment. Most organizations have more than one pre-production environment. By resembling production as much as possible, a pre-production environment improves confidence in releases, so at least one of your non-production environments should resemble the production environment. In many cases it's not possible to replicate production data, but you should try your best to make the non-production environments match production as closely as possible.

Pre-production computing and the SRE

An SRE helps spin up identical application-serving environments by using automation and specialized tools. This is essential: you can quickly spin up a non-production environment in a matter of seconds using scripts and tools developed by SREs.

A smart SRE treats configuration as code to ensure fast implementation of testing and deployment. Through the use of automated CI/CD pipelines, application releases and hot fixes can be made seamlessly.

Finally, by developing effective monitoring solutions, an SRE helps to ensure the reliability of a pre-production computing environment.

One of the closely related fields to pre-production computing is inner loop development.

Executing on inner loop development

Picture two loops, an inner loop and an outer loop, forming the DevOps loop. In the inner loop, you code, build, run, and debug. This cycle mostly happens in a developer's workstation or some other non-production environment.

Once the code is ready, it is moved to the outer loop, where the process starts with code review, build, deploy, integration tests, security and compliance, and finally pre-production release.

Many of the processes in the outer loop and inner loop are automated by the SRE.

Image by:

(Robert Kimani, CC BY-SA 4.0)

SRE and inner loop development

The SRE speeds up inner loop development by providing tools for fast, iterative, containerized deployment. Many of the tools an SRE develops revolve around container automation and orchestration, using tools such as Podman, Docker, and Kubernetes, or platforms like OpenShift.
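
As a tiny illustration of what such tooling might wrap (the image name and port are made up), a single Podman command can give a developer a disposable test instance in seconds:

$ podman run --rm -d -p 8080:8080 registry.example.com/myapp:dev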

An SRE also develops tools to help debug crashes, such as Java heap dump and thread dump analyzers.

Overall value of SRE

By utilizing both systems engineering and software engineering, an SRE organization delivers impactful solutions. An SRE helps to implement DevSecOps where development, security, and operations intersect with a primary focus on automation.

SRE principles help maximize the value of pre-production environments by utilizing tools and processes that the SRE organization delivers, so teams can easily spin up a non-production environment in a matter of seconds. An SRE organization also enables efficient inner loop development by developing and providing the necessary tools. In summary, an effective SRE practice delivers:

  • Improved end user experience: It's all about ensuring that the users of the applications and services get the best experience possible. This includes uptime: applications and services should be up, running, and healthy at all times.
  • Minimizes or eliminates outages: It's better for users and developers alike.
  • Automation: As the saying goes, you should always be trying to automate yourself out of the job that you are currently performing manually.
  • Scale: In the age of cloud-native applications and containerized services, massive automated scalability is critical for an SRE to scale up or down in a safe and fast manner.
  • Integrated: The principles and processes that the SRE organization embraces can be, and in many cases should be, extended to other parts of the organization, as with DevSecOps.

The SRE is a valuable component in an efficient organization. As demonstrated over the course of this series, the benefits of SRE affect many departments and processes.

Further reading

Below are some GitHub links to a few of my favorite SRE resources:

SRE is a valuable component in an efficient organization for software engineering, systems engineering, implementing DevSecOps, and more.

Image by:

Opensource.com

DevOps Careers What to read next What you need to know about site reliability engineering How SREs can achieve effective incident response A site reliability engineer's guide to change management How to use a circuit breaker pattern for site reliability engineering What is distributed consensus for site reliability engineering? This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.
