
BASIC vs. FORTRAN 77: Comparing programming blasts from the past

Wed, 04/05/2023 - 15:00
By Jim Hall

If you grew up with computers in the 1970s and 1980s, as I did, you probably learned a common programming language for personal computers called BASIC, or the Beginner's All-purpose Symbolic Instruction Code. You could find BASIC implementations on every personal computer of the era, including the TRS-80, Apple II, and the IBM PC. Back then, I was a self-taught BASIC programmer, experimenting with AppleSoft BASIC on the Apple II before moving to GW-BASIC on the IBM PC and, later, to QuickBASIC on DOS.

But once upon a time, a popular language for scientific programming was FORTRAN, short for FORmula TRANslation. Since the 1990 specification of the language, the name has more commonly been stylized as "Fortran."

When I studied physics as a university undergraduate student in the early 1990s, I leveraged my experience in BASIC to learn FORTRAN 77. That was when I realized that BASIC derived many of its concepts from FORTRAN. To be clear, FORTRAN and BASIC differ in lots of other ways, but I found that knowing a little BASIC helped me to learn FORTRAN programming quickly.

I want to show some similarities between the two languages by writing the same program in both. I'll explore the FOR loop in BASIC and FORTRAN 77 by writing a sample program to add a list of numbers from 1 to 10.

Bywater BASIC

BASIC came in many flavors, depending on your computer, but the overall language remained the same. One version of BASIC that I like is Bywater BASIC, an open source implementation of BASIC available for different platforms, including Linux and DOS.

To use Bywater BASIC on FreeDOS, you must first install the package from the FreeDOS 1.3 Bonus CD. To run it, go into the C: directory and type bwbasic. This command starts the BASIC interpreter. You can enter your program from this prompt:


Bywater BASIC uses an older BASIC programming standard that requires you to write every program instruction with a line number. Think of a line number like an index. You can easily refer to any instruction in the program with line numbers. As you type the program into the Bywater BASIC interpreter, add the line number before each instruction:

bwBASIC: 10 print "Add the numbers from 1 to 10 ..."
bwBASIC: 20 sum = 0
bwBASIC: 30 for i = 1 to 10
bwBASIC: 40 sum = sum + i
bwBASIC: 50 next i
bwBASIC: 60 print sum
bwBASIC: 70 end

Use the list command to view the program you have entered into the interpreter:

bwBASIC: list
10 print "Add the numbers from 1 to 10 ..."
20 sum = 0
30 for i = 1 to 10
40 sum = sum + i
50 next i
60 print sum
70 end

This short program demonstrates the FOR loop in BASIC. FOR is the most fundamental loop construct in any programming language, allowing you to iterate over a set of values. The general syntax of the FOR loop in Bywater BASIC looks like this:

FOR var = start TO end

In this example program, the instruction for i = 1 to 10 starts a loop that iterates through the values 1 to 10. At each pass through the loop, the variable i is set to the new value.

In BASIC, all instructions between FOR and the matching NEXT instruction are executed as part of the FOR loop. Because you can put one FOR loop inside another, Bywater BASIC uses the syntax NEXT variable to specify which loop variable to iterate.

Type run at the prompt to execute the program:

bwBASIC: run
Add the numbers from 1 to 10 ...
55

Bywater BASIC is called a BASIC interpreter because you can only run the program from inside the Bywater BASIC environment. The interpreter does all the hard work of interacting with the operating system, so your program doesn't need to do that on its own. The trade-off is that an interpreted program runs a little slower than a compiled one.
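For comparison outside BASIC, the same loop translates almost line for line into Python (just an illustration; Python isn't part of this article's setup):

```python
# Sum the numbers from 1 to 10, mirroring the BASIC FOR loop above.
print("Add the numbers from 1 to 10 ...")
total = 0
for i in range(1, 11):    # like: for i = 1 to 10
    total = total + i     # like: sum = sum + i
print(total)              # like: print sum
```

Like the BASIC version, this prints 55.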


FreeBASIC

Another popular implementation of BASIC is FreeBASIC, an open source BASIC compiler for several platforms, including Linux and DOS. To use FreeBASIC, you'll need to install the FreeBASIC package from the FreeDOS 1.3 Bonus CD, then change into the C: directory where you'll find the FreeBASIC programs.

FreeBASIC is a compiler, so you first create a source file with your program instructions, then run the compiler with the source code to create a program you can run. I wrote a similar version of the "add the numbers from 1 to 10" program as this BASIC file, which I saved as sum.bas:

dim sum as integer
dim i as integer
print "Add the numbers from 1 to 10 ..."
sum = 0
for i = 1 to 10
sum = sum + i
next
print sum
end

If you compare this code to the Bywater BASIC version of the program, you may notice that FreeBASIC doesn't require line numbers. FreeBASIC implements a more modern version of BASIC that makes it easier to write programs without keeping track of line numbers.

Another key difference is that you must define or declare your variables in your source code. Use the DIM instruction to declare a variable in FreeBASIC, such as dim sum as integer, to define an integer variable called sum.

Now you can compile the BASIC program using fbc on the command line:

C:\DEVEL\FBC> fbc sum.bas

If your code doesn't have any errors in it, the compiler generates a program that you can run. For example, my program is now called sum. Running my program adds up the numbers from 1 to 10:

C:\DEVEL\FBC> sum
Add the numbers from 1 to 10 ...
55

FORTRAN 77

The FORTRAN programming language is like a hybrid between old-style and modern BASIC. FORTRAN came before BASIC, and BASIC clearly took inspiration from FORTRAN, just as later versions of FORTRAN took cues from BASIC. You write FORTRAN programs as source code in a file, but you don't use line numbers everywhere. However, FORTRAN 77 does use line numbers (called labels) for certain instructions, including loops. In FORTRAN 77, the FOR loop is called a DO loop, but it does the same thing and has almost the same usage.

In FORTRAN 77, the DO loop syntax looks like this:

DO label var = start, end

This is one of the instances where you need a label to indicate where the DO loop ends. Where you used a NEXT instruction in BASIC, FORTRAN requires a labeled line instead. Typically, that line is a CONTINUE instruction.

Look at this sample FORTRAN program to see how to use DO to loop over a set of numbers. I saved this source file as sum.f:

      PROGRAM MAIN
      INTEGER SUM
      INTEGER I

      PRINT *, 'ADD THE NUMBERS FROM 1 TO 10 ...'

      SUM = 0
      DO 10 I = 1, 10
         SUM = SUM + I
 10   CONTINUE

      PRINT *, SUM
      END
In FORTRAN, every program needs to start with the PROGRAM instruction, with a name for the program. You might name this program SUM, but then you cannot use the variable SUM later in the program. When I learned FORTRAN, I borrowed from C programming and started all of my FORTRAN programs with PROGRAM MAIN, like the main() function in C programs, because I was unlikely to use a variable called MAIN.

The DO loop in FORTRAN is similar to the FOR loop in BASIC. It iterates over values from 1 to 10. The variable I gets the new value at each pass over the loop. This allows you to add each number from 1 to 10 and print the sum when you're done.

You can find FORTRAN compilers for every platform, including Linux and DOS. FreeDOS 1.3 includes the OpenWatcom FORTRAN compiler on the Bonus CD. On Linux, you may need to install a package to install GNU Fortran support in the GNU Compiler Collection (GCC). On Fedora Linux, you use the following command to add GNU Fortran support:

$ sudo dnf install gcc-gfortran

Then you can compile sum.f and run the program with these commands:

$ gfortran -o sum sum.f
$ ./sum
ADD THE NUMBERS FROM 1 TO 10 ...
55

A few differences

I find FORTRAN and BASIC very similar, despite their differences. If you know a little BASIC, you can learn FORTRAN, and if you know some FORTRAN, you can learn BASIC.

If you want to explore both of these languages, here are a few things to keep in mind:

  • FORTRAN 77 uses all uppercase, but later versions of FORTRAN allow mixed case as long as you use the same capitalization for variables, functions, and subroutines. Most implementations of BASIC are case-insensitive, meaning you can freely mix uppercase and lowercase letters.

  • There are many different versions of BASIC, but they usually do the same thing. If you learn one BASIC implementation, you can easily learn how to use a different one. Watch for warnings or error messages from the BASIC interpreter or compiler, and explore the manual to find the differences.

  • Some BASIC implementations require line numbers, such as Bywater BASIC and GW-BASIC. More modern BASIC versions allow you to write programs without line numbers. FreeBASIC requires the -lang deprecated option to compile programs with line numbers.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Our favorite fonts for the Linux terminal

Wed, 04/05/2023 - 15:00
By Jim Hall

Terminal emulators came up as a topic for me recently, and it got me thinking: What's everyone's favorite terminal font?

So I asked contributors to share what font they like to use. Here are their answers.


I like to use a different font (VT323) in my GNOME Terminal than the font I use (Source Code Pro) in my programming editors or other apps that use a monospace font. I just like the look of the classic VT-style font.

Sometimes, I switch to the original IBM EGA font, because to my eye it looks really nice. But I associate EGA with DOS, and I associate VT323 with classic Unix terminals, so I use VT323 most of the time. Here's my screenshot of GNOME Terminal using VT323 as the monospace font:

[Image: Jim Hall, CC BY-SA 4.0]

I set up the terminal using VT323 at 24 pt, which gives a nice big window. If I'm going to bring up a terminal window, I want to really use it to do real work, not just do one thing and exit. I'm probably going to stay in that terminal window for a while, so it should be big and easy to see. I also prefer 80x25, because I'm an old DOS command line guy and 25 lines looks "right" to my eyes:

[Image: Jim Hall, CC BY-SA 4.0]

Jim Hall

Monospaced fonts

I don't know that I have a specific font that I use. I usually use either DejaVu or Liberation Mono. I like monospaced fonts because they're easier to read. Even then, I don't want the letters to be too close together. The main thing is being able to tell a small "L" from the number 1, Q from O, and so on. It's also nice to have all special characters stand out clearly.

I also like a good contrast between the font and background, so I set the background to black and characters to white.

Greg Pittman


I like to use monospaced fonts, particularly for the terminal and coding because they're easier to read. I've been using the Hack font family for years. It provides a nice monospace font combined with additional glyphs and Powerline characters that I can use to display status on the command line.

[Image: Ricardo Gerardi, CC BY-SA 4.0]

Here's the font preview generated with the Fontpreview tool:

[Image: Ricardo Gerardi, CC BY-SA 4.0]

Ricardo Gerardi

Victor Mono

I've been using Victor Mono for both my terminal and IDE for a few years. It's perhaps a bit of an acquired taste, because the italic characters are in a monospace cursive script. I like this because code comments have a distinct look that's noticeably different from the rest of the code. Here's what it looks like in a terminal:

[Image: Chris Reynolds, CC BY-SA 4.0]

I also use the Solarized Dark color scheme for both my IDE and the terminal. If I could, anything work or code related would all use the same color scheme, and the same fonts, so it all looks uniform. I've gone through various fonts over the years, but I always try to use the same thing in my terminal as in my IDE.

Chris Reynolds


At the moment, Monaco regular at 16 pt (so much better than 12 or 14 pt fonts).

Jenny Panighetti

Keeping it pretty in pink

This reminds me of the time I pranked the other devs on my team by making all of their terminals pink (I was the only woman). It was like a baby pink background with hot pink text.

I like to make my personal terminals pretty, but for work I leave it white on black and just make the font around 16 pt because I appreciate the larger font size.

I just leave my terminal set to SF Mono font, but I make it bolder and bump it up a few points so I can see it better. My favorite colors to use are pink or a nice aqua blue color. I also like the default style the terminal has, called "Novel".

[Image: RachieVee, CC BY-SA 4.0]


Let's ask ChatGPT

For the heck of it, I decided to ask ChatGPT. Here was its response:

As a language model, I do not have personal preferences or the ability to have a favorite font. However, some popular terminal fonts include Monospace, Consolas, and Inconsolata. These fonts are popular because they are easy to read, have clear distinctions between different characters, and are highly legible even at small font sizes. Additionally, they are designed to work well with a variety of different operating systems and terminals, making them versatile choices for developers and users alike.

I generated this text in part with GPT-3, OpenAI's large-scale language-generation model. Upon generating draft language, I reviewed, edited, and revised the language to my own liking, and I take ultimate responsibility for the content of this publication.

Stephanie Brinley

Fantasque Sans Mono

$ grep font ~/.Xdefaults
URxvt*font: xft:FantasqueSansMono-Regular:pixelsize=12:antialias=true

I don't remember when I settled on Fantasque Sans Mono, but I guess it's been my default for the past 8 years now, both in Rxvt and Konsole. I don't know what font I'm using in my GNOME terminal. Probably whatever the default is on GNOME.

Seth Kenlon

JetBrains Mono

Lately, I have Tilix set as my default terminal. My Tilix config has similar settings to what Jim Hall uses. The few differences are:

  • Cursor shape is underline instead of a block
  • Font is Jetbrains Mono Nerd Font Mono Medium 14
[Image: Alan Formy-Duval, CC BY-SA 4.0]

 —Alan Formy-Duval

Here is an entirely subjective list of some fun fonts for your Linux console.

[Image: Jamie Cox, modified. CC BY 2.0]

Fábio Emilio Costa | April 5, 2023:

I'm partial on Victor Mono and Fantasque Sans Mono

How to resolve Git merge conflicts

Tue, 04/04/2023 - 15:00
By agantony

Suppose you and I are working on the same file called index.html. I make some changes to the file, commit them, and push the changes to the remote Git repository. You also make some changes to the same file, commit them, and start pushing the changes to the same Git repository. However, Git detects a conflict because the changes you made conflict with the changes I made.

Here's how you can resolve the conflict:

  1. Fetch and merge the latest changes from the remote repository:

    $ git pull
  2. Identify the conflicting file or files:

    $ git status
  3. Open the conflicting file using a text editor:

    $ vim index.html
  4. Resolve the conflict. The conflicting changes are marked by <<<<<<< HEAD and >>>>>>>. You need to choose which changes to keep and which to discard. Manually edit the file to combine the conflicting changes. 

    Here's an example:

    <<<<<<< HEAD
    Sample text 1
    =======
    Sample text 2
    >>>>>>> feature-branch

    In this example, I changed the website heading to Sample text 1, while you have changed the heading to Sample text 2. Both changes have been added to the file. You can now decide which heading to keep or edit the file to combine the changes. In either case, remove the markers that indicate the beginning and end of the changes, leaving only the code you want:

    Sample text 2
  5. Save all of your changes, and close your editor.

  6. Add the file to the staging area:

    $ git add index.html
  7. Commit the changes:

    $ git commit -m "Updated h1 in index.html"

    This command commits the changes with the message Updated h1 in index.html.

  8. Push the changes to the remote repository:

    $ git push
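Mechanically, resolving a conflict (step 4 above) means picking the lines you want and deleting the marker lines. As a rough sketch, and not anything Git itself provides, here's a hypothetical Python helper that keeps the incoming ("theirs") side of every conflict block:

```python
def resolve_keep_theirs(text):
    """Drop conflict markers, keeping only the incoming side of each block."""
    kept, section = [], "normal"
    for line in text.splitlines():
        if line.startswith("<<<<<<<"):
            section = "ours"       # start of our side: skip these lines
        elif line.startswith("======="):
            section = "theirs"     # start of their side: keep these lines
        elif line.startswith(">>>>>>>"):
            section = "normal"     # end of the conflict block
        elif section != "ours":
            kept.append(line)
    return "\n".join(kept)

conflict = """<<<<<<< HEAD
Sample text 1
=======
Sample text 2
>>>>>>> feature-branch"""
print(resolve_keep_theirs(conflict))  # prints: Sample text 2
```

In practice you usually edit by hand, because the right answer often combines both sides.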

Resolution

Merge conflicts are a good reason to keep your commits small and focused. The more you change in a file, the greater the potential for conflict. Make more commits with fewer changes in each, and avoid monolithic changes that combine multiple feature enhancements or bug fixes into one. Your project manager will thank you, too, because commits with clear intent are easier to track. A Git merge conflict may seem intimidating at first, but now that you know how to resolve one, you'll see that it's easily done.

Don't panic when you encounter a merge conflict. With a little expert negotiation, you can resolve any conflict.

David C. | April 4, 2023:

I always recommend that people add the following to their ~/.gitconfig file:

[merge]
    conflictstyle = diff3

With this in place, when there are conflicts, you won't just see the two sources (your text and the conflicting text from the file being merged in), but you'll also see a block of text representing the common-ancestor text for the region in conflict.

This often provides very useful information to help resolve the conflict.
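With diff3 style, the conflict from the article above would render roughly like this (the common-ancestor line here is invented for illustration):

```
<<<<<<< HEAD
Sample text 1
||||||| merged common ancestors
Original heading
=======
Sample text 2
>>>>>>> feature-branch
```

The same setting can also be applied from the command line with git config --global merge.conflictstyle diff3.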

Handle any type of document with this open source tool

Mon, 04/03/2023 - 15:00
By hej

Collabora Online supports all sorts of files and formats. How does this open source office suite do in terms of interoperability, though? This article takes a closer look at Collabora Online's ability to exchange complex text documents and spreadsheets with different office suites such as Microsoft 365 and Google Workspace.

Collabora Online is an open source office suite for the cloud or on-premises that protects your privacy and allows you to keep full control of your data. The software is developed by Collabora Productivity Ltd, in Cambridge, with its team working in locations worldwide. Collabora Online is based on LibreOffice Technology and is primarily licensed under the Mozilla Public License 2.0.

Collabora Online works in any modern web browser, requiring no additional plug-ins or add-ons. It features a complete cloud-based office suite, including a word processor (Writer), spreadsheet program (Calc), presentation software (Impress), and an application for designing vector graphics (Draw).

This article looks at some of Collabora Online's new interoperability features, including macros, dynamic font loading, and Sparklines support for the spreadsheet application. These features extend the suite's already strong handling of Microsoft file formats.

What is interoperability, and why does it matter?

In general, interoperability refers to the ability of different devices or applications to work together and exchange data seamlessly. In the context of office suites, interoperability is mostly about file formats. Users should be able to open, edit, and save .doc and .docx, .xls and .xlsx, .odt, and .ods documents, regardless of whether they were created with Microsoft Word, Apple iWork, or LibreOffice.

The same is true for online office suites. By ensuring that files can be exchanged between Microsoft 365, Google Workspace, and Collabora Online, interoperability helps to increase productivity and facilitate collaboration. All online office suites can save files in various formats. They can also import and export documents, spreadsheets, and presentations originally created in other office suites.

Managing macros and ensuring smooth document handling

Documents with macros often cause problems. Macros are usually developed in a programming language specific to a particular application. While recording and editing macros in Google Sheets is possible, macros implemented in Visual Basic for Applications (VBA) in Microsoft Office cannot be converted and must be re-created with Google Apps Script. Opening a Word document with VBA macros produces errors and informs users that the macros will be ignored or disabled.

Collabora Online supports macros and runs them on the server side inside a container. The feature is disabled by default, and admins must explicitly activate it in the coolwsd.xml configuration file. After that, users can choose to allow macros when loading a document. There are several limitations, though. For example, it's not possible to access database sources, access other (external) documents, call external programs, use control shapes, and so on. Over the years, the amount of code and objects supported by Collabora Online has increased significantly, thanks to an active community and contributions from customers and partners.

Collabora Online: dynamic font loading

Another critical aspect of interoperability in office suites is fonts. Working with documents containing fonts that aren't available on a particular platform can result in errors, unexpected formatting changes, and even the complete loss of content.

Microsoft Office documents often use default fonts that aren't available in Google Workspace or Collabora Online. To address this issue, the office suites often suggest substituting missing fonts. That's often useful, but it sometimes leads to a bad result.

As of version 22.05.7 (released in November 2022), Collabora Online can list missing fonts and suggest substitutions. It can also download the necessary fonts and add them to the server. Everything happens dynamically, without downtime. New fonts become available in editing sessions within minutes for optimal interoperability.

[Image: Heike Jurzik, CC BY-SA 4.0]

To achieve this, an API tracks missing fonts while the document is being rendered, and a JSON file stores the list of fonts that need to be added. The coolwsd.xml file (the server-side settings) points to that JSON file. The server checks the file for modifications once every minute and downloads the missing fonts.
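As a toy sketch of that check (hypothetical manifest format and font names, not Collabora's actual code), the periodic job boils down to comparing the JSON list against the fonts already installed:

```python
import json

def fonts_to_fetch(manifest_json, installed):
    """Return fonts named in the JSON manifest that aren't installed yet."""
    wanted = json.loads(manifest_json)            # e.g. ["Carlito", "Caladea"]
    return [name for name in wanted if name not in installed]

# A check like this would run once every minute on the server:
print(fonts_to_fetch('["Carlito", "Caladea"]', {"Carlito"}))  # ['Caladea']
```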

Exploring Sparklines: displaying data trends in spreadsheets

Sparklines are tiny charts that fit inside a single cell of your worksheet, visualizing trends in data. Those miniature charts come in different styles, including lines, bars, and columns. Sparklines also support different colors and horizontal/vertical axes. Unlike larger charts that show as much data as possible and are separated from the text flow, Sparklines are reduced to the core values and typically placed next to or behind the data itself in the same cell. Sparklines are usually defined for one cell, but it's possible to group multiple Sparklines that share the same data range and rendering properties.
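To get a feel for the idea, here's a minimal text-mode sparkline in Python (an illustration only; spreadsheet Sparklines are real drawn charts, not characters):

```python
BARS = "▁▂▃▄▅▆▇█"  # eight block characters, shortest to tallest

def sparkline(values):
    """Scale each value to one of eight block characters."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid dividing by zero for flat data
    return "".join(BARS[round((v - lo) / span * (len(BARS) - 1))] for v in values)

print(sparkline([1, 2, 4, 8, 4, 2, 1]))  # rises to a peak, then falls
```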

[Image: Heike Jurzik, CC BY-SA 4.0]


Sparklines are a compact reference and offer a quick way to illustrate trends, patterns, statistical anomalies, increases, and decreases, all while avoiding the complexity of a full chart. Here are some different Sparkline types:

  • Line Chart: Connects points with line segments from left to right and is particularly useful for displaying data that changes over time.
  • Bar Chart: A graphical representation of data using horizontally aligned bars, often used to compare numerical data.
  • Column Chart: Ideal for comparing a series of values against each other; the columns are vertical, and their lengths indicate the relative size/value of the data. Column charts are often used to represent data of distinct categories or groups.

To create a Sparkline, you first define an input data range for the function (two or more cells in a column or a row). You also decide on the cell where you want the Sparkline to appear. In most spreadsheet applications, you right-click the mini chart to adjust its properties, select the chart type, and choose the colors. Collabora Online offers a separate dialog box for this, making it easy and convenient to change the style of the miniature charts.

Exchanging documents with Sparklines between the three online office suites is possible without losing the graphs and their formatting. If you want to share spreadsheets between Microsoft 365, Google Workspace, and Collabora Online, use the Microsoft format .xlsx for import and export, since Google Sheets doesn't handle .ods files very well.

Document exchange is easy

Collabora Online provides several new interoperability features, making exchanging documents with other office suites easy. Macro support, dynamic font loading, and Sparklines ensure seamless document handling, avoiding unexpected formatting changes. Use Collabora Online to unify and simplify your office work.

Explore Collabora Online's interoperability and make documents and spreadsheets compatible across all office suites.


Open source text editing for your website with CKEditor

Mon, 04/03/2023 - 15:00
By AnnaTomanek

Most applications allow users to create and add some textual content, such as a comment, a chat message, an article, a product description, or a legal document. Today, plain text isn't enough. Users format text, insert images and memes, videos, tables, and create lists and links. A developer can probably craft this rich content in HTML by hand, but there's a high chance that your users would appreciate some help.

WYSIWYG ("What You See Is What You Get") editing allows you to see and edit rich text content in a form that resembles how it's displayed to end users. A WYSIWYG editor provides a UI that makes it easy to format the text and incorporate various elements such as images, links, or tables into your digital content.

If you want to supply this sort of functionality in your app, you can choose to:

  1. Build it yourself
  2. Find a WYSIWYG editor component you can reuse

Each option has advantages and disadvantages.

Build it yourself...

If your use case is simple and you don't need many features, building your own component may feel like the way to go. WYSIWYG editing sounds easy.

It's not.

Even when you're starting simple, requirements tend to grow over time. Content creation is such a widely accepted paradigm that users now expect it to work in your application similarly to anywhere else. They want to see a familiar feature set and UX patterns. Creating and maintaining this takes time and effort that could otherwise be spent on developing your app.

There's a valid reason why content editing functionality is often outsourced to an external library, even in big projects with considerable engineering teams. It requires deep domain knowledge to do it well.

Assume you decide to integrate a ready-to-use editing component or framework. There are plenty of rich text editing solutions on the market. The differences between them usually revolve around the feature set, UX, integrations with various technologies, extensibility, licensing, popularity, project health, and support options. So, as with any external library, it's best to consider your use case when choosing an editor, so you can avoid a costly mistake and a painful migration later.

Sure, there are some challenges you face when integrating a WYSIWYG editing component, but they're easy to resolve when you're using the popular open source CKEditor 5.

Or choose CKEditor

CKEditor has been around for 20 years(!) as an open source project. It's backed by CKSource, a company of about 100 people who work, day in and day out, on improving the editor. The latest version, CKEditor 5, is a modern, flexible, extensible, and customizable component written in pure TypeScript and JavaScript. It's built on top of a robust collaboration-ready editing framework with model-view-controller (MVC) architecture, a custom data model, and virtual DOM.

Running a simple editor in 3 steps with CKEditor 5

Here are the basics of integrating CKEditor with a simple website.

To get up and running, load the editor script from the CDN, and call its create() method to create the editor:

  1. In an HTML page, add an element to serve as a placeholder for a CKEditor 5 instance:

  2. Load the editor build script (here, a so-called classic editor build is loaded from the CDN):

  3. Call the ClassicEditor.create() method to display the editor.

    ClassicEditor
        .create( document.querySelector( '#editor' ) )
        .catch( error => {
            console.error( error );
        } );

And that's it. A full web page with an embedded CKEditor 5:

<!DOCTYPE html>
<html>
    <head>
        <title>CKEditor 5 – Classic editor</title>
        <!-- the classic editor build script, loaded from the CDN as in step 2 -->
    </head>
    <body>
        <h1>CKEditor 5 - cool, eh?</h1>
        <div id="editor">
            <p>This is some sample content for my new WYSIWYG editor.</p>
        </div>
        <script>
            ClassicEditor
                .create( document.querySelector( '#editor' ) )
                .catch( error => {
                    console.error( error );
                } );
        </script>
    </body>
</html>

Open it in your browser to test the WYSIWYG editor:

Image by:

(Anna Tomanek, CC BY-SA 4.0)

Advanced WYSIWYG editing

Yes, there are only three steps, and it's running. But this simple example also uncovers some typical challenges faced by an integrator of an external framework.

  • It's just a simple HTML website that misses the entire context of your app.
  • The UI doesn't match your design system.
  • Your app is written in React/Angular/Vue, or something else.
  • You don't want to serve CDN scripts, and prefer to self-host.
  • The feature set of the editor isn't what you need.
  • Also, some of your users prefer Markdown to HTML or WYSIWYG "magic".

So how do you resolve some of these issues? Most editor components allow for some degree of customization that's still cheaper than writing a custom script to properly handle user content creation.

CKEditor 5 uses a plugin-based architecture, which provides excellent customizability and extensibility. By putting in some effort, you can benefit from a stable, popular, and reliable solution set up to look and work exactly as you want. Drupal 10, for example, built CKEditor 5 into its core and enriched it with some typical CMS functionality like a media library through custom plugins.

What are some ways you can take advantage of all these customization options? Here are five that showcase its versatility:

1. Flexible UI options

The first thing to notice when looking at a component to integrate with your application is its user interface. The CKEditor 5 framework delivers a few different UI types. For example:

  • Classic Editor, used in the first example, offers a toolbar with an editing area placed in a specific position on the page. The toolbar stays visible when you scroll down the page, and the editor automatically grows with the content.
  • The Document editor provides a similar editing experience to applications such as Microsoft Word or Google Docs, with a UI that resembles a paper document.
  • If you're looking for distraction-free editing, where the content is placed in its target location on the web page without the editor UI getting in your way, you have a few options. The Inline Editor, Balloon Editor, and Balloon Block Editor all come with different types of floating toolbars that appear as needed.

Besides that, you can play with the toolbar to move the buttons around, group them into drop-downs, use a multi-line toolbar, or hide some less-frequently needed buttons in the "three dots" or "more options" menu. Or, if you wish, move the entire toolbar to the bottom.

It may also happen that you prefer to go the headless route. Plenty of projects use the powerful editing engine of CKEditor 5 but coupled with their own UI created in, for example, React. The most notable example is Microsoft Teams, believe it or not. Yes, it's using CKEditor 5.

Image by:

(Anna Tomanek, CC BY-SA 4.0)

2. Choose a full-featured editor or a lightweight one

In digital content editing, there's no "one size fits all" solution. Depending on the type of content you need, the feature set differs. CKEditor 5 has a plugin-based architecture and features are implemented in a highly decoupled and granular way.

It's easy to get lost in all the possible features, sub-features, and configuration options without some guidance. Here are some useful resources to help you build the editor that's a perfect match for your use case:

  • Try the feature-rich editor demo to test some of the most popular features.
  • Look at some other editor setups on the demo page. You can find the complete source code of each demo in the ckeditor5-demos repository.
  • The entire Features section of the documentation explains all CKEditor 5 features, possible configuration options, toolbar buttons, and API.
  • CKEditor 5 online builder is a quick and easy solution to build your custom editor in 5 steps. It allows you to choose the UI type, plugins, toolbar setup, and UI language and then download a ready-to-use editor bundle.
Image by:

(Anna Tomanek, CC BY-SA 4.0)

3. Integrations with JavaScript frameworks

The online builder and demos are a fun playground if you want to test a few solutions in a no-code fashion, but you may need tighter integration. You can also install CKEditor 5 with npm, or bundle it with your app using webpack or Vite. CKEditor 5 is written in pure TypeScript and JavaScript, so it's compatible with every JavaScript framework available.

Four official integrations are available for the most popular frameworks:

  • Angular
  • React
  • Vue.js v2
  • Vue.js v3

For example, to set up the Classic Editor (used in my first example) in React, you can use this one-liner:

npx create-react-app ckeditor5-classic-demo \
  --template @ckeditor/ckeditor5-classic

4. Markdown and HTML

For some developers, Markdown might feel like second nature. It has its limitations, though. For example, support for tables is quite basic. Still, for many users, crafting content in Markdown is much more efficient than using the editor UI to format it.

And here's the fun part. Thanks to CKEditor's autoformatting, you can use Markdown syntax while writing, and the editor formats the content as you type. This is a nice compromise that covers the needs of both power users and those who are unfamiliar with Markdown and prefer to create rich text with the WYSIWYG UI.

5. Different input and output

Autoformatting is just one aspect of Markdown support in CKEditor 5. Another is that you can configure the editor to treat Markdown as its input and output format instead of HTML.

Image by:

(Anna Tomanek, CC BY-SA 4.0)

Here's another challenge. If you allow users to input content in your app, they can create it there, but they can also paste it from different sources (other websites, Microsoft Word, Google Docs). They naturally expect the structure and formatting of pasted text to be preserved. This may result in some nasty styles and unwanted elements making their way into your content that you then have to clean up. Instead of trying to reconcile these clashing expectations yourself, it's better to rely on a good editor that solves the problem for you.

In the case of CKEditor 5, the Paste from Office feature provides great support for pasting content from Word or Google Docs, preserving the structure, and translating the formatting into semantic content.

The default CKEditor 5 settings also prevent users from adding or pasting elements and styles unsupported by the feature set chosen for your editor. If you, as an integrator, configure the editor to support just links, lists, and basic styles such as bold or italic, then the user can't add tables, images, or YouTube videos.

Then again, if you would like your editor to accept content that's not covered by your feature set or even not supported by any existing CKEditor 5 features, you can achieve that thanks to the so-called General HTML support functionality. This is useful for loading pre-existing content created in other tools, and can make your migration to CKEditor 5 easier.

Building custom plugins

No matter how great a ready-made solution is, you may still need to customize it further. After all, thanks to reusing an advanced WYSIWYG editing component, you've saved yourself plenty of time and coding effort, so you may want to get your hands dirty and polish your solution a bit, for example, by creating custom plugins.

Here are some useful resources to get you started:

Image by:

(Anna Tomanek, CC BY-SA 4.0)

How to get CKEditor

CKEditor 5 is licensed under the terms of GPL 2 or later, but if you are running an open source project under an OSI-approved license incompatible with GPL, the CKEditor team is happy to support you with a no-cost license.

CKEditor 5 is a powerful modern rich text editor framework that allows developers to build upon an open source, tested, and reliable editor. Start-ups, leading brands, and software providers use it to improve both their content creation and content production workflows. If your users value those benefits, check it out!

Use the power of JavaScript and CKEditor to bring rich text editing to your website.

Text editors Web development Drupal JavaScript

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

1 Comment: madtom1999 | April 3, 2023

I did write a PHP web backend for (F)CK editor as it was which stored the files in a DB along with security functions to allow who could read/edit etc and allowed you (with the right editor) to modify code etc only to dig it out recently to find PHP has moved on and I've not got the code working again yet. But with something like that you never need touch a desktop again.

Using mocks in Python

Moshe Zadka, Sat, 04/01/2023

April 1st is all about fake stories and pretending. This makes it the perfect day to talk about mocking.

Sometimes, using real objects is hard, ill-advised, or complicated. For example, a requests.Session connects to real websites. Using it in your unittests invites a…lot…of problems.
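To make the problem concrete before diving in, here's a minimal sketch of the substitution a mock makes possible. The fetch_title function and the URL are made up for illustration; in production, session would be a real requests.Session:

```python
from unittest import mock

def fetch_title(session):
    # In production, `session` would be a real requests.Session
    response = session.get("https://example.com/api")
    return response.json()["title"]

# A mock stands in for the session; no network traffic happens
fake_session = mock.MagicMock()
fake_session.get.return_value.json.return_value = {"title": "hello"}

print(fetch_title(fake_session))  # prints: hello
```

The fake session never touches the network, yet the code under test runs exactly as it would in production.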

Basic mocking in Python

"Mocks" are a unittest concept. They produce objects that are substitutes for the real ones.

from unittest import mock

There's a whole cottage industry that will explain that "mock", "fake", and "stub" are all subtly different. In this article, I use the terms interchangeably.

regular = mock.MagicMock()

def do_something(o):
    return o.something(5)

do_something(regular)

This code produces a return value that is itself a MagicMock:

Mocks have all the methods. The methods usually return another Mock. You can change that by assigning the value you want to the method's return_value.

For example, suppose you want to call the following function:

def do_something(o):
    return o.something() + 1

It requires something which has the .something() method. Luckily, mock objects have it:

obj = mock.MagicMock(name="an object")
obj.something.return_value = 2
print(do_something(obj))

The answer:

3
It is also possible to override the "magic methods":

a = mock.MagicMock()
a.__str__.return_value = "an a"
print(str(a))

The answer:

an a

The spec

Make sure that a mock does not have "extra" methods or attributes by using a spec. For example, here's some code that should fail:

import pathlib

def bad_pathlib_usage(path):
    ## TYPO: missing underscore
    path.writetext("hello")

dummy_path = mock.MagicMock(spec=pathlib.Path)
try:
    bad_pathlib_usage(dummy_path)
except Exception as exc:
    print("Failed!", repr(exc))

The result:

Failed! AttributeError("Mock object has no attribute 'writetext'")

Mock side effect

Sometimes, having a MagicMock that returns the same thing every time isn't quite everything you need it to be. For example, sys.stdin.readline() usually returns different values, not the same value throughout the test.

The property side_effect allows controlling what a magic mock returns on a more detailed level than using return_value.


One of the things that can be assigned to side_effect is an iterable, such as a sequence or a generator.

This is a powerful feature. It allows controlling each call's return value, with little code.

different_things = mock.MagicMock()
different_things.side_effect = [1, 2, 3]
print(different_things())
print(different_things())
print(different_things())

The output:

1
2
3

A more realistic example is when simulating file input. In this case, I want to be able to control what readline returns each time to pretend it's file input:

def parse_three_lines(fpin):
    line = fpin.readline()
    name, value = line.split()
    modifier = fpin.readline().strip()
    extra = fpin.readline().strip()
    return {name: f"{value}/{modifier}+{extra}"}

from io import TextIOBase

filelike = mock.MagicMock(spec=TextIOBase)
filelike.readline.side_effect = [
    "thing important\n",
    "a-little\n",
    "to-some-people\n",
]
value = parse_three_lines(filelike)
print(value)

The result:

{'thing': 'important/a-little+to-some-people'}

Exception

Another thing that's possible is assigning an exception to the side_effect attribute. This causes the call to raise the exception you assigned. Using this feature allows simulating edge conditions in the environment, usually precisely the ones that:

  • You care about
  • Are hard to simulate realistically

One popular case is network issues. As per Murphy's law, they always happen at 4 AM, causing a pager to go off, and never at 10 AM when you're sitting at your desk. The following is based on real code I wrote to test a network service.

In this simplified example, the code returns the length of the response line, or a negative number if a timeout has been reached. The number is different based on when in the protocol negotiation this has been reached. This allows the code to distinguish "connection timeout" from "response timeout", for example.

Testing this code against a real server is hard. Servers try hard to avoid outages! You could fork the server's C code and add some chaos or you can just use side_effect and mock:

import socket

def careful_reader(sock):
    sock.settimeout(5)
    try:
        sock.connect(("", 8451))
    except socket.timeout:
        return -1
    try:
        sock.sendall(b"DO THING\n")
    except socket.timeout:
        return -2
    fpin = sock.makefile()
    try:
        line = fpin.readline()
    except socket.timeout:
        return -3
    return len(line.strip())

from unittest import mock

sock = mock.MagicMock(spec=socket.socket)
sock.connect.side_effect = socket.timeout("too long")
print(careful_reader(sock))

The result is a failure, which in this case means a successful test:

-1
With careful side effects, you can get to each of the return values. For example:

sock = mock.MagicMock(spec=socket.socket)
sock.sendall.side_effect = socket.timeout("too long")
print(careful_reader(sock))

The result:

-2
The previous example is simplified. Real network service test code must verify that the results it got were correct to validate that the server works correctly. This means doing a synthetic request and looking for a correct result. The mock object has to emulate that. It has to perform some computation on the inputs.

Trying to test such code without performing any computation is difficult. The tests tend to be too insensitive or too "flakey".

  • An insensitive test is one that does not fail in the presence of bugs.
  • A flakey test is one that sometimes fails, even when the code is correct.

Here, my code is incorrect. The insensitive test does not catch it, while the flakey test would fail even if it was fixed!

import socket
import random

def yolo_reader(sock):
    sock.settimeout(5)
    sock.connect(("", 8451))
    fpin = sock.makefile()
    order = [0, 1]
    random.shuffle(order)
    while order:
        if order.pop() == 0:
            sock.sendall(b"GET KEY\n")
            key = fpin.readline().strip()
        else:
            sock.sendall(b"GET VALUE\n")
            value = fpin.readline().strip()
    return {value: key}  ## Woops, bug: should be {key: value}

The following would be too "insensitive", not detecting the bug:

sock = mock.MagicMock(spec=socket.socket)
sock.makefile.return_value.readline.return_value = "interesting\n"
assert yolo_reader(sock) == {"interesting": "interesting"}

The following would be too "flakey," detecting the bug even if it's not there, sometimes:

for i in range(10):
    sock = mock.MagicMock(spec=socket.socket)
    sock.makefile.return_value.readline.side_effect = ["key\n", "value\n"]
    if yolo_reader(sock) != {"key": "value"}:
        print(i, end=" ")

For example:

3 6 7 9

The final option for getting results from a mock object is to assign a callable object to side_effect. When the mock is called, it simply calls that object and returns its result. Why not just assign the callable object directly to the attribute? Have patience, I'll get to that in the next part!

In this example, my callable object (just a function) assigns a return_value to the attribute of another object. This isn't that uncommon. I'm simulating the environment, and in a real environment, poking one thing often affects other things.

sock = mock.MagicMock(spec=socket.socket)

def sendall(data):
    cmd, name = data.decode("ascii").split()
    if name == "KEY":
        sock.makefile.return_value.readline.return_value = "key\n"
    elif name == "VALUE":
        sock.makefile.return_value.readline.return_value = "value\n"
    else:
        raise ValueError("got bad command", name)

sock.sendall.side_effect = sendall
print(yolo_reader(sock), dict(key="value"))

The result:

{'value': 'key'} {'key': 'value'}

Mock call arguments: x-ray for code

When writing a unit test, you are "away" from the code but trying to peer into its guts to see how it behaves. The Mock object is your sneaky spy. After it gets into the production code, it records everything faithfully. This is how you can find what your code does and whether it's the right thing.

Call counts

The simplest thing is to just make sure that the code is called the expected number of times. The .call_count attribute is exactly what counts that.

def get_values(names, client):
    ret_value = []
    cache = {}
    for name in names:
        # name = name.lower()
        if name not in cache:
            value = client.get(f"{name}").json()['args']['name']
            cache[name] = value
        ret_value.append(cache[name])
    return ret_value

client = mock.MagicMock()
client.get.return_value.json.return_value = dict(args=dict(name="something"))
result = get_values(['one', 'One'], client)
print(result)
print("call count", client.get.call_count)

The results:

['something', 'something']
call count 2

One benefit of checking .call_count >= 1 as opposed to checking .called is that it is more resistant to silly typos.

def call_function(func):
    print("I'm going to call the function, really!")
    if False:
        func()
    print("I just called the function")

func = mock.MagicMock()
call_function(func)
print(func.callled)  # TYPO -- Extra "l"

The output:

I'm going to call the function, really!
I just called the function

Using spec diligently can prevent that. However, spec is not recursive. Even if the original mock object has a spec, rare is the test that makes sure that every single attribute it has also has a spec. However, using .call_count instead of .called is a simple hack that completely eliminates the chance to make this error.
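A small illustration of that limitation (a sketch; the Engine and Car classes are made up):

```python
from unittest import mock

class Engine:
    def start(self):
        ...

class Car:
    engine = Engine()

car = mock.MagicMock(spec=Car)

# The top-level spec catches typos on Car's own attributes...
try:
    car.enginee  # misspelled
except AttributeError as exc:
    print("caught:", exc)

# ...but car.engine is an ordinary MagicMock with no spec of its own,
# so a typo one level down goes unnoticed:
car.engine.strat()  # no error; silently returns yet another mock
```

If you need the whole object graph spec'd, mock.create_autospec builds specs recursively, at the cost of some speed.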

Call arguments

In the next example, I ensure the code calls the method with the correct arguments. When automating data center manipulations, it's important to get things right. As they say, "To err is human, but to destroy an entire data center requires a robot with a bug."

We want to make sure our Paramiko-based automation correctly gets the sizes of files, even when the file names have spaces in them.

def get_remote_file_size(client, fname):
    client.connect('')
    stdin, stdout, stderr = client.exec_command(f"ls -l {fname}")
    stdin.close()
    results = stdout.read()
    errors = stderr.read()
    stdout.close()
    stderr.close()
    if errors != '':
        raise ValueError("problem with command", errors)
    return int(results.split()[4])

fname = "a file"
client = mock.MagicMock()
client.exec_command.return_value = [mock.MagicMock(name=str(i)) for i in range(3)]
client.exec_command.return_value[1].read.return_value = f"""\
-rw-rw-r-- 1 user user 123 Jul 18 20:25 {fname}
"""
client.exec_command.return_value[2].read.return_value = ""
result = get_remote_file_size(client, fname)
assert result == 123
[args], kwargs = client.exec_command.call_args
import shlex
print(shlex.split(args))

The results:

['ls', '-l', 'a', 'file']

Woops! That's not the right command. Good thing you checked the arguments.
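A likely fix (a sketch; the original automation may handle it differently) is to quote the file name with shlex.quote before building the command, so a name containing spaces survives as a single argument:

```python
import shlex

# A file name with a space in it, as in the test above
fname = "a file"

# shlex.quote wraps the name so the shell treats it as one token
command = f"ls -l {shlex.quote(fname)}"
print(shlex.split(command))  # ['ls', '-l', 'a file']
```

Re-running the same mock-based test against the fixed function would now show the file name arriving as a single argument.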

Deep dive into mocks

Mocks have a lot of power. Like any powerful tool, they can make a big mess quickly if used improperly. But by properly using .return_value, .side_effect, and the various .call* properties, it's possible to write the best sort of unit tests.

A good unit test is one that:

  • Fails in the presence of incorrect code
  • Passes in the presence of correct code

"Quality" is not binary. It exists on a spectrum. The badness of a unit test is determined by:

  • How many errors it lets pass. That's a "missing alarm" or "false negative". If you're a statistician, it's a "type 2 error".
  • How many correct code changes it fails. That's a "false alarm" or "false positive". If you're a statistician, it's a "type 1 error".

When using a mock, take the time to think about both metrics to evaluate whether this mock, and this unit test, will help or hinder you.

Test your code safely with mocks.

Image by:

kris krüg

Python

Measure pi with a Raspberry Pi

Jim Hall, Fri, 03/31/2023

March 14th is celebrated around the world as Pi Day. Many people celebrate Pi Day by measuring pi with objects found around the house. I wanted to do something similar for this year's Pi Day using my Raspberry Pi 3B. Read on to learn how I measured pi using my Raspberry Pi.

What you'll need:

  • Raspberry Pi single-board computer
  • Graph paper
  • Ruler with mm and cm measurements
  • Pen
1. Draw a circle

Pi is the ratio of the circumference of a circle to its diameter. To calculate pi, we need to measure both the circumference and diameter of a perfectly drawn circle. Fortunately, the Raspberry Pi motherboard has mounting holes that are big enough to fit a pencil or pen. I stuck a pin through one Pi board mounting hole, careful to land the pin on the intersection of two lines on a piece of graph paper.

Holding the pin still, I inserted a pen in the opposite mounting hole and drew a circle by moving the pen around the pin. The solder points on the underside of the Raspberry Pi motherboard can catch on the paper, but you can draw a good circle if you are careful.

Image by:

(Jim Hall, CC BY-SA 4.0)

2. Divide the circle into segments

Divide the circle in half by drawing a vertical line through the center of the circle, then into quarters by drawing a horizontal line through it. When I drew my circle, I placed the pin exactly on the intersection of two lines on the graph paper, which makes the vertical and horizontal center lines easy to find. You can create an "eighth" slice by drawing a line across the diagonal.

Image by:

(Jim Hall, CC BY-SA 4.0)

Further divisions are an exercise with the ruler. I used the ruler to find the midpoint of any two intersections of the "quarter-wedge" and the "one-eighth wedge" to make a 1/16 wedge. You can use the same method to make smaller and smaller slices that are 1/32 and 1/64 of a circle. By being very careful, I was also able to measure a very narrow wedge at 1/128 of a circle:

Image by:

(Jim Hall, CC BY-SA 4.0)

3. Estimate the circumference

My smallest wedge is 1/128 of a circle. With such a small slice, the outer arc of the wedge is so small that we can approximate it with a straight line. This will not actually be 1/128 of the circle's circumference, but it will be close enough that we can use it as a good estimate.

Image by:

(Jim Hall, CC BY-SA 4.0)


Using the mm measurement on my ruler, I measured the outer arc of my 1/128 wedge at 3.8mm. With that, I can estimate the circumference of my circle as 3.8mm times 128, or 486.4mm. To convert to centimeters, divide by ten: 48.64cm.

4. Calculate pi

The value of pi is the ratio of the circumference of a circle to its diameter. We estimated the circumference in step 3. Measuring the diameter is a simple exercise of using the ruler to measure across the circle. My circle is 15.4cm.

Now that we know the circumference and diameter, we can calculate pi as 48.64 divided by 15.4, which is 3.158. That's not too far off from the actual value of pi, at 3.141.
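To double-check the arithmetic, the whole estimate fits in a few lines of Python (using my measurements; substitute your own), which you can run on the Raspberry Pi itself:

```python
# Estimate pi from the wedge measurements
arc_mm = 3.8                  # outer arc of the 1/128 wedge, in mm
circumference_cm = arc_mm * 128 / 10   # 128 wedges per circle; divide by 10 for cm
diameter_cm = 15.4            # measured across the circle

pi_estimate = circumference_cm / diameter_cm
print(round(circumference_cm, 2))   # 48.64
print(round(pi_estimate, 3))        # 3.158
```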

Measuring pi is a fun math exercise! Math fans of all ages can use simple tools such as graph paper, pen, and ruler to measure pi on their own. Use your Raspberry Pi in a fun, new way to draw a circle and measure pi independently. This is an estimate since we are approximating a 1/128 arc on a circle as a straight line, but this gets us close enough without too much effort.

Produce a near-perfect circle with your Raspberry Pi.

Image by:

Internet Archive Book Images. Modified by CC BY-SA 4.0

Raspberry Pi Education

3 reasons my Linux team uses Penpot

ekidney, Thu, 03/30/2023

Working with Fedora exposes you to a lot of different open source software. A major Fedora website revamp started over a year ago, with the goal of improving design aesthetics, creating a style guide, planning the website strategy, and choosing the tech stack for delivering the Fedora Linux offerings website. From a design perspective, the team needed a tool to create mock-ups, a place to hold the asset libraries, and something suitable to hand off to developers once complete.

Choosing Penpot

Figma is a popular interface designing tool recommended by many, but it wasn't deemed suitable because the company had recently imposed restrictions on their free plan. This concern arose before Adobe acquired Figma, so the decision not to use it was even more significant in retrospect!

The team looked into Penpot and found that it matched everyone's requirements. Penpot is the first open source design and prototyping platform for cross-domain teams. A team within Kaleidos creates Penpot. Kaleidos is a technology company started in 2011 that fully focuses on open source projects.

There are three ways the Fedora Websites and Apps team uses Penpot:

  1. Wireframes and mock-ups
  2. UX testing and feedback
  3. Collaboration

I expand on these uses below. While the example discusses the Fedora Project, Penpot offers benefits to any open source community.

1. Wireframes and mock-ups

Drafting webpage designs is the primary way our team uses Penpot. Drafting enables quick collaboration and lessens communication issues between contributors. Developers and designers can collaborate freely and in the same space.

Community feedback is important. It can be a bit difficult to share mock-ups properly. Penpot is web-based and easily accessible on any platform. When entering View Mode on a prototype, the tool generates a shareable link. You can also modify the permissions or destroy the link if you no longer want it shared.

Image by:

(Emma Kidney, CC BY-SA 4.0)

2. UX testing and feedback

This revamp works closely with the Fedora community. By running usability testing sessions on prototypes and sharing design progress, we use Penpot to keep the community involved every step of the way.

3. Collaboration

During the revamp, our development and design teams used Penpot to generate ideas, organize meetings, and test new concepts visually.

Our teams used Penpot as a whiteboard in early planning sessions and enabled the developers to contribute ideas asynchronously while engaging in the discussion. This method reduced stress, made sure everyone's ideas could be heard, helped us see patterns, and mediated disagreements for a good compromise. Penpot helped create a sense of understanding between everyone.

The team used Penpot as a source of assets. Users can store elements and other content in an asset library so that they can be reused repeatedly. Penpot stores components, graphics, typographies, color palettes, and more.

Image by:

(Emma Kidney, CC BY-SA 4.0)

Sharing these libraries enables the whole team to access them. This can be helpful when working with a team that regularly accesses the same source files. If a new member joins, all assets they need to start building mock-ups for the project would be readily available. Users can export these assets directly from the Penpot file.

Image by:

(Emma Kidney, CC BY-SA 4.0)

Developers can view the prototype in full on any browser. This capability makes building the website easier as you can code side by side with the prototype. If a designer is working on the file at the same time, changes they make can be seen by refreshing in View Mode or in real-time if in the actual file.

Image by:

(Emma Kidney, CC BY-SA 4.0)

Open source values

Penpot aligns with the Fedora Project's "Four Foundations" of Freedom, Friends, Features, and First. As you review these values, consider how the tool might align with your own open source initiative.


Freedom

We choose open source and free alternatives to proprietary code and content and limit the effects of proprietary code on and within the Project. Penpot is the first open source design and prototyping platform. Penpot is web-based, independent of operating systems, and works with open web standards. This ensures compatibility with web browsers and external applications like Inkscape.


Friends

My community consists of people from all walks of life working together to advance free software. Penpot's mission is similar. Its goal is to provide an open source and open standards tool to bring collaboration between designers and developers to the next level. Using Penpot has allowed for a smooth handoff to developers and has allowed us to work productively together. There's no back-and-forth looking for files or assets, as everything they need is in the Penpot file.


Features

Fedora cares about excellent software. Its feature development is always done openly and transparently, and it encourages participation. Anyone can start working on any issue or as part of any team that interests them. Penpot shares this ethos. Anyone can collaborate! The code and a contributor guide are available on the project's Git repository.


First

Fedora adopts a strategy of advancing free software through consistent forward momentum. This approach usually follows a "release early, release often" workflow. Penpot also updates frequently. It publishes a Dev Diary blog to the community, highlighting the work that has been done. It states on its website, "We also have this sense of urgency, we need to act fast, there's too much at stake."

Wrap up

The project is coming close to completion, with the first deadline aligning with the release of Fedora Linux 38. Penpot has proven to be a valuable tool and is expanding the resources available to Open Source Design enthusiasts. With the platform celebrating its official launch recently, it's exciting to see what's next.

Penpot has changed the way our team works. What can it do for your organization and community?

This article has been adapted from the talk: Mock-ups and Motions—How the Fedora Design Team uses Penpot by Ashlyn Knox and Emma Kidney, given at the Creative Freedom Summit. A recording of the talk is available to watch on PeerTube.

Penpot is an open source design workspace for designers and developers.

Linux Art and design

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Comment from Kyle R. Conway (March 31, 2023):

Thanks for the write-up! Your talk at the Creative Freedom Summit convinced me I could move some co-workers to this and embrace it more. It's worked great!

6 steps to reduce the carbon footprint of your website

Wed, 03/29/2023 - 15:00
6 steps to reduce the carbon footprint of your website mike_gifford Wed, 03/29/2023 - 03:00

According to Mozilla, Information Communications Technology (ICT) is expected to emit more carbon by 2025 than any single country besides China, India, and the United States. We tend not to think of the physical scale of the internet, but it is a massive machine. It is critical that we consider the energy that is consumed to both run the internet and allow for its exponential growth.

Digital technology is estimated to use between 5% and 9% of global electricity today. This is particularly concerning because only about a quarter of our electricity comes from renewable sources, and demand for electrical infrastructure keeps growing as fossil fuels are phased out of consumer and industrial uses.

There are also carbon implications for building and disposing of digital devices. Electronics are not generally designed for longevity, repair, or recycling. Digital tools consume rare minerals and water, and e-waste is a growing problem.

In this article, I explore these and other aspects of web sustainability. While my focus is on Drupal, the general principles apply to most of the web, particularly to open source tools and to ways of leveraging the work of their communities. I also provide practical steps people can take to reduce the environmental footprint of their sites.

1. Servers, networks, and power

Most of us do not see the scale of the internet's physical infrastructure. It happens at the other end of a thin fiber optic network. We don't see the thousands of server racks within the huge climate-controlled warehouses that run our websites. We don't see the physical infrastructure behind each hop that our internet packets take as they travel at the speed of light to our laptops and mobile devices.

We generally only consider the devices in our homes and offices—those used daily and those at the back of our closets. Powering our devices typically results in CO2 emissions, but this is just one part of its physical impact on the real world. We need to think more about the effects of building these devices. We should look to extend the lives of our devices and see that the components can be effectively reused or recycled.

2. Loading web pages

Beyond the data centers and servers, we need to consider the costs of just using the web. We know that the median web page weight increased by 221% between 2012 and 2022. Many sites now depend on third-party JavaScript tools, which hurts performance. We've also seen many libraries become bloated with code that isn't actually used to deliver the content.
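As a quick worked example of what a 221% increase means (the 1,000 KB starting weight below is hypothetical, purely for illustration):

```python
# "Increased by 221%" means the new median is the old one plus 2.21 times it.
# The 1,000 KB starting figure is a hypothetical value for illustration.
old_kb = 1000
new_kb = old_kb * (1 + 2.21)  # roughly 3,210 KB -- more than triple
```

In other words, a page that once fit in a single megabyte now commonly weighs several.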

[ Also read 5 open source tips to reduce waste in web design ]

There have been real advances in modern media formats but slow adoption. Image formats like WebP and AVIF offer dramatic improvements, but we often just use a lower-resolution image instead. SVGs offer extensive enhancements in semantics and scalability but are rarely used to their full extent. The defaults are not set for performance, and people usually default to doing the easiest thing. This impacts customer experience but also has a significant impact on CO2 emissions.
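To make the emissions link concrete, here is a rough sketch of estimating transfer emissions from page weight. Both coefficients below are assumptions chosen for illustration only, not measured constants; real models such as the Sustainable Web Design model are far more nuanced:

```python
# Back-of-the-envelope sketch: page weight -> transfer emissions.
# Both coefficients are illustrative assumptions, not measured constants.
KWH_PER_GB = 0.8       # assumed energy intensity of transferring 1 GB
G_CO2_PER_KWH = 440    # assumed grid carbon intensity, grams per kWh

def page_co2_grams(page_mb: float, views: int) -> float:
    """Estimate grams of CO2 for serving a page `views` times."""
    gigabytes = page_mb * views / 1024
    return gigabytes * KWH_PER_GB * G_CO2_PER_KWH

# Halving a 2 MB page roughly halves its transfer emissions.
heavy = page_co2_grams(2.0, 100_000)
light = page_co2_grams(1.0, 100_000)
```

However rough the coefficients, the linear relationship is the point: smaller pages mean proportionally less energy and CO2 at scale.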

3. Computer usage

Another aspect worth considering is the human effort in using and building our digital tools. A faster web experience allows users to accomplish their tasks and move on quickly. Ensuring a website is performant is important, but it must also be optimized for a good user experience. Content must be available for everyone, regardless of whether they have a disability.

Building digital tools takes time, and complex tools often require more time to develop and maintain. Most modern websites include code that depends on several software libraries and multiple people or teams. Teams using open source software can be more efficient as they are not reinventing the wheel. A great example is the Drupal CMS, which drives over a million websites. Working with Drupal to deliver complex sites allows designers and developers time to focus on meeting customer needs rather than building basic form components. It is hard work to make interfaces simple, and we all benefit when our teams can stand on the shoulders of giants. It is important to remember that our time building digital tools also consumes energy and resources.

Digital teams that care about sustainability need to be conscious of the end use of the technology, not simply the environmental impact of the tool itself.

4. Government website contributions

Based on size alone, digital government initiatives have an outsized role in reducing CO2 contributions. Government sites are often slow to load, and it can be difficult to find what you need or finish the intended task. They are usually built with legacy proprietary tools that lack a common navigation or reliable search tool. We know that fossil fuels power most US Federal government websites.

The US Web Design System (USWDS) is an initiative to improve the performance and accessibility of government websites. However, more can be done, such as tracking site-wide performance through tools like Lighthouse Parade, which uses Google Lighthouse to evaluate if pages follow best practices.

The UK is leading digital government by highlighting the importance of sustainability and demonstrating how to do this effectively. The UK government has defined a strategic approach to sustainable ICT and provided practical guidance on what government agencies should do.

The UK's Department for Environment, Food & Rural Affairs describes how their work is aligned with the UN's Sustainable Development Goals. The Ministry of Defence also discusses how this fits in with their goals for a circular economy.

5. What about Drupal?

Drupal is just one of many content management systems (CMS). Drupal is open source and drives over a million websites, comprising 1-2% of the web. Drupal can easily manage hundreds of authors and complex permission systems. It is popular with government, education, and large business organizations. It also has been a leader in web accessibility for over a decade. It is also a versatile platform that can leverage a headless JavaScript presentation layer like GatsbyJS or Eleventy.

Drupal can be very performant, but this is often overlooked. Drupal sites leveraging GatsbyJS are usually very fast because it converts the content into static HTML. Drupal can also optimize images for both screen size and bytes. Drupal can even convert images that authors upload to more modern formats like AVIF and WebP. Content Delivery Networks (CDNs) can also help ensure that cached pages are served more quickly. CSS/JS aggregation has been incorporated into Core to improve performance, but many other elements require a site builder to set up before adding content.

6. Practical steps for governments to create more sustainable websites

Governments can do a lot to reduce the CO2 footprint of their websites. Using a CDN and leveraging a JavaScript front-end like Gatsby will improve performance. Government design systems like the USWDS should be established by default to share common CSS, JavaScript, and other design assets. Securely sharing optimized digital assets will mean citizens must download fewer files when accessing different sites.

Sites should be created to focus on user tasks. More than any other class of websites, government sites should be designed to be fast and functional. By prioritizing user experience, many steps in online processes can be eliminated, and users can more quickly exchange the required information with government agencies and service providers. Agencies should build on existing design systems to support tools like dark mode effectively. Dark mode is one means to reduce energy consumption by citizens using government sites and extend device battery life.

Having government sites hosted in green data centers powered by renewable energy is a huge plus. Government agencies should aim to support data centers that actively minimize adverse environmental impacts. Supporting companies that work to extend the life of their hardware and effectively reuse or recycle their components is key.

It is also important to effectively manage the back-end infrastructure. Database optimization can dramatically reduce load times, as can Redis or Memcached.
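The effect of a cache layer like Redis or Memcached can be sketched in a few lines of Python. Here a plain dict stands in for the cache and a call counter stands in for the database; both are hypothetical stand-ins for illustration, not a real Redis client:

```python
db_queries = 0  # counts how often the "database" is actually hit

def slow_db_lookup(key: str) -> str:
    """Stand-in for an expensive database query."""
    global db_queries
    db_queries += 1
    return f"value-for-{key}"

cache: dict[str, str] = {}  # stand-in for Redis/Memcached

def cached_lookup(key: str) -> str:
    # Serve from the cache when possible; only miss once per key.
    if key not in cache:
        cache[key] = slow_db_lookup(key)
    return cache[key]

# 1,000 requests for the same page data hit the database only once.
for _ in range(1000):
    cached_lookup("front-page")
```

The energy saved is the point: each avoided database query is CPU time, and therefore electricity, that is never spent.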

In development sprints, consider using tools like Ecograder and the Green Web Foundation to assess where there is room for improvement.

The Sustainable Web Manifesto and the Strategies section of have good resources worth considering. Podcasts like Environment Variables and Green I/O provide a wealth of information and help developers keep up-to-date with best practices.

While these might appear to be small steps, making our web more sustainable can be achieved through collective action and tangible changes. We must prioritize web sustainability and work together to create a more sustainable digital future.

We must remember that even small changes can have a huge impact when scaled up millions or billions of times.

Let's prioritize web sustainability and work together to create a more sustainable digital future.

Sustainability Web development Drupal

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Comment from Katie Riker (March 31, 2023):

I'm a UX designer, and the sustainability of web design is something that is not spoken about nearly enough in our world! With the feeling like computing is cheap and engineering can work miracles, we can sometimes find ourselves falling into a trap of using things like animations as a "this looks cool" rather than a purely usability choice. Awareness of how this additional compute power affects the environment is something we should talk more about. Thank you for sharing!

The open source way of raising a family

Wed, 03/29/2023 - 15:00
The open source way of raising a family rnetser1 Wed, 03/29/2023 - 03:00

As a rebellious teenager in the 80s, "because I said so" was a phrase I heard all too often at home. I wasn't really a rebel. I just wanted to be heard and seen as a person starting to articulate their thoughts and emotions.

The feeling I had of not being heard or listened to led me to believe that it's important to raise kids who are not afraid to speak up, but who can also learn to adapt. Listening to them and collaborating with kids can also help them be creative and, eventually, allow them to be part of a successful organization.

How children interpret information

I didn't intentionally encourage open behaviors for my children, nor did I actively tell my kids to speak up. Nevertheless, on my eldest son's first day of kindergarten, we were called by his teacher. She told us that when she went over the rules of not fighting, pushing, and so on with the class, my kid had raised his hand to tell her that his parents allowed him to dispute and debate all the time.

It was true. I let my boys express themselves and be heard. But my kid's words surprised me. Only then did I realize that while I'd been doing something right, the fact is that how my kids act shouldn't just be a reaction to the way we educate them. Nor should we leave it up to them to pick things from second-hand context. That day was a game changer for me, and practicing an open culture officially and proactively entered our home.

Open for growth

When I was young, parents and elders were the knowledgeable ones.

As a kid, it was not always easy to get information. While I was encouraged to express an opinion, it was only up to the point where it was a normative and "easy to digest" one. Today, parents and teachers and kids all have the same means to access data. What you do with it is entirely up to you.

For example, when playing Scattergories in Hebrew, the word "virus" is normally the only animal starting with "V" that most people play. To gain more points, we decided to find additional animal names that qualified. In no time, we found not one but three new animals (my kid's teacher argued that a "wallaby" (spelled with a "V" in Hebrew) was not a real thing, but that's another story).

I teach my kids to read between the lines and never to accept things presented to them as "facts" without question. This allows them to practice critical thinking. It also allows them to question me, which leads us to open and transparent discussions.

Are these discussions easy? No. Do I always have the energy to conduct them? Absolutely not.

However, to help them practice the learn-to-listen "muscle," these conversations are a must.

Occasionally, we have to force ourselves to find time to focus our attention on our family. It takes time to build a robust and open culture, and as people change, so does your family culture. You have to adapt and work to keep it alive. As leader of the pack, I have to provide my kids with a safe place, a place where they can openly share their ideas, a place where they feel they belong.


In a family, you have to collaborate and solve problems together. Listening to the different ideas and approaches to solving issues allows you to come up with creative (and yet not always to everyone's liking) solutions.

One issue in my home was the noise in the house when playing computer games. This happens mostly late at night and on the weekends. We sat down together and came up with an agreed-upon timeslot for noisy games. From then on, those who wanted to play knew when they could play, and those seeking some quiet time also knew when those times would happen. As kids grow up, the nature of the decisions and discussions change.

Does it mean that all decisions are shared with the kids? No. Does it mean that it's all roses? Absolutely not.

Encouraging kids to connect may end up with a broken vase for which no one seems to be accountable. It may lead "because I said so" to pop in for a visit. However, having challenging yet inclusive conversations, encouraging innovative thinking, and including kids in decisions are ways of preparing them for adulthood. Hopefully, it'll make them better people, too (so far, this is working well, in my humble opinion).

Open family culture

Practicing open culture is not a one-time thing. It's a journey, and it's a mindset. I believe it provides both my kids and me the tools to be resilient, open-minded, tolerant, and inquisitive both inside the house and out. Start an open culture with those closest to you, and take it with you everywhere you go.

Start an open culture with those closest to you, and take it with you everywhere you go.

Image by: Photo by Maksim Romashkin:…

Education

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Why you should use Python and Rust together

Tue, 03/28/2023 - 15:00
Why you should use Python and Rust together Moshe Zadka Tue, 03/28/2023 - 03:00

Python and Rust are very different languages, but they actually go together rather well. But before discussing how to combine Python with Rust, I want to introduce Rust itself. You've likely heard of the language but may not have heard details about how it works.

What is Rust?

Rust is a low-level language. This means that the things the programmers deal with are close to the way computers "really" work.

For example, integer types are defined by bit size and correspond to CPU-supported types. While it is tempting to say that this means a+b in Rust corresponds to one machine instruction, it does not mean quite that!

Rust's compiler chain is non-trivial, so as a first approximation it is useful to treat statements like that as only "kind of" true.
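For readers coming from Python, fixed-width integers can feel unfamiliar, since Python's ints are arbitrary precision. This small sketch (my own illustration, not from the article) simulates the wrapping behavior of a Rust u8 by masking to 8 bits:

```python
# Python ints are arbitrary precision, but Rust's fixed-width types can be
# simulated by masking to the type's bit size. This mirrors the behavior of
# Rust's u8::wrapping_add, where values wrap around past 255.
U8_MASK = 0xFF  # a u8 keeps only the low 8 bits

def wrapping_add_u8(a: int, b: int) -> int:
    return (a + b) & U8_MASK

wrapping_add_u8(200, 100)  # 300 wraps around to 44
```

In Rust, the CPU does this wrapping for free because u8 maps directly to a hardware-supported width; Python has to emulate it.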

Rust is designed for zero-cost abstraction, meaning many of the abstractions available at the language level are compiled away, leaving no runtime cost.

For example, objects are allocated on the stack unless heap allocation is explicitly requested. The result is that creating a local object in Rust has no runtime allocation cost (though initialization might).

Finally, Rust is a memory-safe language. There are other memory-safe languages and other zero-cost abstraction languages. Usually, those are different languages.

Memory safety does not mean it is impossible to have memory violations in Rust. It does mean that there are only two ways that memory violations can happen:

  • A bug in the compiler.
  • Code that's explicitly declared unsafe.

The Rust standard library has quite a bit of code marked unsafe, though less than many assume. That does not make the guarantee vacuous: with the (rare) exception of needing to write unsafe code yourself, memory violations can only result from the underlying infrastructure.

Why does Rust exist?

Why did people create Rust? What problem was not addressed by existing languages?

Rust was designed as a language to achieve a combination of high-performance code that is memory safe. This concern is increasingly important in a networked world.

The quintessential use case for Rust is low-level parsing of protocols. The data to be parsed often comes from untrusted sources and may need to be parsed in a performant way.

If this sounds like what a web browser does, it is no coincidence. Rust originated from the Mozilla Foundation as a way to improve the Firefox browser.

In the modern world, browsers are no longer the only things on which there is pressure to be safe and fast. Even the common microservice architecture, combined with defense-in-depth principles, must be able to unpack untrusted data quickly.

Counting characters

To understand a "wrapping in Rust" example, there needs to be a problem to solve. Not just any problem will do. The issue needs to be:

  • Easy enough to solve
  • Helped by the ability to write high-performance loops
  • Somewhat realistic

The toy problem in this case is whether a character appears more than X times in a string. This is something that's not easily amenable to performant regular expressions. Even dedicated Numpy code can be slower than necessary because often there is no need to scan the entire string.

You can imagine some combination of Python libraries and tricks that make this possible. However, the obvious algorithm is pretty fast if implemented in a low-level language and makes things more readable.

A twist is added to make the problem slightly more interesting and demonstrate some fun parts of Rust. The algorithm supports resetting the count on a newline (does the character appear more than X times in a line?) or on a space (does the character appear more than X times in a word?).

This is the only nod given to "realism." Any more realism will make the example not useful pedagogically.
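Before looking at the Rust version, the whole algorithm can be sketched in plain Python. The Reset enum and has_count names below mirror the Rust code that follows, but this sketch is my own, not part of the article's library:

```python
from enum import Enum

class Reset(Enum):
    NEWLINES_RESET = "newlines"
    SPACES_RESET = "spaces"
    NO_RESET = "none"

def has_count(what: str, min_number: int, reset: Reset, data: str) -> bool:
    """Return True if `what` appears at least `min_number` times between resets."""
    current = 0
    for c in data:
        # Optionally reset the running count on a newline or a space.
        if (c == "\n" and reset is Reset.NEWLINES_RESET) or \
           (c == " " and reset is Reset.SPACES_RESET):
            current = 0
        if c == what:
            current += 1
        # Stop early: no need to scan the rest of the string.
        if current >= min_number:
            return True
    return False
```

The early return is what makes the low-level version attractive: the obvious loop can stop as soon as the threshold is met, something a regular expression or a whole-array Numpy pass cannot easily do.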

Enum support

Rust supports enumeration (enums). You can do many interesting things with enums.

For now, a three-way enum without any other twirls is used. The enum encodes what character resets the count.

#[derive(Copy)]
enum Reset {
    NewlinesReset,
    SpacesReset,
    NoReset,
}

Struct support

The next Rust component is a bit more substantial: a struct. A Rust struct is somewhat close to a Python dataclass. Again, you can do more sophisticated things with a struct.

#[pyclass]
struct Counter {
    what: char,
    min_number: u64,
    reset: Reset,
}

Implementation blocks

You add a method to a struct in a separate block in Rust: the impl block. The details are outside the scope of this article.

In this example, the method calls an external function. This is mostly done to break up the code. A more sophisticated use would instruct the Rust compiler to inline the function to allow readability without any runtime cost.

#[pymethods]
impl Counter {
    #[new]
    fn new(what: char, min_number: u64, reset: Reset) -> Self {
        Counter{what: what, min_number: min_number, reset: reset}
    }

    fn has_count(&self, data: &str) -> bool {
        has_count(self, data.chars())
    }
}

Function

By default, Rust variables are constant. Because the current count has to change, it is declared as a mutable variable.

fn has_count(cntr: &Counter, chars: std::str::Chars) -> bool {
    let mut current_count: u64 = 0;
    for c in chars {
        if got_count(cntr, c, &mut current_count) {
            return true;
        }
    }
    false
}

The loop goes over the characters and calls the function got_count. Again, this is done to break the code into slides. It does show how to send a mutable borrow reference to a function.

Even though current_count is mutable, both the sending and receiving sites explicitly mark the reference as mutable. This makes it clear which functions might modify a value.


The got_count function resets the counter, increments it, and then checks it. Rust's semicolon-separated sequence of expressions evaluates to the result of the last expression, in this case, whether the threshold was met.

fn got_count(cntr: &Counter, c: char, current_count: &mut u64) -> bool {
    maybe_reset(cntr, c, current_count);
    maybe_incr(cntr, c, current_count);
    *current_count >= cntr.min_number
}

Reset code

The reset code shows another useful thing in Rust: matching. A complete description of the matching abilities in Rust would be a semester-level class, not two minutes in an unrelated talk, but this example matches a tuple against one of two options.

fn maybe_reset(cntr: &Counter, c: char, current_count: &mut u64) -> () {
    match (c, cntr.reset) {
        ('\n', Reset::NewlinesReset) | (' ', Reset::SpacesReset) => {
            *current_count = 0;
        }
        _ => {}
    };
}

Increment support

The increment compares the character to the desired one and, if matched, increments the count.

fn maybe_incr(cntr: &Counter, c: char, current_count: &mut u64) -> () {
    if c == cntr.what {
        *current_count += 1;
    };
}

Note that I optimized the code in this article for slides. It is not necessarily a best-practice example of Rust code or how to design a good API.

Wrap Rust code for Python

To wrap Rust code for Python, you can use PyO3. The PyO3 Rust "crate" (or library) allows inline hints for wrapping Rust code into Python, making it easier to modify both together.

Include PyO3 crate primitives

First, you must include the PyO3 crate primitives.

use pyo3::prelude::*;

Wrap enum

The enum needs to be wrapped. The derive clauses are necessary for wrapping the enum for PyO3, because they allow the class to be copied and cloned, making it easier to use from Python.

#[pyclass]
#[derive(Clone)]
#[derive(Copy)]
enum Reset { /* ... */ }

Wrap struct

The struct is similarly wrapped. These attributes are "macros" in Rust, which generate the needed interface bits.

#[pyclass]
struct Counter { /* ... */ }

Wrap impl

Wrapping the impl is more interesting. A constructor method called new is added and marked with #[new], letting PyO3 know how to expose a constructor for the built-in object.

#[pymethods]
impl Counter {
    #[new]
    fn new(what: char, min_number: u64, reset: Reset) -> Self {
        Counter{what: what, min_number: min_number, reset: reset}
    }
    /* ... */
}

Define module

Finally, define a function that initializes the module. This function has a specific signature, must be named the same as the module, and decorated with #[pymodule].

#[pymodule]
fn counter(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_class::<Counter>()?;
    m.add_class::<Reset>()?;
    Ok(())
}

The ? shows that this function can fail (for example, if the class was not appropriately configured). The PyResult is translated into a Python exception at import time.

Maturin develop

For quick checking, maturin develop builds and installs the library into the current virtual environment. This helps iterate quickly.

$ maturin develop

Maturin build

The maturin build command builds a manylinux wheel, which can be uploaded to PyPI. The wheel is specific to the CPU architecture.

Python library

Using the library from Python is the nice part. Nothing indicates a difference between this and writing the code in Python. One useful aspect of this is that if you optimize an existing library in Python that already has unit tests, you can use the Python unit tests for the Rust library.


Whether you used maturin develop or pip install to install it, importing the library is done with import.

import counter

Construct

The constructor was defined exactly so the object could be built from Python. This is not always the case. Sometimes objects are only returned from more sophisticated functions.

cntr = counter.Counter(
    'c',
    3,
    counter.Reset.NewlinesReset,
)

Call

The final pay-off is here at last. Check whether this string has at least three "c" characters:

>>> cntr.has_count("hello-c-c-c-goodbye")
True

Adding a newline causes the reset to happen, and there aren't three "c" characters without an intervening newline:

>>> cntr.has_count("hello-c-c-\nc-goodbye")
False

Using Rust and Python is easy

My goal is to convince you that combining Rust and Python is easy. I wrote little code to "glue" them. Rust and Python have complementary strengths and weaknesses.

Rust is great for high-performance, safe code. Rust has a steep learning curve and can be awkward for quickly prototyping a solution.

Python is easy to get started with and supports incredibly tight iteration loops. Python does have a "speed cap." Beyond a certain level it is harder to get better performance from Python.
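That speed cap is easy to see even without Rust. In this quick, self-contained sketch (my own illustration), a pure-Python counting loop is compared against str.count, which is implemented in C:

```python
import timeit

def count_loop(s: str, ch: str) -> int:
    """Pure-Python character count: every iteration runs in the interpreter."""
    n = 0
    for c in s:
        if c == ch:
            n += 1
    return n

s = "hello-c-c-c-goodbye" * 10_000

loop_time = timeit.timeit(lambda: count_loop(s, "c"), number=10)
builtin_time = timeit.timeit(lambda: s.count("c"), number=10)
# s.count is implemented in C, so it typically runs far faster than the
# interpreted loop -- the same kind of gap a Rust extension exploits.
```

A Rust extension gives you that C-like speed for loops that have no convenient built-in, which is exactly the counting-with-resets case above.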

Combining them is perfect. Prototype in Python and move performance bottlenecks to Rust.

With maturin, your development and deployment pipelines are easy to set up. Develop, build, and enjoy the combo!

Rust and Python have complementary strengths and weaknesses. Prototype in Python and move performance bottlenecks to Rust.


Python Rust Programming

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Use this open source accounting app to run your small business

Mon, 03/27/2023 - 15:00
Use this open source accounting app to run your small business Don Watkins Mon, 03/27/2023 - 03:00

GnuCash is a powerful and flexible accounting tool that can be used for small business invoicing and accounting. It has a number of features that make it particularly well-suited for this purpose, including the ability to track expenses and income, generate reports, and manage invoices. Additionally, GnuCash is free and open source, which makes it accessible to small businesses with limited resources. In this article, I discuss the features of GnuCash that make it easy for you to get started using it in your own small business.

I began using GnuCash a number of years ago for my personal finances, but found it could also function as a useful tool for my small business. I'd been using a proprietary solution for much of the life of my business, and I grew tired of being forced to upgrade periodically just to keep access to my invoices and statements. Moving to GnuCash gave me the ability to integrate my small business accounting with my personal finances without sacrificing any features.

Install GnuCash on Linux

You can install GnuCash from your software repository. On Debian, Elementary, and similar:

$ sudo apt install gnucash

On Fedora, CentOS, Mageia, and similar:

$ sudo dnf install gnucash

GnuCash for business

GnuCash comes with an account setup wizard that can help you build a common business account configuration. To access it:

  1. Start GnuCash.
  2. Click on the File menu and select New File.

Follow the GnuCash Assistant that appears on screen to create your new business account file.

The onscreen instructions guide you through the process of setting up your business. Click Next in the top right corner of the Assistant window. You're prompted to enter a company name, address, contact information, and a company ID of your own choosing. You must also choose a default tax table and a date format.

The next screen prompts you to choose the currency, and there are a large number of currencies supported.

Then you're prompted to choose the accounts you want to create. Select the option to create Business Accounts. You can always customize the list of accounts, and GnuCash provides copious documentation to help you better customize it to your individual needs.

Complete the assistant, then click Apply in the top right-hand corner of the GnuCash Assistant window.

Adding customers

The top menu of GnuCash has a menu item labeled Business. The first item on that menu is Customers, followed by Customers Overview. This is where you can view a list of all your customers.

The next item is New Customer. This is where you enter new customers. The dialog box provides a place for customer information, including billing information, shipping address, email address, telephone number, and more.

Create an invoice

After adding a customer, you can begin the process of creating invoices. Click on the Business menu, select Customer, and then New Invoice.

Payment processing is easy too. This is located in the Business menu. Select Customer, and then Process Payment.

You're in business

The Business menu also includes options for entering vendors and employees, should your business require that information. There's a menu item for sales tax and many other options to ensure you're compliant with local expectations.

With GnuCash, your data isn't stored in a proprietary format, so you can migrate to any other platform in the future if you need to. Open standards for data storage, especially when that data is a legal requirement, are important and allow you to have full possession of your business history. Using GnuCash puts you in control of your small business.

Keep track of customers and invoices with GnuCash.

Image by:

Alternatives Business

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Create a ChatBot in Mattermost with Python

Mon, 03/27/2023 - 15:00
Create a ChatBot in Mattermost with Python DrMJG Mon, 03/27/2023 - 03:00

ChatOps is a collaboration model that connects people, processes, tools, and automation into a transparent workflow. Mattermost is an open source, self-hosted messaging platform that enables organizations to communicate securely, effectively, and efficiently. It's a great open source alternative to Slack, Discord, and other proprietary messaging platforms. This article outlines the steps to create a ChatOps bot on Mattermost, including the necessary code examples and explanations.


Before starting, ensure that you have access to a Mattermost server, have Python installed, and have installed the Mattermost Python driver using pip.

Create a bot account on Mattermost

To create a bot account, access the Mattermost System Console, and add a bot account with appropriate access permissions. Retrieve the bot's username and password for use in the Python script.

Set up the Mattermost Python driver

Install the Mattermost Python driver using pip (pip install mattermostdriver), and import it in the Python script. Create a new Driver instance and log in to the Mattermost server.

Create the ChatOps bot in Python

Create a new Python script, define the necessary libraries to be imported, and implement the bot's functionality using the Mattermost driver's API. Write code to handle messages, commands, and other events, and use the Mattermost driver's API methods to send messages and notifications to channels and users. Finally, debug and test the ChatOps bot.

Example of ChatOps bot code

Here is an example Python code for a simple ChatOps bot that responds to a user's message:

from mattermostdriver import Driver

bot_username = 'bot_username'
bot_password = 'bot_password'
server_url = 'https://your.mattermost.server.url'

def main():
    driver = Driver({
        'url': server_url,
        'login_id': bot_username,
        'password': bot_password,
        'scheme': 'https'
    })
    driver.login()

    team = driver.teams.get_team_by_name('team_name')
    channel = driver.channels.get_channel_by_name(team['id'], 'channel_name')

    @driver.on('message')
    def handle_message(post, **kwargs):
        if post['message'] == 'hello':
            driver.posts.create_post({
                'channel_id': post['channel_id'],
                'message': 'Hi there!'
            })

    driver.init_websocket()

if __name__ == '__main__':
    main()

Add features

Once you've created a basic ChatOps bot on Mattermost, you can add more features to extend its functionality. Here are the steps:

  1. Determine the feature you want to add: Before writing the code, you must determine the feature to add to your ChatOps bot. This can be anything from sending notifications to integrating with third-party tools.

  2. Write the code: Once you have determined the feature you want to add, you can start writing the code. The code will depend on the feature you add, but you can use the Mattermost Python driver to interact with the Mattermost API and implement the feature.

  3. Test the code: After writing the code, it's important to test it to ensure that it works as expected. Before deploying it to your production server, you can test the code on a development server or in a test channel.

  4. Deploy the code: Once you have tested it and it works as expected, you can deploy it to your production server. Follow your organization's deployment process and ensure the new code doesn't break any existing functionality.

  5. Document the new feature: It's important to document the new feature you have added to your ChatOps bot. This will make it easier for other team members to use the bot and understand its capabilities.

One example of a ChatOps Bot feature could be integrating with a third-party tool and providing status updates on certain tasks.

from mattermostdriver import Driver
import requests

bot_username = 'bot_username'
bot_password = 'bot_password'
server_url = 'https://your.mattermost.server.url'

def main():
    driver = Driver({
        'url': server_url,
        'login_id': bot_username,
        'password': bot_password,
        'scheme': 'https'
    })
    driver.login()

    team = driver.teams.get_team_by_name('team_name')
    channel = driver.channels.get_channel_by_name(team['id'], 'channel_name')

    @driver.on('message')
    def handle_message(post, **kwargs):
        if post['message'] == 'status':
            # Make a request to the third-party tool API to get the status
            response = requests.get('')  # API endpoint URL omitted in the original
            if response.status_code == 200:
                status = response.json()['status']
                driver.posts.create_post({
                    'channel_id': post['channel_id'],
                    'message': f'The status is {status}'
                })
            else:
                driver.posts.create_post({
                    'channel_id': post['channel_id'],
                    'message': 'Failed to get status'
                })

    driver.init_websocket()

if __name__ == '__main__':
    main()

In this example, the ChatOps bot listens for the command "status" and makes a request to a third-party tool API to get the current status. It then posts the status update in the Mattermost channel where the command was issued. This allows team members to quickly get updates on the status of the task without having to leave the chat platform.
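If you keep adding commands this way, the message handler turns into a long chain of if statements. A common pattern is to map command words to handler functions instead. The sketch below uses only standard Python, with no Mattermost dependency; the handler names and reply strings are purely illustrative:

```python
# Map chat commands to handler functions; a real bot would call the
# matching handler from its message callback and post the return value.
def handle_hello(args):
    return "Hi there!"

def handle_status(args):
    # A real handler would query a third-party API here.
    return "The status is OK"

COMMANDS = {
    "hello": handle_hello,
    "status": handle_status,
}

def dispatch(message):
    """Split a chat message into command + args and run the handler."""
    parts = message.strip().split()
    if not parts or parts[0] not in COMMANDS:
        return None  # Not a bot command; ignore it.
    return COMMANDS[parts[0]](parts[1:])

print(dispatch("hello"))
```

With this structure, adding a new feature means writing one function and registering it in the dictionary, rather than editing the message handler itself.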

Open source ChatOps

In summary, creating a ChatOps bot on Mattermost is a simple process that can bring numerous benefits to your organization's communication and workflow. This article has provided a step-by-step breakdown and code examples to help you get started on creating your bot and even customize it by adding new features. Now that you know the basics, you can further explore ChatOps and Mattermost to optimize your team's collaboration and productivity.

Implement ChatOps in your organization with a simple open source bot.

Tags: Python, Tools, Alternatives. This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Open source tools for mind mapping

Fri, 03/24/2023 - 15:00
Open source tools for mind mapping Amrita42 Fri, 03/24/2023 - 03:00

In today's world of social media, many people don't have the patience to read lengthy textual content. Visuals are a great way to capture your audience's attention.

Did you know that research at 3M Corporation concluded that visuals are processed 60,000 times faster than text? Visuals are more impactful than words and enhance creative thinking and memory.

A picture is worth a thousand words

As an example, I mapped some common Git commands: Git commands form the main topic, and each sub-topic is a Git command with its syntax and a definition. For this, I used Wisemapping.

Image by: Amrita Sakthivel, CC BY-SA 4.0

Whether or not you knew what a mind map was before, now that you've seen one, you understand the concept. That's the power of visuals.

How do you create a mind map?
  1. Start with the main topic and place it in the middle of your drawing board.
  2. Create sub-topics and link them to the main topic.
  3. You can add details to each sub-topic, such as definitions, examples, etc.
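The three steps above amount to building a tree: a central topic, linked sub-topics, and optional details on each sub-topic. As a rough illustration (the data and function here are hypothetical, not part of any mind mapping tool), a nested Python dictionary can model that structure and render it as a plain-text outline:

```python
# Model a mind map as a nested dictionary: the main topic is the root,
# sub-topics are its children, and each sub-topic carries detail strings.
mind_map = {
    "Git commands": {
        "git clone": ["git clone <url>", "Copy a remote repository locally"],
        "git commit": ["git commit -m 'msg'", "Record staged changes"],
        "git push": ["git push origin main", "Upload commits to a remote"],
    }
}

def outline(tree, indent=0):
    """Render the mind map as an indented plain-text outline."""
    lines = []
    for topic, children in tree.items():
        lines.append("  " * indent + topic)
        if isinstance(children, dict):
            lines.extend(outline(children, indent + 1))
        else:
            for detail in children:
                lines.append("  " * (indent + 1) + detail)
    return lines

print("\n".join(outline(mind_map)))
```

A graphical tool like the ones below does the same thing visually, with the hierarchy drawn as branches instead of indentation.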
3 open source tools you can use to create a mind map

Take a look at these three open source tools to create a visual of your idea:

  1. Wisemapping
  2. Freeplane
  3. Semantik

Wikipedia defines a mind map as a diagram to visually organize information into a hierarchy, showing relationships among pieces of the whole. Mind mapping starts with a central theme and then builds relations. It is a visual way to structure thoughts and create impactful presentations.

You can use mind maps in your work. For example, I used a mind map to show a high-level overview of features planned for a project. With these excellent open source mind mapping applications, it's easy to get started visualizing your next project. Try mapping your mind with open source.

Use mind maps with open source tools to make an impactful presentation.

Tags: Tools, Art and design. This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Comment from Hamed Khosravi (March 24, 2023):

it's very helpful, thanks.

3 surprising things Linux sysadmins can do with systemd

Thu, 03/23/2023 - 15:00
3 surprising things Linux sysadmins can do with systemd alansmithee Thu, 03/23/2023 - 03:00

When systemd first started out, there was a lot of press about its ability to speed up boot time. That feature had mostly universal appeal (it matters less to those who rarely reboot), so in many ways, that's the reputation it still has today. And while it's true that systemd launches services in parallel during startup, there's a lot more to it than that. Here are three things you may not have realized systemd could do but should be taking advantage of. Get more tips from our new downloadable eBook, A pragmatic guide to systemd.

1. Simplify Linux ps

If you've ever used the ps or even just the top command, then you know that your computer is running hundreds of processes at any given moment. Sometimes, that's exactly the kind of information you need in order to understand what your computer, or its users, are up to. Other times, all you really need is a general overview.

The systemd-cgtop command provides a simple view of your computer's load based on the cgroups (control groups) tasks have been arranged into. Control groups are important to modern Linux, and are essentially the support structures underneath containers and Kubernetes (which in turn are why the cloud scales the way it does), but also they're useful constructs on your home PC. For instance, from the output of systemd-cgtop, you can see the load of your user processes as opposed to system processes:

Control Group               Proc+  %CPU   Memory  Input/s  Output/s
/                             183   5.0     1.6G       0B      3.0M
user.slice                      4   2.8     1.1G       0B    174.7K
user.slice/user-1000.slice      4   2.8   968.2M       0B    174.7K
system.slice                   65   2.2     1.5G       0B      2.8M

You can also count just your userspace processes (systemd-cgtop -P), or your userspace processes plus kernel threads (systemd-cgtop -k).

This isn't a replacement for top or ps by any means, but it's an additional view into your system from a different and unique angle. And it can be vital when running containers, because containers use cgroups.

2. Linux cron

Cron is a classic component of Linux. When you want to schedule something to happen on a regular basis, you use cron. It's reliable and pretty well integrated into your system.

The problem is, cron doesn't understand that some computers get shut down. If you have a cronjob scheduled for midnight, but you turn your computer off at 23:59 every day, then your cronjob never runs. There's no facility for cron to detect that there was a missed job overnight.

As an answer to that problem, there's the excellent anacron, but that's not quite as integrated as cron. There's a lot of setup you have to do to get anacron running.

A second alternative is systemd timers. Like cron, it's already built in and ready to go. You have to write a unit file, which is definitely more lines than a one-line crontab entry, but it's also pretty simple. For instance, here's a unit file to run an imaginary backup script 30 minutes after startup, but only once a day. This ensures that my computer gets backed up, and prevents it from trying to back up more than once daily.

[Unit]
Description=Backup
Requires=myBackup.service

[Timer]
OnBootSec=30min
OnUnitActiveSec=1d

[Install]
WantedBy=timers.target

You can, of course, intervene and start the backup job manually whenever you like. Thanks to the OnUnitActiveSec directive, systemd won't attempt to run the job again until a full day after it was last active, even when you've triggered it by hand.
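For this timer to do anything, there must also be a matching myBackup.service unit that actually runs the script. A minimal sketch, assuming your backup script lives at /usr/local/bin/mybackup.sh (a hypothetical path):

```ini
[Unit]
Description=Run the backup script

[Service]
# "oneshot" tells systemd the process runs to completion and exits,
# which is the right type for a scheduled job.
Type=oneshot
ExecStart=/usr/local/bin/mybackup.sh
```

You then enable the timer (not the service) with a command like systemctl enable --now myBackup.timer, assuming you saved the timer unit as myBackup.timer.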

3. Run Linux containers

Containers make starting up a complex service really easy. You can run a Mattermost or Discourse server in mere minutes. The hard part, in some cases, is managing and monitoring the containers once you have them running. Podman makes it easy to manage them, but what do you use to manage Podman? Well, you can use systemd.

Podman has a built-in command to generate unit files so your containers can be managed and monitored by systemd:

$ podman generate systemd --new --files --name example_pod

All you have to do then is start the service:

$ systemctl --user start pod-example_pod.service

As with any other service on your computer, systemd ensures that your pod runs no matter what. It logs problems, which you can view with journalctl (for example, journalctl --user -u pod-example_pod.service) along with your other essential logs, and you can monitor its activity within cgroups using systemd-cgtop.

It's no Kubernetes platform, but for one or two containers that you just want to have available on a reliable and predictable basis, Podman and systemd are an amazing pair.

Download the systemd eBook

There's a lot more to systemd, and you can learn the basics, along with lots of useful and pragmatic tips, from author David Both in his new complimentary pragmatic guide to systemd.

It's not just for making your computer boot faster. Download our new systemd eBook for Linux sysadmins for more tips.

Image by: Internet Archive Book Images, modified. CC BY-SA 4.0

Tags: Sysadmin, Linux, Containers. This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How to encourage positive online communication in your open source community

Thu, 03/23/2023 - 15:00
How to encourage positive online communication in your open source community ultimike Thu, 03/23/2023 - 03:00

Threaded online conversations are a relatively new form of communication that can improve knowledge transfer and availability, but they can also stray from the original intent. Online technical conversations in open source communities using Slack or one of the several open source alternatives experience these benefits and drawbacks.

Say a community member posts a question or shares an idea to start a conversation. As in any conversation, sometimes things can get off track. While not all diversions from the prompt are unhelpful, there are times when a comment can be unproductive—and sometimes even hurtful.

The Drupal community is like most other open source communities, in that we have many online conversations happening at any given time, in a variety of places. Sometimes, when a community member flags an online comment as hurtful, the Drupal Community Working Group (CWG) is asked to step in and mediate the situation. The CWG is responsible for maintaining the health of the community. Often, the solution is as simple as reminding the author of the comment of the Code of Conduct.

In 2020, the CWG began looking into how they could crowdsource this activity in a way that would be predictable and non-confrontational. The group decided to author several nudges: prewritten, formatted responses that community members could copy and paste into an online conversation to get conversations back on track.

The Drupal community currently has five different nudges depending on the situation. It is up to community members to select one from this list:

  • Inclusive language, gendered terms
  • Inclusive language, ableist terms
  • Gatekeeping knowledge
  • Cultural differences
  • Escalating emotions

For example, the inclusive language, ableist terms nudge contains this message:

This discussion appears to include the use of ableist language in a comment. Ableist language can be harmful to our community because it can devalue challenges experienced by people with disabilities.

For more information, please refer to Drupal’s Values and Principles about treating each other with dignity and respect.

This comment is provided as a service (currently being tested) of the Drupal Community Health Team as part of a project to encourage all participants to engage in positive discourse. For more information, please visit


Currently, using one of the nudges is a manual copy-paste process, but the group is discussing the possibility of providing tools for easier use. We provide both formatted (for forum and issue queues) and unformatted (Slack) versions of each nudge. The CWG is also working on adding a sixth nudge for unhelpful or inauthentic comments. This nudge is aimed at discouraging users who add comments to a thread solely to gain a contribution credit on the issue.
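A tool for easier use could be as simple as a lookup of canned responses keyed by situation, mirroring the five nudges listed above. This Python sketch is purely hypothetical; the message texts are placeholders, not the CWG's actual nudge wording:

```python
# Hypothetical nudge-picker: prewritten responses keyed by situation,
# modeled on the Drupal CWG's list of nudges. Texts are placeholders.
NUDGES = {
    "gendered-terms": "This discussion appears to include gendered language...",
    "ableist-terms": "This discussion appears to include ableist language...",
    "gatekeeping": "This discussion appears to include gatekeeping of knowledge...",
    "cultural-differences": "Please keep cultural differences in mind...",
    "escalating-emotions": "Emotions seem to be escalating in this thread...",
}

def pick_nudge(situation):
    """Return the prewritten nudge for a situation, or list the valid keys."""
    try:
        return NUDGES[situation]
    except KeyError:
        return "Unknown situation. Choose one of: " + ", ".join(sorted(NUDGES))

print(pick_nudge("ableist-terms"))
```

Keeping the texts in one shared place means a wording improvement to a nudge immediately benefits every community member who uses it.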

Over the past two years that nudges have been available, the CWG has not fielded any complaints related to their use. While the number of conflicts between community members escalated to the CWG has declined during this period, it is difficult to attribute this solely to nudges. Other efforts have been made to improve community health (not to mention outside factors). Nevertheless, the CWG feels that nudges have been a net positive to the community and continues to assess, improve, and encourage their use. In a blog post to the community announcing their general availability, the CWG wrote:

To continue to grow a healthy community, we all must work under the assumption that no one intentionally uses language to hurt others. Even so, despite our best efforts we sometimes still use words or phrases that are discouraging, harmful, or offensive to others. We are all human beings who make mistakes, but as members of a shared community, it's our responsibility to lift each other up and encourage the best in each other.

Prewritten nudges for various situations are useful prompts for members of any community to keep conversations productive and encouraging—and do so in a friendly way!

The Drupal community uses nudges to keep conversations productive and inclusive.

Tags: Drupal, Community management, Diversity and inclusion. This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

8 steps to refurbish an old computer with Linux

Wed, 03/22/2023 - 15:00
8 steps to refurbish an old computer with Linux howtech Wed, 03/22/2023 - 03:00

We live in a remarkable era. It wasn't so long ago we were all chained to the "upgrade treadmill," forced to buy expensive new personal computers every few years.

Today, with the benefit of open source software, you can break out of that cycle. One way is to refurbish old computers and keep them in service. This article tells you how.

1. Grab an old PC

Maybe you have an old computer lying unused in the basement or garage. Why not put it to use?

Or you can get an old machine from a friend, family member, or Craigslist ad. Many electronics recycling centers will let you poke around and take a discarded machine if one strikes your fancy. Be sure to grab more than one if you can, as you may need parts from a couple of abandoned PCs to build one good one.

Look at the stickers on the front of the machines to make sure you're selecting good refurbishing candidates. Items with Windows 7 and 8 logos run Linux quite well. Extended support for Windows 8.1 ended this January, so I'm seeing a lot of those machines getting dumped.

Many of these Windows computers offer perfectly good hardware. They're only being trashed due to planned obsolescence because they can't run Windows 11. They run open source software just fine.

2. Identify and clean everything

Before you open up your "new" machine to see what you've got, be sure to ground yourself by touching something metal. Even a shock so slight you don't feel it can destroy delicate circuitry.

You'll instantly see if any parts are missing. Many people take out their disks or sometimes the memory before recycling a computer. You'll either have to acquire more than a single box to cover this, or you'll need to buy a part or two to make it whole.

Before proceeding further, it's important to give the machine a thorough cleaning. Pay special attention to the CPU complex, the fans, and all surfaces. Remember that you can't rub electronics without risking damage, so use compressed air for cleaning.

3. Ensure all hardware works

You'll want to verify that all hardware works prior to installing any software. Don't skimp on the testing! It's a huge waste of your time to discover later that your computer has a transient memory error because you ran only a short RAM test before moving on to the next steps. I find it convenient to run time-consuming tests overnight.

Most computers have hardware-specific diagnostics built in. You usually access these either through the boot-time UEFI/BIOS panels or by pressing a PF key while booting. If your machine doesn't include testing tools, try Ultimate Boot Disk, which provides tons of useful testing utilities.

Be sure you test all components thoroughly:

  1. Memory
  2. Disk
  3. CPU and Motherboard
  4. Peripherals (USB ports, sound, microphone, keyboard, display, fans, etc)

If you find problems, download my free Quick Guide to Fixing Hardware. That plus some searching online enables you to fix just about anything.

4. Prepare the disk

You've assessed your hardware and have gotten it into good working order. If your computer came with a hard disk drive (HDD), the next step is to ready that for use.

You need to completely wipe the disk because it could contain illegally obtained movies, music, or software. To thoroughly wipe an HDD, run a tool like DBAN. After running that, you can rest assured the disk is completely clean.

If you have a solid state disk (SSD), the situation is a bit trickier. Disk-wipe programs designed to cleanse hard disks don't work with SSDs. You need a specialized secure erase program for an SSD.

Some computers come with a secure erase utility in their UEFI/BIOS. All you have to do is access the boot configuration panels to run it.

The other option is the website of the disk manufacturer. Many offer free downloads for secure erase utilities for their SSDs.

Unfortunately, some vendors don't provide a secure erase utility for some of their consumer drives, while others supply only a Windows executable. For an SSD, Parted Magic's secure erase function is the best option.

5. Booting, data storage, and backups

Your disk strategy for your refurbished computer must address three needs: booting, data storage, and backups.

A few years ago, if your refurbishing candidate contained a disk, it was always a hard drive. You'd wipe it with DBAN, then install your favorite Linux distribution, and use it as both your boot and storage device. Problem solved.

Today's technology offers better options. These eliminate the slow hard disk access that was previously one of the downsides of using older equipment.

One option is to buy one of the new low-end SSDs that have become available. These now offer the SATA and external USB interfaces that work with mature computers.

Prices have plummeted. I recently bought a 480 GB SSD/SATA drive for $25. That's so inexpensive that, even if your old computer came with a hard drive, you might prefer to buy a new SSD anyway. It boots and accesses data so much faster.

The lightweight 2.5" SSDs also solve the mounting dilemmas one sometimes faced with old desktops. With a single screw you can attach them almost anywhere. No more messing with rails, cages, and all the other goofy proprietary parts companies used to mount their heavy 3.5" hard drives.

An alternative to an SSD is to boot off a USB memory stick. Thumb drives now offer enough space to host any operating system you prefer, while leaving some storage space for your data. Beyond speed, you gain flexibility by keeping your system on a portable device.

So consider installing your operating system to a fast SSD or USB and booting and running it from that.

What about other drives? I like to use any hard drive that came with the computer as a backup disk for my boot SSD. Or employ it as mass storage.

I usually remove the optical drives found in old desktops. Since USB sticks are faster and hold more data, few people use optical discs anymore. Most now stream their films, music, and software instead of collecting them on optical media.

Removing the optical drive frees up an extra set of disk connectors. It also opens up lots of space in the cabinet and improves air flow. This can make a big difference if you're dealing with small footprint desktops with slimline or mini-tower cases.

Finally, take a few minutes to decide on your backup strategy. You'll need to back up two separate things: your data and the operating system.

Will you back up to a second drive inside the PC, a detachable storage device, or cloud services? Your decision helps determine whether you'll need a second disk in your refurbished computer.

6. Select and install software

Different people have different needs that drive their software selection. Here are some general guidelines.

If your computer has an Intel i-series processor and at least 4 GB of memory, it can comfortably run nearly any Linux distribution with any desktop environment (DE).

With between two and four gigabytes of memory, install a Linux with a lightweight interface. This is because high-end display graphics is a big consumer of memory resources. I've found that Linux distros with a DE like XFCE, LXDE, and LXQt work well.

If you only have a gigabyte of memory, go for an "ultra-light" Linux distribution. This should probably also be your choice if you have an old dual-core CPU or equivalent.

I've used both Puppy Linux and AntiX with great results on such minimal hardware. Both employ lightweight windows managers for their user interface instead of full desktop environments. And both come bundled with apps selected specifically to minimize resource use.
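The memory guidelines above are easy to capture as a rule of thumb. This small Python function is purely illustrative; the thresholds come straight from the advice in this section, not from any formal benchmark:

```python
def recommend_linux(ram_gb):
    """Suggest a Linux flavor for a refurbished PC based on installed RAM."""
    if ram_gb >= 4:
        # With an i-series CPU and 4 GB+, nearly anything runs comfortably.
        return "Any distribution, any desktop environment"
    if ram_gb >= 2:
        # High-end graphics eat memory, so use a lightweight interface.
        return "Distribution with a lightweight DE (XFCE, LXDE, LXQt)"
    # A single gigabyte (or an old dual-core CPU) calls for an ultra-light distro.
    return "Ultra-light distribution (e.g. Puppy Linux or AntiX)"

for ram in (1, 2, 8):
    print(ram, "GB:", recommend_linux(ram))
```

Treat the output as a starting point: CPU generation, disk speed, and your own workload matter too.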

7. Browse the web efficiently

Web pages have grown dramatically in the past five years. Over half the computing resources many popular websites require are now consumed by advertisements and trackers. So when web surfing, block all those ads and trackers. If you can off-load ad blocking from your browser to your VPN, that's ideal. And don't let auto-run videos play without your explicit permission.

Look around to see what browser works best for your equipment. Some are designed with a multi-threading philosophy, which is great if your PC can support it. Others try to minimize overall resource usage. Many people aren't aware that there are quite a few capable yet minimalist Linux browsers available. In the end, pick the browser that best matches both your equipment and your web surfing style.

8. Have fun

Whether you want to make use of an old computer sitting in your basement, help the environment by extending the computer life cycle, or just find a free computer, refurbishing is a worthy goal.

Anyone can succeed at this. Beyond investing your time, the cost is minimal. You're sure to learn a bit while having fun along the way. Please share your own refurbishing tips with everyone in the comments section.

A step-by-step guide to refurbishing an old computer to keep it in service.

Tags: Linux, Hardware, Sustainability. This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Why your open source project needs a content strategy

Wed, 03/22/2023 - 15:00
Why your open source project needs a content strategy emilyo Wed, 03/22/2023 - 03:00

If you search for content strategy in your favorite search engine, I bet you'll find it's a term more strongly associated with marketing content than with technical content. However, a technical content strategy is a powerful way to align stakeholders around content goals for your open source project. In this article, I explore the benefits of a technical content strategy and how having one can improve the user and contributor experience of your community projects.

When developing a content strategy, you should consider your goals. The goals differ depending on the user. For the marketing team, the goal of content strategy is to attract and connect with existing and potential customers by using content. Marketing content strategists aim to engage customers and develop relationships with the brand.

The goal of technical content strategists is to guide users with technical content that helps them achieve their goals. It should provide them with just enough information to successfully complete their task.

Creating a content strategy

So how do you create a content strategy that helps you achieve your goal? You can do this by having someone on your project take the role of content strategist. Their task is to document what user content is created, where it is published, how users can find it, and how it can be maintained, published, and retired. The content strategy should be available where contributors can find it easily.

Content types and publication locations

The first step to creating content is to get to know the project's audience. Identifying users is best done with all project stakeholders contributing, so there is a shared understanding of who the users are and what their goals are. A tip for open source content strategies is to consider your contributor personas as well as your end-user consumer personas.

A good content strategy is grounded in meeting the user's needs. The project's content should not tell users everything the content creator knows about something. The content should tell the user just enough to complete a task. When the personas are identified and documented, the strategist considers what types of content help these personas be successful. For example, can the user needs be met completely with microcopy in the user interface, or do they need more detailed documentation? Is the contributor onboarding workflow best demonstrated in a video or a blog with screenshots?

While considering what content types to create, the strategist also looks at where the content should be published so your personas can easily find it. The strategist needs to consider how content creators should progressively disclose information if it is not possible to keep the user in their context. For example, if the user is struggling to understand a log file, you can link them to more information on the project's documentation website.

The strategy should give guidance to inform decisions about what types of content might best solve the user's problem. The content creator should be challenged to ask themselves what content type best meets the user's needs in the moment. Do they need a new documentation article on the website? Could the user friction point be avoided with a clear error or log message, a better UI label, or other content type? You should make clear that the answer to a problem isn't always to create more content.

Content reviews and retirement

Now that you have a strategy for what types of content you want and where to publish them, you need to consider governance. The first part of this process is to decide what types of reviews your content requires before publishing. For example, does it require a content plan review, subject matter expert review, editorial review, peer author reviews, or copy reviews? You should also decide how reviews and approvals are tracked.

The second aspect of governance is to decide on a schedule for retirement or archival of content. The strategist should document how content is reviewed for retirement in the future. You should decide if content needs to be retired annually or before every new version release. You should also consider if the content needs to be accessible in some format for users on older versions.

If you are creating a content strategy for an existing project, the chances are high that your project already has some content. As part of the creation process, the content strategist should audit this content, and consider if it is still current and useful. If it is out of date, it should be retired or archived.
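One way to make an audit concrete is to flag documentation that has not been touched in a long time. The sketch below is a hypothetical example, assuming a directory tree of Markdown, AsciiDoc, or reStructuredText files and a one-year review threshold; a real project might prefer Git history over filesystem timestamps.

```python
import os
import time

STALE_AFTER_DAYS = 365  # review threshold; tune this for your project

def find_stale_docs(docs_dir, now=None):
    """Return paths of doc files not modified within the review threshold."""
    now = now or time.time()
    cutoff = now - STALE_AFTER_DAYS * 86400
    stale = []
    for root, _dirs, files in os.walk(docs_dir):
        for name in files:
            # Only consider common documentation source formats
            if not name.endswith((".md", ".adoc", ".rst")):
                continue
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return sorted(stale)
```

Running this against your docs directory gives the strategist a candidate list for review, retirement, or archival rather than a judgment on its own.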

A content strategy is beneficial for everyone

Now that you have a content strategy for your project, you should see how it benefits your users, contributors, and your project as a whole.

Project end users

At the heart of the content strategy is the audience. The strategy is centered on the personas interacting with the project. It considers how you can provide them with easily findable information in a consumable format that helps them complete their goals. End users benefit from a content experience that is built around their needs. It should also be self-service so they can solve problems independently.


Project contributors

Content consumers, just like end users, benefit from self-service content. New contributors to the project benefit from content designed to onboard them quickly and with ease. The experienced contributor persona gets content that helps them learn about new features of the project. They can also get help with more technically challenging areas. Contributor personas benefit from having accessible reference information describing the interfaces and features available for them to use, build on, and interact with the product or service.

The contributors to your project are also the people creating the content that your users consume. Content strategy can help them to understand and feel empathy for user personas, their goals, and use cases. Giving contributors a common understanding of the user's content needs and the types of content that satisfies them supports the creation of a consistent content experience.

Creating a strategy helps all content creators easily understand and align with the content vision. It keeps them focused on creating high-value content that reduces user friction.

Project

In an ideal world, your project would have all the resources needed to create the ideal content experience for your users as envisioned in your strategy. Unfortunately, we live in the real world with conflicting priorities and resource-constrained projects. The good news is that a user-centered content strategy gives the team a shared vision of the content experience. This strategy helps build a content foundation that the project can iterate on with each release. It also helps the team make more informed decisions about content.

Your project also benefits from accessible documentation that better serves your users. Your content experience helps users recognize and realize the value of what you have created.

Implement a content strategy

Your content strategy should be a living artifact, guiding content decisions for the project. With this in mind, it should be revisited frequently and tweaked to reflect what is working or not working for your users. Keeping it current enhances your content experience and improves its effectiveness in guiding your users to success.

I believe that the practice of content strategy should be more widely adopted in the technical world as it is a powerful tool. It can help you create a better experience for all of your users. The experience should consider each user's needs, workflow, pain points, and emotions. This helps projects deliver the right content in the right place at the right time.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

A 5-minute tour of the Fediverse

By murph, Tue, 03/21/2023

People want to communicate over the internet as easily as they do in real life, with similar protections but, potentially, farther reach. In other words, people want to be able to chat with a group of other people who aren't physically in the same location, and still maintain some control over who claims ownership of the conversation. In today's world, of course, a lot of companies have a lot to say about who owns the data you send back and forth over the world wide web. Most companies seem to feel they have the right to govern the way you communicate, how many people your message reaches, and so on. Open source, luckily, doesn't need to own your social life, and so appropriately it's open source developers who are delivering a social network that belongs, first and foremost, to you.

The "Fediverse" (a portmanteau of "federated" and "universe") is a collection of protocols, servers, and users. Together, these form networks that can communicate with one another. Users can exchange short messages, blog-style posts, music, and videos over these networks. Content you post is federated, meaning that once one network is aware of your content, it can pass that content to another network, which passes it to another, and so on.

Most platforms are run by a single company or organization, a single silo where your data is trapped. The only way to share with others is to have them join that service.

Federation allows users of different services to interoperate with one another without creating an account for each shared resource.

Admins for each service instance can block other instances in case of egregious issues. Users can likewise block users or entire instances to improve their own experience.

Examples of Fediverse platforms

Mastodon is a Fediverse platform that has gotten a lot of attention lately, and it's focused on microblogging (similar to Twitter). Mastodon is only one component of the Fediverse, though. There's much, much more.

  • Microblogging: Mastodon, Pleroma, Misskey
  • Blogging:
  • Video hosting: PeerTube
  • Audio hosting: Funkwhale
  • Image hosting: Pixelfed
  • Link aggregator: Lemmy
  • Event planning: Mobilizon
History of the Fediverse

In 2008, Evan Prodromou created a microblogging service called using the OStatus protocol and server software. A few years later, he changed his service to use a new protocol, called He released the OStatus protocol to the Free Software Foundation, where it got incorporated into GNU/social. In this form, the Fediverse continued along for several years.

In March 2016, Eugen Rochko (Gargron) created Mastodon, which used GNU/social with an interface similar to a popular Twitter interface called Tweetdeck. This gained some popularity.

Image by: Robert Martinez, CC BY-SA

In 2018, a new protocol called ActivityPub was accepted as a standardized protocol by the W3C. Most Fediverse platforms have adopted it. It was authored by Evan Prodromou, Christine Lemmer-Webber, and others, and it expanded upon the previous services to provide a better and more flexible protocol.
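To give a feel for how federated servers find each other: given a handle like `@alice@example.social`, a server typically performs a WebFinger lookup (RFC 7033) to discover the account's ActivityPub endpoints. The sketch below only builds the lookup URL; `alice` and `example.social` are placeholder names, not real accounts.

```python
def webfinger_url(handle):
    """Map a Fediverse handle such as '@alice@example.social' to the
    WebFinger lookup URL that the account's home server answers."""
    user, _, domain = handle.lstrip("@").partition("@")
    if not user or not domain:
        raise ValueError(f"not a user@domain handle: {handle!r}")
    # RFC 7033: query /.well-known/webfinger with an acct: resource URI
    return (f"https://{domain}/.well-known/webfinger"
            f"?resource=acct:{user}@{domain}")
```

Fetching that URL returns a JSON document whose links include the account's ActivityPub actor URL, which is how one instance starts federating content about another instance's user.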

What does the Fediverse look like?

The Fediverse, being made of any application using the ActivityPub protocol, is pretty diverse in appearance. As you might imagine, a microblogging platform has different requirements than a video sharing service.

It can be intimidating to wander into the great unknown, though. Here are some screenshots of my favorite federated services:

The Mastodon web client has a simplified view as well as an advanced view. The simplified default view shows a single column of the Home feed, with options on the right to view more.

Image by: Bob Murphy, CC BY-SA 4.0

The Advanced Web Interface, shown below, has the home timeline, local timeline, federated timeline, as well as a user's profile. When users first start, the easier one-column view is the default.

Image by: Bob Murphy, CC BY-SA 4.0

Pixelfed has an interface focused around displaying images and videos:

Image by: Bob Murphy, CC BY-SA 4.0

PeerTube is for sharing videos:

Image by: Bob Murphy, CC BY-SA 4.0

Mobilizon is an event planning site, with plans for Fediverse integration:

Image by: Bob Murphy, CC BY-SA 4.0

Switch to open source social

Ready to start? Check out for a nice video explanation and a subject-based way to find (self-selected) other users.

Go to for a comprehensive guide on how to get started, how to migrate your data, and more.

Mastodon has several great entry points:

For help deciding which instance to join (assuming you don't want to spin up your own just yet), visit

Are you a data nerd? Visit for stats, a monitoring service, and a data-driven look at the known Fediverse.

Get federated

The Fediverse is a way to use social media on your own terms, either by choosing an instance with a community that suits your needs or by running your own server and making it exactly the way you want. It avoids the advertising, algorithms, and other unpleasantries that plague many social networks.

If you are looking for a community that better suits your needs than the big silos, take a look: Mastodon and the Fediverse may be a good fit for you. Get federated today.

You can find me at on the Fediverse.


Assess security risks in your open source project with Scorecard

By snaveen, Tue, 03/21/2023

Software supply chain attacks are becoming increasingly common, and attackers are targeting vulnerabilities in dependencies early in the supply chain to amplify the impact of their attacks. Dependency security is very much in the spotlight. It’s important to stay informed about the software projects you rely upon. But when you’re a software developer, you’re likely using a lot of code from lots of different sources. It’s an intimidating prospect to try to keep up with all the code you include in your own project. That’s where the OpenSSF Scorecard comes in.

The OpenSSF’s Scorecard project is an automated tool that assesses a software project’s security practices and risks. According to a recent report by Sonatype, a Scorecard score was one of the best indicators of whether a project had known vulnerabilities. Adopting Scorecard is a great first step to understanding the reliability of the software you use and improving your software supply chain security.

Scorecard is a set of benchmarks that allows you to quickly assess the risk associated with a code project based on best security practices. The aggregated project score, ranging from 0 to 10, provides an indication of how seriously a project appears to take security. This is critical for identifying vulnerable points in your supply chain. A dependency that doesn’t meet your own internal security standards may be the weakest link in your software.
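To illustrate how per-check scores can roll up into a single 0 to 10 number, here is a simplified risk-weighted average. Scorecard weights each check by its risk level; the weights below match commonly documented values, but treat both the weights and check shapes here as an assumption for illustration, not a reimplementation of Scorecard's actual algorithm.

```python
# Illustrative risk weights (assumed values, not authoritative)
WEIGHTS = {"critical": 10, "high": 7.5, "medium": 5, "low": 2.5}

def aggregate_score(checks):
    """Compute a weighted mean of check scores.

    checks: list of (score, risk_level) tuples, where score is 0-10
    and risk_level is a key of WEIGHTS. Returns a 0-10 float, rounded
    to one decimal place.
    """
    total = sum(score * WEIGHTS[risk] for score, risk in checks)
    weight = sum(WEIGHTS[risk] for _, risk in checks)
    return round(total / weight, 1)
```

The practical takeaway: a failing critical-risk check (say, no branch protection) drags the aggregate down far more than a failing low-risk one, which is why two projects with the same number of passing checks can have very different overall scores.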

Examining the individual scores for each of the 19 different Scorecard metrics tells you whether a project’s maintainers follow the practices that are most important to you. Does the project require code review when contributors make changes? Are branches protected against unauthorized deletion or changes? Are dependencies pinned, so that compromised version updates cannot be pushed without review? The Scorecard’s granularity in scoring individual best practices is similar to a good restaurant review that answers the question, “do I want to eat here?” Moreover, Scorecard provides project maintainers with a to-do list of actionable steps to improve security.

Open Source Insights

You can use Scorecard to evaluate someone else’s software, or you can use it to improve your own.

To see a project’s score quickly, you can visit Open Source Insights. This site uses Scorecard data to report on the health of dependencies. For anything not covered on Open Source Insights, you can use the Scorecard command-line utility to scan any project on GitHub, or you can run Scorecard locally:

$ scorecard --local . --show-details --format json | jq .

You can run Scorecard on your Git server or on local development machines and trigger it to run with a Git hook.
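The JSON report is easy to post-process. As a sketch, the helper below pulls out the checks scoring under a chosen threshold, assuming the report layout current at the time of writing (a top-level `checks` list whose entries have `name` and `score` fields, with `-1` meaning an inconclusive check):

```python
import json

THRESHOLD = 5  # flag checks scoring below this; pick your own bar

def failing_checks(report_json, threshold=THRESHOLD):
    """Return sorted names of checks scoring below `threshold` from a
    `scorecard --format json` report string. Inconclusive checks
    (score -1) are skipped rather than flagged."""
    report = json.loads(report_json)
    return sorted(
        c["name"]
        for c in report.get("checks", [])
        if 0 <= c.get("score", -1) < threshold
    )
```

You could pipe `scorecard --local . --format json` into a script like this in a pre-push hook to fail fast when a check you care about regresses.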

GitHub Action

If your code is on GitHub, you can add the GitHub Scorecard Action to your repository. The GitHub Action runs a Scorecard scan after any repository change, so you get immediate feedback if a PR causes a regression in your project’s security. The results provide remediation tips and an indication of severity, enabling you to raise your score and secure your project.
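A minimal workflow along these lines is sketched below. The action versions, trigger, and permissions are assumptions based on the ossf/scorecard-action project's published usage; check its README for the currently recommended pins before copying.

```yaml
# .github/workflows/scorecard.yml -- sketch, verify pins against the
# ossf/scorecard-action README before use
name: Scorecard analysis
on:
  push:
    branches: [main]
permissions: read-all
jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # upload SARIF to code scanning
      id-token: write          # needed when publish_results is true
    steps:
      - uses: actions/checkout@v3
        with:
          persist-credentials: false
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
      - uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: results.sarif
```

With the SARIF upload step, any regressions surface directly in the repository's Security tab alongside other code-scanning alerts.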

Image by: Naveen Srinivasan, CC BY-SA 4.0

Scorecard API

The Scorecard API is a powerful tool that allows you to assess the rigor of a large number of open source projects quickly and easily. With this API, you can check the scores of over 1.25 million GitHub repositories that are scanned weekly. The API provides a wealth of information about the security practices of each project, allowing you to quickly identify vulnerabilities and take action to protect your software supply chain. This data can also be used to automate the process of judging software, making it easy to ensure that your software is always secure and up to date. Whether you’re a project owner or a consumer of open source software, the Scorecard API is an essential tool for ensuring the security and reliability of your code.
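For scripted access, the public API serves the weekly scan results per repository. This sketch only constructs the request URL, assuming the documented `api.securityscorecards.dev` endpoint shape; fetch it with any HTTP client:

```python
def scorecard_api_url(platform, org, repo):
    """Build the public Scorecard REST API URL for a repository,
    e.g. platform='github.com', org='ossf', repo='scorecard'."""
    return (
        "https://api.securityscorecards.dev/projects/"
        f"{platform}/{org}/{repo}"
    )
```

For example, `curl` against the URL for `github.com/ossf/scorecard` returns a JSON document with the aggregate score and per-check results from the latest weekly scan.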

When you’ve made progress in improving your score, don’t forget to add a badge to showcase your hard work.

The OpenSSF Scorecard is becoming widely adopted, and as one of its developers, I’m excited about the future. If you try it out, don’t hesitate to contact us through the contact section of the repository and share your feedback.

Join the Scorecard crowd

The Scorecard crowd is growing, and many users are already benefiting from the tool. According to Chris Aniszczyk, CTO of the Cloud Native Computing Foundation, “CNCF uses Scorecards in a variety of its projects to improve security practices across the cloud native ecosystem.”

OpenSSF Scorecard is an automated and practical tool that enables you to assess the security of open source software and take steps to improve your software supply chain security. It’s an essential tool for ensuring that the software you’re using is safe and reliable.
