
Build an interactive timeline in React with this open source tool

Karol Kozer | Wed, 11/23/2022

For several years, I worked in the TV online and video-on-demand (VOD) industry. While working on a scheduler web application, I realized that there were no good solutions for electronic program guides (EPG) and scheduling. Admittedly, this is a niche feature for most web developers, but it's a common requirement for TV applications. I've seen and analyzed a number of websites that have implemented their own EPG or timeline, and I often wondered why everyone seemed to be inventing their own solutions instead of working on a shared solution everyone could use. And that's when I started developing Planby.

Planby is a React (JavaScript) component to help you create schedules, timelines, and electronic program guides (EPG) for online TV and video-on-demand (VOD) services, music and sporting events, and more. Planby uses a custom virtual view, allowing you to operate on a lot of data, and present it to your viewers in a friendly and useful way.

Planby has a simple API that you can integrate with third-party UI libraries. The component's theme can be customized to fit your application's design.

Timeline performance

The most significant consideration when implementing a timeline feature is performance. You're potentially handling an essentially endless stream of data across many different channels. Applications can struggle with refreshing, moving, and scrolling. You want the user's interactions with the content to be fluid.

There's also the potential for poor design. Sometimes, an app implements an EPG timeline in the form of a list that you must scroll vertically, meaning you must click on buttons to move left and right through time, which quickly becomes tiring. What's more, sometimes customization options for interacting with an EPG (such as rating, choosing your favorite channels, reading right-to-left (RTL), and so on) aren't available at all, or when they are, they cause performance issues.

Another problem I often face is an app transferring too much data. When an app requests data while you scroll through the EPG, the timeline feels slow and can even crash.

What is Planby?

This is where Planby comes in. Planby is built from scratch, using React and Typescript and a minimal amount of resources. It uses a custom virtual view, allowing you to operate on vast amounts of data. It displays programs and channels to the user, and automatically positions all elements according to hours and assigned channels. When a resource contains no content, Planby calculates the positioning so the time slots are properly aligned.

Planby has a simple interface and includes all necessary features, such as a sidebar, the timeline itself, a pleasant layout, and live program refreshing. In addition, there's an optional feature allowing you to hide any element you don't want to include in the layout.

Planby has a simple API that allows you as the developer to implement your own items along with your user preferences. You can use Planby's theme to develop new features, or you can make custom styles to fit in with your chosen design. You can easily integrate with other features, like a calendar, rating options, a list of user favorites, scroll, "now" buttons, a recording schedule, catch-up content, and much more. What's more, you can add custom global styles, including right-to-left (RTL) functionality.

And best of all, it's released under the open source MIT license.

Try Planby

If you would like to try Planby, or just to learn more about it, visit the Git repository. There, I've got some examples of what's possible and you can read the documentation for the details. The package is also available with npm.
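
If you want to try it right away, installation is a single command. This sketch assumes the package is published on npm under the name planby, so double-check the exact name in the repository's README:

$ npm install planby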


Find bugs with the git bisect command

Dwayne McDaniel | Tue, 11/22/2022

Have you ever found a bug in code and needed to know when it was first introduced? Chances are, whoever committed the bug didn't declare it in their Git commit message. In some cases, it might have been present for weeks, months, or even years, meaning you would need to search through hundreds or thousands of commits to find when the problem was introduced. This is the problem that git bisect was built to solve!

The git bisect command is a powerful tool that quickly checks out a commit halfway between a known good state and a known bad state and then asks you to identify the commit as either good or bad. Then it repeats until you find the exact commit where the code in question was first introduced.

Image by: Martin Grandjean, CC BY-SA 4.0


This "mathmagical" tool works by leveraging the power of halving. No matter how many steps you need to get through, by looking at the halfway point and deciding if it is the new top or bottom of the list of commits, you can find any desired commit in a handful of steps. Even if you have 10,000 commits to hunt through, it only takes a maximum of 13 steps to find the first offending commit.

  1.  commit 1 bad <> commit 10,000 good => commit 5,000 is bad
  2.  commit 5,000 bad <> commit 10,000 good => commit 7,500 is good
  3.  commit 5,000 bad <> commit 7,500 good => commit 6,250 is good
  4.  commit 5,000 bad <> commit 6,250 good => commit 5,625 is bad
  5.  commit 5,625 bad <> commit 6,250 good => commit 5,938 is bad
  6.  commit 5,938 bad <> commit 6,250 good => commit 6,094 is good
  7.  commit 5,938 bad <> commit 6,094 good => commit 6,016 is bad
  8.  commit 6,016 bad <> commit 6,094 good => commit 6,055 is good
  9.  commit 6,016 bad <> commit 6,055 good => commit 6,036 is bad
  10.  commit 6,036 bad <> commit 6,055 good => commit 6,046 is bad
  11.  commit 6,046 bad <> commit 6,055 good => commit 6,050 is bad
  12.  commit 6,050 bad <> commit 6,055 good => commit 6,053 is bad
  13.  commit 6,053 bad <> commit 6,055 good => commit 6,054 is good

So, the first bad commit of the 10,000 is commit number 6,053. With git bisect, this would take a couple of minutes at maximum. I can't even imagine how long it would take to investigate by crawling through each commit one at a time.

Using Git bisect

Using the git bisect command is very straightforward:

$ git bisect start
$ git bisect bad        # Git assumes you mean HEAD by default
$ git bisect good <ref> # specify a tag or commit ID known to be good

Git checks out the commit in the middle and waits for you to declare either:

$ git bisect good
## or
$ git bisect bad

Then the bisect tool repeats the process, checking out the commit halfway between the remaining good and bad commits, until it identifies the first bad commit. When you're finished, return to where you started by telling it:

$ git bisect reset

Advanced users can even write scripts that determine good and bad states as well as any remediation actions to take upon finding the specific commit. You might not use the git bisect command every day of your life, but when you need it, it is a lifesaver.
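
For example, here's a rough sketch of that automated workflow using git bisect run. The test command (make test below) is only a placeholder for whatever command exits with 0 on a good commit and non-zero on a bad one, and v2.0 stands in for any ref you already know is good:

$ git bisect start HEAD v2.0   # bad ref first, then good ref
$ git bisect run make test     # Git bisects on its own, using the exit code
$ git bisect reset             # return to where you started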


Introducing Rust calls to C library functions

Marty Kalin | Tue, 11/22/2022

Why call C functions from Rust? The short answer is software libraries. A longer answer touches on where C stands among programming languages in general and towards Rust in particular. C, C++, and Rust are systems languages, which give programmers access to machine-level data types and operations. Among these three systems languages, C remains the dominant one. The kernels of modern operating systems are written mainly in C, with assembly language accounting for the rest. The standard system libraries for input and output, number crunching, cryptography, security, networking, internationalization, string processing, memory management, and more, are likewise written mostly in C. These libraries represent a vast infrastructure for applications written in any other language. Rust is well along the way to providing fine libraries of its own, but C libraries—​around since the 1970s and still growing—​are a resource not to be ignored. Finally, C is still the lingua franca among programming languages: most languages can talk to C and, through C, to any other language that does so.

Two proof-of-concept examples

Rust has an FFI (Foreign Function Interface) that supports calls to C functions. An issue for any FFI is whether the calling language covers the data types in the called language. For example, ctypes is an FFI for calls from Python into C, but Python doesn't cover the unsigned integer types available in C. As a result, ctypes must resort to workarounds.

By contrast, Rust covers all the primitive (that is, machine-level) types in C. For example, the Rust i32 type matches the C int type. C specifies only that the char type must be one byte in size and other types, such as int, must be at least this size; but nowadays every reasonable C compiler supports a four-byte int, an eight-byte double (in Rust, the f64 type), and so on.

There is another challenge for an FFI directed at C: Can the FFI handle C's raw pointers, including pointers to arrays that count as strings in C? C does not have a string type, but rather implements strings as character arrays with a non-printing terminating character, the null terminator of legend. By contrast, Rust has two string types: String and &str (string slice). The question, then, is whether the Rust FFI can transform a C string into a Rust one—​and the answer is yes.

Pointers to structures also are common in C. The reason is efficiency. By default, a C structure is passed by value (that is, by a byte-per-byte copy) when a structure is either an argument passed to a function or a value returned from one. C structures, like their Rust counterparts, can include arrays and nest other structures and so be arbitrarily large in size. Best practice in either language is to pass and return structures by reference, that is, by passing or returning the structure's address rather than a copy of the structure. Once again, the Rust FFI is up to the task of handling C pointers to structures, which are common in C libraries.

The first code example focuses on calls to relatively simple C library functions such as abs (absolute value) and sqrt (square root). These functions take non-pointer scalar arguments and return a non-pointer scalar value. The second code example, which covers strings and pointers to structures, introduces the bindgen utility, which generates Rust code from C interface (header) files such as math.h and time.h. C header files specify the calling syntax for C functions and define structures used in such calls. The two code examples are available on my homepage.

Calling relatively simple C functions

The first code example has four Rust calls to C functions in the standard mathematics library: one call apiece to abs (absolute value) and pow (exponentiation), and two calls to sqrt (square root). The program can be built directly with the rustc compiler, or more conveniently with the cargo build command:

use std::os::raw::c_int;    // 32 bits
use std::os::raw::c_double; // 64 bits

// Import three functions from the standard library libc.
// Here are the Rust declarations for the C functions:
extern "C" {
    fn abs(num: c_int) -> c_int;
    fn sqrt(num: c_double) -> c_double;
    fn pow(num: c_double, power: c_double) -> c_double;
}

fn main() {
    let x: i32 = -123;
    println!("\nAbsolute value of {x}: {}.",
             unsafe { abs(x) });

    let n: f64 = 9.0;
    let p: f64 = 3.0;
    println!("\n{n} raised to {p}: {}.",
             unsafe { pow(n, p) });

    let mut y: f64 = 64.0;
    println!("\nSquare root of {y}: {}.",
             unsafe { sqrt(y) });
    y = -3.14;
    println!("\nSquare root of {y}: {}.",
             unsafe { sqrt(y) }); //** NaN = NotaNumber
}
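
For reference, building and running might look like this. This is just a sketch that assumes the listing above is the src/main.rs of a Cargo project; you could equally pass the file straight to rustc:

$ cargo build
$ cargo run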

The two use declarations at the top are for the Rust data types c_int and c_double, which match the C types int and double, respectively. The standard Rust module std::os::raw defines fourteen such types for C compatibility. The module std::ffi has the same fourteen type definitions together with support for strings.

The extern "C" block above the main function then declares the three C library functions called in the main function below. Each call uses the standard C function's name, but each call must occur within an unsafe block. As every programmer new to Rust discovers, the Rust compiler enforces memory safety with a vengeance. Other languages (in particular, C and C++) do not make the same guarantees. The unsafe block thus says: Rust takes no responsibility for whatever unsafe operations may occur in the external call.

The first program's output is:

Absolute value of -123: 123.
9 raised to 3: 729
Square root of 64: 8.
Square root of -3.14: NaN.

In the last output line, the NaN stands for Not a Number: the C sqrt library function expects a non-negative value as its argument, which means that the argument -3.14 generates NaN as the returned value.

Calling C functions involving pointers

C library functions in security, networking, string processing, memory management, and other areas regularly use pointers for efficiency. For example, the library function asctime (time as an ASCII string) expects a pointer to a structure as its single argument. A Rust call to a C function such as asctime is thus trickier than a call to sqrt, which involves neither pointers nor structures.

The C structure for the asctime function call is of type struct tm. A pointer to such a structure also is passed to library function mktime (make a time value). The structure breaks a time into units such as the year, the month, the hour, and so forth. The structure's fields are of type time_t, an alias for either int (32 bits) or long (64 bits). The two library functions combine these broken-apart time pieces into a single value: asctime returns a string representation of the time, whereas mktime returns a time_t value that represents the number of elapsed seconds since the epoch, which is a time relative to which a system's clock and timestamp are determined. Typical epoch settings are January 1 00:00:00 (zero hours, minutes, and seconds) of either 1900 or 1970.

The C program below calls asctime and mktime, and uses another library function strftime to convert the mktime returned value into a formatted string. This program acts as a warm-up for the Rust version:

#include <stdio.h>
#include <time.h>

int main () {
  struct tm sometime;  /* time broken out in detail */
  char buffer[80];
  int utc;

  sometime.tm_sec = 1;
  sometime.tm_min = 1;
  sometime.tm_hour = 1;
  sometime.tm_mday = 1;
  sometime.tm_mon = 1;
  sometime.tm_year = 1;
  sometime.tm_hour = 1;
  sometime.tm_wday = 1;
  sometime.tm_yday = 1;

  printf("Date and time: %s\n", asctime(&sometime));

  utc = mktime(&sometime);
  if( utc < 0 ) {
    fprintf(stderr, "Error: unable to make time using mktime\n");
  } else {
    printf("The integer value returned: %d\n", utc);
    strftime(buffer, sizeof(buffer), "%c", &sometime);
    printf("A more readable version: %s\n", buffer);
  }

  return 0;
}

The program outputs:

Date and time: Fri Feb  1 01:01:01 1901
The integer value returned: 2120218157
A more readable version: Fri Feb  1 01:01:01 1901

In summary, the Rust calls to library functions asctime and mktime must deal with two issues:

  • Passing a raw pointer as the single argument to each library function.

  • Converting the C string returned from asctime into a Rust string.

Rust calls to asctime and mktime

The bindgen utility generates Rust support code from C header files such as math.h and time.h. In this example, a simplified version of time.h will do but with two changes from the original:

  • The built-in type int is used instead of the alias type time_t. The bindgen utility can handle the time_t type but generates some distracting warnings along the way because time_t does not follow Rust naming conventions: in time_t an underscore separates the t at the end from the time that comes first; Rust would prefer a CamelCase name such as TimeT.

  • The struct tm type is given StructTM as an alias for the same reason.

Here is the simplified header file with declarations for mktime and asctime at the bottom:

typedef struct tm {
    int tm_sec;    /* seconds */
    int tm_min;    /* minutes */
    int tm_hour;   /* hours */
    int tm_mday;   /* day of the month */
    int tm_mon;    /* month */
    int tm_year;   /* year */
    int tm_wday;   /* day of the week */
    int tm_yday;   /* day in the year */
    int tm_isdst;  /* daylight saving time */
} StructTM;

extern int mktime(StructTM*);
extern char* asctime(StructTM*);

With bindgen installed, % as the command-line prompt, and mytime.h as the header file above, the following command generates the required Rust code and saves it in the file mytime.rs:

% bindgen mytime.h > mytime.rs
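
If the bindgen command isn't available on your system yet, it's distributed as a Rust crate. One way to get it, assuming a recent release in which the command-line tool lives in the bindgen-cli crate, is:

% cargo install bindgen-cli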

Here is the relevant part of mytime.rs:

/* automatically generated by rust-bindgen 0.61.0 */

#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct tm {
    pub tm_sec: ::std::os::raw::c_int,
    pub tm_min: ::std::os::raw::c_int,
    pub tm_hour: ::std::os::raw::c_int,
    pub tm_mday: ::std::os::raw::c_int,
    pub tm_mon: ::std::os::raw::c_int,
    pub tm_year: ::std::os::raw::c_int,
    pub tm_wday: ::std::os::raw::c_int,
    pub tm_yday: ::std::os::raw::c_int,
    pub tm_isdst: ::std::os::raw::c_int,
}

pub type StructTM = tm;

extern "C" {
    pub fn mktime(arg1: *mut StructTM) -> ::std::os::raw::c_int;
}

extern "C" {
    pub fn asctime(arg1: *mut StructTM) -> *mut ::std::os::raw::c_char;
}

#[test]
fn bindgen_test_layout_tm() {
    const UNINIT: ::std::mem::MaybeUninit<tm> =
       ::std::mem::MaybeUninit::uninit();
    let ptr = UNINIT.as_ptr();
    assert_eq!(
        ::std::mem::size_of::<tm>(),
        36usize,
        concat!("Size of: ", stringify!(tm))
    );
    ...

The Rust structure struct tm, like the C original, contains nine 4-byte integer fields. The field names are the same in C and Rust. The extern "C" blocks declare the library functions asctime and mktime as taking one argument apiece, a raw pointer to a mutable StructTM instance. (The library functions may mutate the structure via the pointer passed as an argument.)

The remaining code, under the #[test] attribute, tests the layout of the Rust version of the time structure. The test can be run with the cargo test command. At issue is that C does not specify how the compiler must lay out the fields of a structure. For example, the C struct tm starts out with the field tm_sec for the second; but C does not require that the compiled version has this field as the first. In any case, the Rust tests should succeed and the Rust calls to the library functions should work as expected.

Getting the second example up and running

The code generated from bindgen does not include a main function and, therefore, is a natural module. Below is the main function with the StructTM initialization and the calls to asctime and mktime:

mod mytime;
use mytime::*;
use std::ffi::CStr;

fn main() {
    let mut sometime  = StructTM {
        tm_year: 1,
        tm_mon: 1,
        tm_mday: 1,
        tm_hour: 1,
        tm_min: 1,
        tm_sec: 1,
        tm_isdst: -1,
        tm_wday: 1,
        tm_yday: 1
    };

    unsafe {
        let c_ptr = &mut sometime; // raw pointer

        // make the call, convert and then own
        // the returned C string
        let char_ptr = asctime(c_ptr);
        let c_str = CStr::from_ptr(char_ptr);
        println!("{:#?}", c_str.to_str());

        let utc = mktime(c_ptr);
        println!("{}", utc);
    }
}

The Rust code can be compiled (using either rustc directly or cargo) and then run. The output is:

Ok(
    "Mon Feb  1 01:01:01 1901\n",
)
2120218157

The calls to the C functions asctime and mktime again must occur inside an unsafe block, as the Rust compiler cannot be held responsible for any memory-safety mischief in these external functions. For the record, asctime and mktime are well behaved. In the calls to both functions, the argument is the pointer c_ptr, which holds the (stack) address of the sometime structure.

The call to asctime is the trickier of the two calls because this function returns a pointer to a C char, the character M in Mon of the text output. Yet the Rust compiler does not know where the C string (the null-terminated array of char) is stored. In the static area of memory? On the heap? The array used by the asctime function to store the text representation of the time is, in fact, in the static area of memory. In any case, the C-to-Rust string conversion is done in two steps to avoid compile-time errors:

  1. The call CStr::from_ptr(char_ptr) wraps the raw pointer in a CStr, a borrowed view of the null-terminated C string, and stores that reference in the c_str variable.

  2. The call c_str.to_str() then converts the CStr into a Rust &str, checking along the way that the bytes are valid UTF-8.

The Rust code does not generate a human-readable version of the integer value returned from mktime, which is left as an exercise for the interested. The Rust crate chrono supports strftime-style formatting, which can be used like the C function of the same name to get a text representation of the time.

Calling C with FFI and bindgen

The Rust FFI and the bindgen utility are well designed for making Rust calls out to C libraries, whether standard or third-party. Rust talks readily to C and thereby to any other language that talks to C. For calling relatively simple library functions such as sqrt, the Rust FFI is straightforward because Rust's primitive data types cover their C counterparts.

For more complicated interchanges—​in particular, Rust calls to C library functions such as asctime and mktime that involve structures and pointers—​the bindgen utility is the way to go. This utility generates the support code together with appropriate tests. Of course, the Rust compiler cannot assume that C code measures up to Rust standards when it comes to memory safety; hence, calls from Rust to C must occur in unsafe blocks.


Learn Git: 3 commands to level up your skill

Dwayne McDaniel | Mon, 11/21/2022

When I talk to people about Git, almost everyone has a strong reaction to the git rebase command. This command has caused many people to change directory, remove the repository, and just re-clone to start over. I think this comes from misconceptions about how branching actually works, a pretty terrible default interface, and some merge conflicts mucking things up.

Git squash

If you've ever made a lot of commits locally and wish there was a way to smash them all down into a single commit, you're in luck. Git calls this concept "squashing commits." I discovered the concept while working on documentation. It took me over a dozen commits to finally get a bit of markdown just right. The repo maintainer didn't want to see all my attempts cluttering up the project's history, so I was told to "just git squash your commits."

Squashing sounded like a solid plan. There was just one issue. I didn't know how to do it. As someone new to Git, I did what anyone would do. I consulted the manual for squash and immediately hit a snag:

$ man git-squash
> No manual entry for git-squash

It turns out I wasn't being told to run a Git command called squash, I was being asked to run an entirely separate command that would, in the end, combine all my commits into one. This is a common scenario: someone who has been using a tool for a while uses jargon or refers to a concept, which to them is absolutely clear, but isn't obvious to someone new to the tech.

Conceptually it would look like this:

Image by: Photos by Dan Burton on Unsplash

I'm laying it out this way to encourage you that you are definitely not the first or last person that would be confused by Git or someone talking about Git. It's OK to ask for clarification and for help finding the right documentation. What that docs maintainer actually meant was, "use Git rebase to squash the commits into one."
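
In practice, that means an interactive rebase over the commits you want to combine. Here's a rough sketch; the count of 12 is a placeholder for however many commits you want to reach back:

$ git rebase -i HEAD~12

In the editor that opens, leave the first commit marked as pick, change the rest to squash (or s), then save and close. Git combines them into a single commit and lets you edit the resulting commit message.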

Git rebase

The git rebase command detaches a chain of commits from its first parent and places it at the end of another chain of commits, combining both chains into one long chain instead of two parallel chains. I realize that's a dense statement.

If you think back to how Git commits are chained together, you can see that any branch aside from your initial main branch has a parent commit that serves as the "base" of that chain. Rebasing is the act of making the last commit in another chain the new "base" commit for a specified branch.

You might already be more familiar with Git merge. Take a look at how the git-scm.com site explains the difference:

Image by: Git-scm.com, CC BY-SA 3.0

In this example merge, Git preserves the chain of commits shown in the image as C4, which has a parent of C2, and combines its changes with C3 to make a whole new commit, C5. The branch pointer for "experiment" still exists and still points at C4.

The rebase in this example shows a similar situation: C4 first exists as a separate branch with a parent of C2. But then, instead of merging with the code of C3, it makes C3 the new parent of C4, resulting in a new commit called C4'. Notably, the branch pointer "main" has not moved yet. To make Git move the pointer to the end of the chain, currently pointed at by "experiment", you also need to perform a merge.

Rebase is not a replacement for merge. It's a tool for making cleaner histories to be used in conjunction with merge.
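
Using the branch names from the example above, a typical cleanup might look like this sketch: rebase the experiment branch onto main, then fast-forward main with a merge:

$ git switch experiment
$ git rebase main        # replay experiment's commits on top of main
$ git switch main
$ git merge experiment   # fast-forward: main now points to the end of the chain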

Interactive rebase is your best friend!

One of the scariest parts of performing a rebase from the command line is the horrifying interface. Running the command git rebase either works or blows up. There's not a lot of feedback or any way to ensure it's doing precisely what you want. Fortunately, the rebase command and many other Git commands have an interactive mode, which you can invoke with the -i or --interactive option.

Image by: Dwayne McDaniel, CC BY-SA 4.0

When invoking interactive mode, rebase transforms from a terrifying black box into a menu of options that let you do several things to the chain of commits you are rebasing. For every commit, you can choose to:

  • Pick: Include it as is

  • Reword: Rewrite the commit message

  • Edit: Make further changes to the files in the commit before the rebase finishes

  • Squash: Smash multiple commits into one commit, keeping all commit messages

  • Fixup: Smash multiple commits into one commit, but just keep the last commit message

  • Drop: Discard this commit
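
When the editor opens, the to-do list looks something like this hypothetical example (the commit IDs and messages are made up). Edit the first word of each line, save, and close, and Git applies your choices from top to bottom:

pick a1b2c3d Add draft of the install guide
reword f4e5d6c Fix typos in the install guide
squash 9c8b7a6 More typo fixes
fixup 1d2e3f4 Still more typo fixes
drop 5a6b7c8 Temporary debugging commit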

I personally like the way that the open source GitLens extension for VS Code lays out the options with dropdown picklists, but Git lets you assign these options using any editor. For text-only tools like Emacs or Vim, you need to type out the selection rather than pick from a menu, but the end result is still the same.

When to rebase

Knowing when to rebase is as important as knowing how to rebase. In truth, if you don't care about your repos' histories being a bit messy, then you might never perform a rebase. But if you do want to make cleaner histories and have fewer commits cluttering up your graph view, then there is one clear rule of thumb to always keep in mind:

"Do not rebase commits that exist outside your repository and that people may have based work on."

If you follow that guideline, you'll be fine.

Simply put, if you make a local branch to do your work, feel free to rebase it all you want. But as soon as that branch has been pushed, do not rebase it. Beyond that, it's really up to you.

Hopefully you found this helpful in understanding how the git rebase command works and can use it with more confidence. As with any Git command, practice is the only real way to learn and understand what is going on. I encourage you to be brave and experiment with interactive rebase!

Git cherry-pick

Most developers have committed work only to realize they have been working on the wrong branch. Ideally, they could just pick up that one commit and move it over to the right branch. That is exactly what git cherry-pick does.

Cherry-picking is the art of rebasing single commits. This pattern was so common that it was given its own command.

Image by: Crossroadsphotototeam, CC BY-SA 2.0

To perform a cherry pick, you simply tell Git the ID of the commit you want to move to "here", where HEAD is pointing:

$ git cherry-pick <target-ref>

Should something go wrong, it's straightforward to recover, thanks to the error messages that Git provides:

$ git cherry-pick 2bc01cd
Auto-merging README.md
CONFLICT (content): Merge conflict in README.md
error: could not apply 2bc01cd... added EOF lines
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git cherry-pick --continue".
hint: You can instead skip this commit with "git cherry-pick --skip".
hint: To abort and get back to the state before "git cherry-pick",
hint: run "git cherry-pick --abort".
$ git cherry-pick --abort

Git more power

The git rebase command is a powerful part of the Git utility. It's probably best to practice using it with a test repo before you have to use it under stress, but once you're familiar with its concepts and workflow, you can help provide a clear history of a repository's development.


7 Git tips for technical writers

Maximilian Kolb | Mon, 11/21/2022

As a technical writer working for ATIX, my tasks include creating and maintaining documentation for Foreman at github.com/theforeman/foreman-documentation. Git helps me track versions of content, and to collaborate with the open source community. It's an integral part of storing the results of my work, sharing it, and discussing improvements. My main tools include my browser, OpenSSH to connect to Foreman instances, Vim to edit source files, and Git to version content.

This article focuses on recurring challenges when taking the first steps with Git and contributing to Foreman documentation. This is meant for intermediate Git users.

Prerequisites
  • You have installed and configured Git on your system. You must at least set your user name and email address.

  • You have an account on github.com. GitHub isn't an open source project itself, but it's the site where many open source Git repositories are stored (including Foreman's documentation.)

  • You have forked the foreman-documentation repository into your own account or organization (for example, github.com/My_Name/foreman-documentation). For more information, see A step-by-step guide to Git by Kedar Vijay Kulkarni.

  • You have added your SSH public key to GitHub. This is necessary to push your changes to GitHub. For more information, see A short introduction to GitHub by Nicole C. Baratta.

Contributing to Foreman documentation

Foreman is an open source project and thrives on community contributions. The project welcomes everyone and there are only a few requirements to make meaningful contributions. Requirements and conventions are documented in the README.md and CONTRIBUTING.md files.

Here are some of the most frequent tasks when working on Foreman documentation.

I want to start working on Foreman documentation
  1. Clone the repository from github.com:

    $ git clone git@github.com:theforeman/foreman-documentation.git
    $ cd foreman-documentation/
  2. Rename the remote:

    $ git remote rename origin upstream
  3. Optional: Ensure that your local master branch is tracking the master branch from the foreman-documentation repository from the theforeman organization:

    $ git status

    This automatically starts you on the latest commit of the default branch, which in this case is master.

  4. If you do not have a fork of the repository in your own account or organization already, create one.

    Go to github.com/theforeman/foreman-documentation and click Fork.

  5. Add your fork to your repository.

    $ git remote add github git@github.com:My_Name/foreman-documentation.git

    Your local repository now has two remotes: upstream and github.

I want to extend the Foreman documentation

For simple changes such as fixing a spelling mistake, you can create a pull request (PR) directly.

  1. Create a branch named, for example, fix_spelling. The git switch command changes the currently checked out branch, and -c creates the branch:

    $ git switch -c fix_spelling
  2. Make your change.

  3. Add your change and commit:

    $ git add guides/common/modules/abc.adoc
    $ git commit -m "Fix spelling of existing"

    I cannot emphasise the importance of good Git commit messages enough. A commit message tells the project maintainers what you have done, and because it's preserved along with the rest of the codebase, it serves as a historical footnote when someone's looking back through code to determine what's happened over its lifespan.

  4. Optional but recommended: View and verify the diff to the default branch. The default branch for foreman-documentation is called master, but other projects may name theirs differently (for example, main, dev, or devel.)

    $ git diff master
  5. Push your branch to Github. This publishes your change to your copy of the codebase.

    $ git push --set-upstream github fix_spelling
  6. Click on the link provided by Git in your terminal to create a pull request (PR).

    remote: Create a pull request for 'fix_spelling' on Github by visiting:
    remote:      https://github.com/_My_User_Account_/foreman-documentation/pull/new/fix_spelling
  7. Add an explanation on why the community should accept your change. This isn't necessary for a trivial PR, such as fixing a spelling mistake, but for major changes it's important.

I want to rebase my branch to master
  1. Ensure your local master branch tracks the master branch from github.com/theforeman/foreman-documentation, not foreman-documentation in your own namespace:

    $ git switch master

    This should read Your branch is up to date with 'upstream/master', with upstream being the name of your remote repository pointing to github.com/theforeman/foreman-documentation. You can review your remotes by running git remote -v.

  2. Fetch possible changes from your remote. The git fetch command downloads the tracked branch from your remote, and the --all option updates all branches simultaneously. This is necessary when working with additional branches. The --prune option removes references to branches that no longer exist.

    $ git fetch --all --prune
  3. Pull possible changes from upstream/master into your local master branch. The git pull command copies commits from the branch you're tracking into your current branch. This is used to "update" your local master branch to the latest state of the master branch in your remote (Github, in this case.)

    $ git pull
  4. Rebase your branch to "master".

    $ git switch my_branch
    $ git rebase -i master
I have accidentally committed to master
  1. Create a branch to save your work:

    $ git switch -c my_feature
  2. Switch back to the master branch:

    $ git switch master
  3. Drop the last commit on master:

    $ git reset --soft HEAD~1
  4. Switch back to my_feature branch and continue working:

    $ git switch my_feature
I want to reword my commit message
  1. If you only have one commit on your branch, use git commit --amend to change your last commit:

    $ git commit --amend

    This assumes that you don't have any other files added to your staging area (that is, you did not run git add My_File without also committing it.)

  2. Push your "change" to Github, using the --force option because the Git commit message is part of your existing commit, so you're changing the history on your branch.

    $ git push --force
I want to restructure multiple changes on a single branch
  1. Optional but strongly recommended: Fetch changes from Github.

    $ git switch master
    $ git fetch
    $ git pull

    This ensures that you directly incorporate any other changes into your branch in the order they've been merged to master.

  2. To restructure your work, rebase your branch and make changes as necessary. Rebasing to master means changing the parent commit of your first commit on your branch:

    $ git rebase --interactive master

    Replace the first word pick to modify the commit.

    • Use e to make actual changes to your commit. This interrupts your rebase!

    • Use f to combine a commit with its parent.

    • Use d to completely remove the commit from your branch.

    • Move the lines to change the order of your changes.

      After successfully rebasing, your own commits are on top of the last commit from master.

I want to copy a commit from another branch
  1. Get the commit ID from a stable branch (for example, a branch named 3.3), using the -n option to limit the number of commits.

    $ git log -n 5 3.3
  2. Replicate changes by cherry-picking commits to your branch. The -x option adds the commit ID to your commit message. This is only recommended when cherry-picking commits from a stable branch.

    $ git switch My_Branch
    $ git cherry-pick -x Commit_ID
More tips

At ATIX, we run a GitLab instance to share code, collaborate, and automate tests and builds internally. With the open source community surrounding the Foreman ecosystem, we rely on Github.

I recommend that you always point the remote named origin in any Git repository to your internal version control system. This prevents leaking information to external services when doing a git push based on pure muscle memory.

Additionally, I recommend using a fixed naming scheme for remotes. I always name the remote pointing to my own GitLab instance origin, the open source project upstream, and my fork on Github github.
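
Set up that way, listing the remotes in one of my working copies looks roughly like this; the GitLab host name below is just a placeholder for your own instance:

$ git remote -v
origin    git@gitlab.example.com:docs/foreman-documentation.git (fetch)
origin    git@gitlab.example.com:docs/foreman-documentation.git (push)
upstream  git@github.com:theforeman/foreman-documentation.git (fetch)
upstream  git@github.com:theforeman/foreman-documentation.git (push)
github    git@github.com:My_Name/foreman-documentation.git (fetch)
github    git@github.com:My_Name/foreman-documentation.git (push)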

For foreman-documentation, the repository has a relatively flat history. When working with a more complex structure, I tend to think of Git repositories in a very visual way with nodes (commits) pointing to nodes on lines (branches) that potentially intertwine. Graphical tools such as gitk or Git Cola can help visualize your Git history. Once you have fully grasped how Git works, you can move on to aliases, if you prefer the command line.

Before a big rebase with a lot of expected merge conflicts, I recommend creating a "backup" branch that you can quickly view diffs against. Note that it's pretty hard to irreversibly delete commits, so play around in your local Git repository before making big changes.

Git for tech writers

Git is a tremendous help for technical writers. Not only can you use Git to version your own content, but you can actively collaborate with others.


Get verified on Mastodon with WordPress

Seth Kenlon | Sun, 11/20/2022

As users migrate away from Twitter, many wonder what the equivalent of the famous blue checkmark is on Mastodon. Ignoring debates about how anyone can be sure of anyone's true identity online, it's easy to verify yourself on Mastodon when you have a WordPress site.

1. Get your verification code

Sign in to your Mastodon account and click the edit profile link under your Mastodon handle.

Image by: Seth Kenlon, CC BY-SA 4.0

In the Edit Profile screen that appears, scroll down to the Verification section. This section contains a special verification code. Click the Copy button to copy your verification code to your clipboard.

Image by: Seth Kenlon, CC BY-SA 4.0

2. Paste your code on your WordPress site

To assure people that the same person running your Mastodon account also runs your website, you must paste your verification code into your WordPress site. You can create a post to serve especially for this purpose, or you can put it into your site's footer, or you can make it a social link.

The easiest method is to create a special post to serve as your verification page. In your WordPress dashboard, click the Posts link in the left panel, and then select Add New.

In the post editor, give your post a title, and then type the word "Mastodon" into the body. You can type more than just the word "Mastodon," of course. You might want to type a nice post about your new social network, inviting people to join you there and follow you. Or you might keep it simple. It's up to you.

Once you've got some text, hover your mouse over the text block in WordPress until a pop-up toolbar appears. Click the three-dot icon ("Options") and select Edit as HTML.

Image by: Seth Kenlon, CC BY-SA 4.0

This exposes the HTML code inside the text block. Don't worry, you don't need to know how to code for this to work. Select the word "Mastodon" in the text block, and then paste your Mastodon verification code in its place.

Click the Update or Publish button in the top right corner of the WordPress editor interface. Your post must be published for Mastodon to detect your link.

3. Add your post URL to Mastodon

Once your verification post is published, visit the page and copy the web address from your web browser's URL bar. For example, the sample post I wrote is located at http://myexamplewebsite.tk/wp/uncategorized/i-joined-mastodon.

In your browser, return to the Edit Profile screen of Mastodon. Locate the Profile Metadata section, and type Website into the Label field and then paste the URL of your verification post in the Content field.

Image by: Seth Kenlon, CC BY-SA 4.0

Click the Save Changes button and return to your profile page to see your newly verified status.

Image by: Seth Kenlon, CC BY-SA 4.0

On some Mastodon servers, it seems that verification may take an hour or so to resolve, but on two of the three I've tested, the green checkmark appeared immediately after saving.

Verifiably cool

Strictly speaking, the word "Mastodon" in the verification code is arbitrary. In this article, I've encouraged using that because it's the default and most obvious word to link back to your Mastodon profile. However, if you're interested in the technical aspect, then what really matters is that a link to your profile page, with the rel="me" attribute, appears on a page of a website you control. The contents of the link don't actually matter.

A green checkmark on Mastodon indicates proof that the same person controlling your Mastodon account also controls your website. Ultimately, that's as good as "identification" gets on the Internet. Blue checkmarks might look and feel official, but identification services online can't ever guarantee that a person at a computer is the same person from one day to the next. Mastodon acknowledges this and offers verification options for every user. With a no-cost WordPress blog and a no-cost Mastodon account, you can demonstrate to your friends that the same person (you) posting on the blog is the same person active on social media.


17 open source technologists share their favorite keyboards

Opensource.com | Sat, 11/19/2022

Keyboards are necessary to work with a computer system whether it's for coding, writing, or moving around items in a spreadsheet. They allow access to a computer's peripherals and are used to get deep into the operating system of any computer. Keyboards come in all shapes and sizes. Some are more comfortable to use than others. We asked our community members to share the best (and the worst) keyboard they'd ever used. Some of the answers might surprise you!

Top 4 favorite keyboards

Keyboards rank right after editors and languages on the official list of things that programmers argue about.

My favorite keyboards:

  • NeXT Extended Keyboard: With the pipe in the right place.
  • Apple Macintosh II Extended Keyboard: Those buckling springs felt great!
  • IBM Model M: Oh what a joyous noise!
  • Tokyo 60 HHKB (Happy Hacking KeyBoard) kit: Just the keyboard I always wanted.

Erik O'Shaughnessy

Two-handed layout

Probably the most obscure keyboard would be the Maltron one-handed keyboard I ended up using for several months while recovering from RSI—it was actually really good to use once you learn where the keys are!
 

Image by: Ruth Cheesley, CC BY-SA 4.0

 
My favorite keyboard of all time is the one I currently have, the Kinesis Advantage2 LF. I have mapped it as close as I could get to the Maltron two-handed layout (which is a bajillion times more efficient, once you have re-mapped your brain to a different layout). Took me about a year to be fully efficient, but I can still use my hands so it was worth the hard work!

Image by: Ruth Cheesley, CC BY-SA 4.0

 
I wrote about becoming a bilingual typist here and did a mini video series charting my progress in learning one-handed typing (while learning how to record videos for work!) which is on YouTube.

Ruth Cheesley

All the feels

I bought a Logitech MX Keys wireless keyboard at the beginning of the pandemic and I just love the feel and responsiveness of the keys when I type. It's by far my favorite keyboard of all time.

Will Kelly

Plug-and-play

Image by: Miriam Goldman, CC BY-SA 4.0

The best keyboard is my current one. It's a Logitech K850 and it pairs with a mouse. It's wireless and isn't a huge drain on batteries. It comes with a nice little pad for the heel of your hand, and if you prop it up with the stand, it ends up being at the perfect height. I don't have much time to spend configuring my peripherals, so having this be plug-and-play is fantastic.

Miriam Goldman

Nostalgic keyboards

Probably my favorite quirky keyboard to use is a DIY 7-key chorder based on this design (great for wearable computing projects where you want a fully functional keyboard that leaves one of your hands free). You can find some images from mine at various stages of prototyping, along with BoM and open (GPL 3.0) schematics and PCB etch here.

I also have a nostalgic fondness for the IBM Model M. You need a particular kind of powered PS/2 to USB converter in order to use one on modern machines.

I've always wanted a Symbolics Lisp "Space-cadet" but they're quite hard to come by. Perhaps someday…

While it's not even remotely cheap, I love my 3" RGB underlit SUZOHAPP trackball. It's like I'm playing Centipede at my workstation!

I wired it up with three RGB backlit arcade buttons (available from the same supplier) so that it operates just like a 3-button mouse in my XWindows session.

Jeremy Stanley

The best keyboard

I have a Vortex Race 3 with Cherry MX Silver switches and it is the best keyboard in the world (in my opinion). My only complaint is that it has an oversized Esc key, so it's hard (basically impossible) to find fun new keycap sets for it. If I ever upgrade, it will be to a similar keyboard with half-height throws and the same switches.

Deb Richardson

Cheap and cheerful

The best cheap, cheerful modern keyboard I've found is the A4TECH KV-300H. It weighs more than most keyboards and gives the closest feel to a laptop. It has a built-in USB hub too!

Leigh Morresi

Curvy keyboards

I always loved my original Microsoft Natural keyboard. The Microsoft hardware division did a great job with that. It was rock solid, and the curved shape meant my fingers and wrists were spared repetitive strain injury. Mine was an original Natural Elite keyboard, with the PS/2 mini-DIN connector and USB adapter. Despite taking great care of this keyboard, and it taking care of me, the keyboard finally died in 2018.

Image by: Jim Hall, CC BY-SA 4.0

I replaced it with a Perixx PERIBOARD-512 keyboard. This is very similar to the Microsoft Natural Elite keyboard, so my fingers didn't have to re-learn a keyboard layout. I bought one in white and another in black, but I use the black one most of the time because of my black desk mat.

Image by: Jim Hall, CC BY-SA 4.0

When I want to feel really retro, I dig out my IBM Model M keyboard. I don't have an original Model M anymore, but I do own a very good reproduction from Unicomp. I bought it in 2010 and it's a tank. I could fight off a zombie horde with that, and later use it to write another article.

Jim Hall

Left-handed

As a left-hander, I think ALL keyboards are the worst. What good is having a numeric keypad on the right-hand side of the keyboard when you're left-handed? Even those keyboards that don't have a keypad still put the arrow keys on the lower right. Yes, some mouses are made hand-neutral and some forward-thinking companies have even made left-handed mouses. It might take some time before a left-handed keyboard is made.

Gary Smith

The best and the worst

The best keyboard: Microsoft Natural Keyboard Elite or a Thinkpad keyboard with trackpoint.

The worst keyboard: the onscreen keyboard on my phone now (soooooo many typos).

John 'Warthog9' Hawley

Keyboard loyalty

I am going to buck the Model M and mechanical keyboard trend. Yes, they are great, and yes I really liked them when I first started using them. 

Like Jim, I got one of the Microsoft Natural keyboards when they came out—and when I needed to replace it, I picked up the Logitech model that the Microsoft one was based on. I've been pretty loyal to Logitech since. I upgraded to the K350 Wave when it came out and it was time to go wireless. This last time, I upgraded to the ERGO K860, and I LOVE IT. 

I'm also a big fan of trackballs when not using a touchpad, and currently use an MX ERGO (ever since they discontinued my beloved M570).

Kevin Sonney

Ergonomics is key

Image by: Kelly Dassing, CC BY-SA 4.0

When it was time to replace my keyboard in 2021, I had very specific requirements in mind. As someone with hypermobile joints and chronic wrist pain, an ergonomic keyboard became the obvious choice. I waded through several options before landing on the Logitech ERGO K860. Its large, padded wrist rest, adjustable height front feet, and chiclet-style keys make for the most comfortable keyboard I've ever used. It took a little while to get accustomed to the angled, separated layout, but now I much prefer it to "standard" keyboards.

In contrast, the worst keyboard I ever used was your average, tall and loud key Logitech keyboard. It just wasn't comfortable, and its responsiveness was unreliable. I'll never go back.

Kelly Dassing

Sentimental keyboard

This is my favorite keyboard for sentimental reasons:

Image by: Seth Morabito, CC BY-SA 2.0

Greg Scott

IBM Model M

The best is simply the IBM Model M, although I won a Das Keyboard recently, and it is pretty good.

As far as the worst, there is a myriad of horrible squishy keyboards out there, and most of them are terrible.

Bob Murphy

Gamer-proof keyboard

I only use thumb-based, wireless trackballs, and all of them are off-brand. I avoid anything else. I am a fan of Logitech (who pioneered the design) in general, but Logitech is overpriced, and a lot of the off-brand designs have lithium batteries that charge with USB-C cable, whereas Logitech still makes me insert a AA battery.

For keyboards, all I use now are wired, brown switch mechanicals (sort of quiet but not really, and tactile switches.) I love the feel and can type for hours and hours with them. I have a lighted (not RGB) version, which I really like, too, and it's very off-brand and cheap. True mechanicals don't have to be expensive if you're not gaming with them. The expensive ones for gaming are built to be thrown across the room with force and survive when you die for the hundredth time on some stupidly tough boss. But I just use them for typing. It is hard to find ones that aren't all rainbow-colored, because most are built for gamers.

Evan "Hippy" Slatis

DIY keyboard

The worst keyboards are most of them. Especially the ones you tend to get as a new employee, the cheapest ones are from your computer's manufacturer. Mushy keys and way too big for the desks the company provides you.

I started to buy (and build) my own keyboards and it was a revelation. As I have small hands and short fingers, I really enjoy ortholinear keyboards. I ended up building a Planck which I still love. I use Brown Cherry switches. I also use blank keycaps for family and friends because I experiment with different layouts and don't have to move keycaps around each time. My Planck is also great for traveling because it fits on top of my laptop.

I then got an Atreus, which is similar to the Planck but slightly curved for your hands. While I really like that one too, I switched to a Kyria which is a split keyboard. That helps me a lot with movement and shoulders as I can have my trackball in between the keyboard instead of in front of or by the side. My Kyria (which I didn't build myself) has Kailh Pro Light Green switches, which are a bit more clicky than Cherry Brown but now I only work from home and can click away without disturbing anyone. And they're really not that loud.

Jimmy Sjölund

My favorite keyboard

I do miss the feel of the old IBM 3270 beam spring keyboards, but the accompanying 80×25 monitor? Not so much. Nor the whole EBCDIC thing. And given that the last time I touched one of those was probably 1983 or so, maybe it wasn't that much better...

Here's my favorite and current keyboard:

Image by: Chris Hermansen, CC BY-SA 4.0

This is the Drop Tokyo60 season 4. Mine has Kailh Box Navy switches which need good force and provide good feedback. I borrowed a switch tester from a friend to figure that out.

What I mostly like about this keyboard is the layout. I've used vi for 20 years on keyboards with this layout, and it does away with that nonsense of putting Shift Lock (who uses Shift Lock anyway?) where the Control key belongs!

Chris Hermansen

Happy hacking

The writers have spoken! I was able to collect a good sample of which keyboards are most user friendly and which ones are currently loathed. I hope you can use this information to find a keyboard suitable to your needs. Remember that many keyboards can be customized to fit your own personal preferences. Happy keyboard hunting!

Whether you are looking for comfort or cool-factor, choosing a keyboard is a deeply personal decision. Here are some of our favorite keyboards.

Image by:

Original photo by Marco Tedaldi. Modified by Rikki Endsley. CC BY-SA 2.0.

Opensource.com community Hardware What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 457 points Austin, Tx-ish

Erik O'Shaughnessy is an opinionated but friendly UNIX system programmer living the good life in Texas. Over the last twenty years (or more!) he has worked for IBM, Sun Microsystems, Oracle, and most recently Intel doing computer system performance related work. He is a mechanical keyboard aficionado, a gamer, a father, a husband, a voracious reader, a student of Okinawan karate, and seriously grouchy in the morning before coffee.

| Follow JnyJny Open Minded Python Developer Contributor Club Author 143 points Ipswich, UK

Ruth has been a keen advocate of Open Source for over 18 years.

As a contributor to the Joomla! and Mautic community, she volunteered on the Joomla! Community Leadership Team for over three years, and currently works as Project Lead for Mautic, the world's first open source marketing automation platform, at Acquia.

Ruth is a keen runner, and lives with a condition called Ehlers-Danlos Syndrome which means that she sometimes needs to use a wheelchair or walking aids.

| Follow RCheesley Open Minded Joomla Linux Community Manager Geek Author Contributor Club 948 points Northern Virginia

Will Kelly is a product marketer and writer. His career has been spent writing bylined articles, white papers, marketing collateral, and technical content about the cloud and DevOps. Opensource.com, TechTarget, InfoQ, and others have published his articles about DevOps and the cloud. He lives and works in the Northern Virginia area. Follow him on Twitter:@willkelly.

| Follow willkelly User Attributes Correspondent Open Source Evangelist People's Choice Award Contributor Club Author Correspondent 202 points Kill Devil Hills, NC, USA

A long-time computer hobbyist and technology generalist, Jeremy Stanley has worked as a Unix and GNU/Linux sysadmin for nearly three decades focusing on information security, Internet services, and data center automation. He’s a root administrator for the OpenDev Collaboratory, a maintainer of the Zuul project, and serves on the OpenStack vulnerability management team. Living on a small island in the Atlantic, in his spare time he writes free software, hacks on open hardware projects and embedded platforms, restores old video game systems, and enjoys articles on math theory and cosmology.

Open Minded People's Choice Award Author 47 points

Leigh is a long time open-source enthusiast, started nerding out with Linux kernel 2.0 and dial-up bulletin boards, can often be found hacking on useful code as well as being a product owner and entrepreneur.

Enjoys creating software that is used by people to solve real problems.

There's always time for interesting and inspiring projects, so feel free to reach out: dgtlmoon@gmail.com

| Connect leighmorresi Open Enthusiast Author 80 points Richland, Washington

Gary started out his professional career as a chemist/materials engineer. His start down the path to the Dark Side of Computing began when he wrote a program to design an optimal extruder screw rather than face thousands of calculations with a slide rule (yes, a slide rule.) Since then, he's done a lot of different things in computing: microprocessor cross assemblers and simulators, disk device drivers, communication device drivers, TCP/IP hacking and multi-threaded printer spoolers. Always a glutton for punishment, he wrote his own sendmail.cf from scratch. Around 1993, Gary started doing computer security when the semiconductor company he was working for was forced to get on the Internet to send/receive Integrated Circuit designs faster and a firewall/Internet gateway was needed. Since then, Gary's been involved in firewalls, intrusion detection and analysis, vulnerability assessments, system and application hardening, and anti-spam filters. Gary really does computer security to support his bicycling habit. He has more bikes than most other people have computers. And they're a lot more expensive. Gary says "Bikes are like computers: both can crash, sometimes with disastrous results to the user."

Open Enthusiast Author 5030 points Minnesota

Jim Hall is an open source software advocate and developer, best known for usability testing in GNOME and as the founder + project coordinator of FreeDOS. At work, Jim is CEO of Hallmentum, an IT executive consulting company that provides hands-on IT Leadership training, workshops, and coaching.

| Follow jimfhall | Connect jimfhall User Attributes Correspondent Open Sourcerer People's Choice Award People's Choice Award 2018 Author Correspondent Contributor Club 81 points Oregon

John works for VMware in the Open Source Program Office on upstream open source projects. In a previous life he's worked on the MinnowBoard open source hardware project, led the system administration team on kernel.org, and built desktop clusters before they were cool. For fun he's built multiple starship bridges and a replica of K-9 from a popular British TV show, done in-flight computer vision processing from UAVs, and designed and built a pile of his own hardware.

He's cooked delicious meals for friends, and is a connoisseur of campy 'bad' movies. He's a Perl programmer who's been maliciously accused of being a Python developer as well.

| Follow warty9 Open Enthusiast Maker Open hardware Python SysAdmin CentOS Community Manager Developer Fedora Geek DevOps Gamer Linux Author 2510 points Pittsboro, NC

Kevin Sonney is a technology professional, media producer, and podcaster.

A Linux Sysadmin and Open Source advocate, Kevin has over 25 years in the IT industry, with over 15 years in Open Source. He currently works as an SRE at elastic.

Kevin hosts the weekly Productivity Alchemy Podcast. He and his wife, author and illustrator Ursula Vernon, co-host the weekly podcast Kevin and Ursula Eat Cheap (NSFW) and routinely attend sci-fi and comic conventions. Kevin also voiced Rev. Mord on The Hidden Almanac.

Kevin and Ursula live in Pittsboro, NC with a variety of dogs, cats, and chickens.

| Follow ksonney User Attributes Correspondent Open Source Sensei Apache DevOps Cloud Gamer Linux SysAdmin Wordpress Android CentOS Creative Commons Developer Fedora Geek Ubuntu Correspondent Contributor Club Author 182 points Minnesota

After surviving multiple layoff rounds at Digital Equipment Corporation, a large computer company in its day, I started Scott Consulting in 1994. A larger firm bought Scott Consulting in 1999, just as the dot com bust devastated the IT Service industry. A glutton for punishment, I went out on my own again in late 1999 and started Infrasupport Corporation, this time with a laser focus on infrastructure and security. I accepted a job offer with Red Hat, Inc. in 2015 as a Senior Technical Account Manager.

I'm also a published author. Jerry Barkley is an IT contractor, not a superhero. But after he uncovers a cyberattack that could lead to millions dead and nobody believes his warnings, if he doesn't act then who will? Real superheroes are ordinary people who step up when called. "Virus Bomb" and "Bullseye Breach" are available everywhere books are sold. More info at https://www.dgregscott.com/books. Enjoy the fiction. Use the education.

My family and I live near St. Paul, Minnesota.

| Follow dgregscott Open Minded Author Contributor Club 201 points New Jersey

Bob Murphy is a Linux systems administrator, long-time desktop GNU/Linux user and computer enthusiast who is passionate about Free Software and helping people better use technology.

Open Minded Author 238 points Cibolo, TX

I work for Red Hat services as a consultant, I specialize in application deployments and CI/CD on OpenShift, and I run my own OSS project, el-CICD(https://github.com/elcicd), which is a complete CICD COTS solution for the OKD/OpenShift Container Platform. I'm a veteran of more than a few startups, and I've been a software developer/architect, mainly in Java, for almost 30 years now.

Open Minded Author Contributor Club 6931 points Vancouver, Canada

Seldom without a computer of some sort since graduating from the University of British Columbia in 1978, I have been a full-time Linux user since 2005, a full-time Solaris and SunOS user from 1986 through 2005, and UNIX System V user before that.

On the technical side of things, I have spent a great deal of my career as a consultant, doing data analysis and visualization; especially spatial data analysis. I have a substantial amount of related programming experience, using C, awk, Java, Python, PostgreSQL, PostGIS and lately Groovy. I'm looking at Julia with great interest. I have also built a few desktop and web-based applications, primarily in Java and lately in Grails with lots of JavaScript on the front end and PostgreSQL as my database of choice.

Aside from that, I spend a considerable amount of time writing proposals, technical reports and - of course - stuff on https://www.opensource.com.

User Attributes Correspondent Open Sourcerer People's Choice Award 100+ Contributions Club Emerging Contributor Award 2016 Author Comment Gardener Correspondent Columnist Contributor Club 1045 points Borås, Sweden

Jimmy Sjölund is a Principal Agile Practitioner at Red Hat, focusing on organisation transformation and team excellence while exploring agile and lean workflows. He is a visualisation enthusiast and an Open Organization Ambassador.

| Follow jimmysjolund Open Source Champion Author Open Organization Ambassador Contributor Club 62 points Ottawa, Ontario, Canada

Miriam is a technical lead on the WordPress team at Kanopi Studios. She is a full-stack developer, leaning more toward the back end. She loves problem-solving, diving deep into plugin development, and mentoring junior developers.

Miriam is also heavily involved in the WordPress community, speaking and organizing WordCamps and local Ottawa meetups.

Open Enthusiast Contributor Club 16 points Ohio | Follow kellydassing | Connect kellydassing Community Member 16 points Community Member Register or Login to post a comment.

My favorite Git tools

Fri, 11/18/2022 - 16:00
My favorite Git tools Dwayne McDaniel Fri, 11/18/2022 - 03:00

As with any other technology or skill, just reading about Git cannot make you proficient at it or make you an "advanced" user. Now it's time to dig into some of the tools in Git that I've found useful, and hopefully, that will help you use Git.

Git reflog

In my previous article, I wrote about Git history as a chain of commits, and that's a very good model for most purposes. However, Git actually remembers everything you do with Git, not just commits. You can see your entire recent history with git reflog.

Image by:

(Dwayne McDaniel, CC BY-SA 4.0)
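In plain text, that history looks something like this (the commit IDs and messages here are only illustrative):

$ git reflog
d6b4a9e (HEAD -> main) HEAD@{0}: pull: Fast-forward
1f0c3b2 HEAD@{1}: commit: Fix typo in README
8a2d7c1 HEAD@{2}: checkout: moving from feature-x to main
5e9f0ab HEAD@{3}: commit: Add feature-x scaffolding

Commits, checkouts, merges, resets, and pulls all get their own entries.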

The log that reflog refers to is found in .git/logs, and it's called HEAD. Opening this file, you can quickly see all the actions taken recently. Inside .git/logs/HEAD, you see rows corresponding to the output of the reflog command.

You can checkout any of the states in a reflog. No matter what you do, Git gives you a way to easily get your files back to a previous state!

To checkout a previous state, use the command:

git checkout HEAD@{<#>}

Replace <#> with the number of steps behind HEAD you want to reference. For instance, if I wanted to check out the state right before I did the last git pull from my example, I would use the command git checkout HEAD@{5}. Doing this puts Git into a detached head state, so I would need to make a new branch from there if I wanted to preserve any changes I wanted to make.

By default, your reflog sticks around for at least 30 days before Git cleans up its history. But it does not throw this info away; it packs it into a more compressed form. You don't need to wait for Git to tidy up, though. You can trigger the cleanup any time with the garbage collection command, covered next.

More on Git What is Git? Git cheat sheet Markdown cheat sheet New Git articles

Git gc (garbage collection)

Even though the size of objects and files in your .git folders are tiny and highly compressed, when there are a lot of items present, then Git can start to slow down. After all, looking up entries from a list of 1,000 refs is more time-consuming than a list of only a handful of entries. From time to time, Git performs an internal garbage collection step, packing up all the objects and files not actively in use. It then stuffs them into a highly compressed pack file.

But you don't need to wait for Git to decide to clean up the unused objects. You can trigger this any time you want with the git gc command.

Image by:

(Dwayne McDaniel, CC BY-SA 4.0)
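In a terminal, you can see the effect by checking the object counts before and after (the numbers vary from repo to repo):

$ git count-objects -v    # note "count", the number of loose objects
$ git gc
$ git count-objects -v    # loose objects drop; "in-pack" goes up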

Next time you've made hundreds of commits locally, or you just notice that Git commits are taking a little longer than usual, try running git gc. It might speed things up.

Git bisect

The git bisect command is a powerful tool that quickly checks out a commit halfway between a known good state and a known bad state and then asks you to identify the commit as either good or bad. Then it repeats until you find the exact commit where the code in question was first introduced.
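A typical session is only a handful of commands (here v1.0 stands in for any commit you know was good):

$ git bisect start
$ git bisect bad                 # the commit you're on now is broken
$ git bisect good v1.0           # this older commit worked
# Git checks out a commit roughly halfway between the two.
# Build and test it, then tell Git what you found:
$ git bisect good                # or: git bisect bad
# Repeat until Git reports the first bad commit, then clean up:
$ git bisect reset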

Git worktree

Imagine a scenario where you are working in a branch, very deep into adding new dependencies, and are not in any way ready to make a commit. Suddenly, your pager goes off. There's a fire happening in production, and you need to drop everything, switch to a hotfix branch, and get that patch built quickly.

It's decision time. Do you cross your fingers, make a commit, and hope you remember where you left off? Do you git stash, which might cause dependency issues and also means you have to remember exactly what you were doing when you stashed? Or do you just check out the other branch in a different folder and work as you usually would?

That last option might sound too good to be true, but that is precisely what Git worktree allows you to do.

Normally with Git, you can only have one branch checked out at a time. This makes sense, now that you know that Git tracks the active branch with HEAD, which can only reference one ref at a time.

Git worktree sidesteps this limitation by making copies of branches outside the repository folder. Git knows that this other folder exists and that any commits made there need to be accounted for in the original repo folder. But the copy of the branch also has its own HEAD file keeping track of where Git is pointing in that other location!

Image by:

(Dwayne McDaniel, CC BY-SA 4.0)

Git always has at least one worktree open, which you can see by running the command:

git worktree list

This command shows you the current folder where .git is located, the most recent commit ID, and the name of the currently checked out branch at the end of the line.

You can add more entries to the worktree list with the command:

git worktree add /path-to-new-folder/<branch-name>

The add directive creates a new folder at the specified path, named the same as the target branch. Git also sends a linked copy of the repo to that folder, with that branch already checked out. To work in that branch, all you need to do is change the directory then proceed to work as usual.
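For example, the hotfix scenario from earlier might play out like this (the paths and branch names are illustrative, and the hotfix branch is assumed to already exist):

$ git worktree add ../hotfix-1.2.1 hotfix-1.2.1
$ cd ../hotfix-1.2.1             # the hotfix branch is checked out here
# fix, commit, and push as usual, then head back:
$ cd -                           # your original branch and files are exactly as you left them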

When you are ready to go back to the original work you were focused on before being interrupted, just change directory back to the original folder. Your work is in the exact same state you left it earlier.

When you are done and want to clean up after yourself, remove any worktree items you want with the command git worktree remove /path-to-new-folder/

A few words of warning for using Git worktree:

  • If a branch is assigned to a worktree, you can not check it out as you normally would. Attempting to checkout a branch that is already checked out throws an error.

  • It's a good idea to remove any unneeded worktree entries as soon as you're finished with them. Errors might occur when running other Git operations while multiple branches are checked out.

  • When working in a code editor like VS Code, changing directory in the terminal doesn't automatically change the open folder in your editor. Remember to open the desired folder through the file menu to ensure you are modifying the correct version of the project files.

So much more to Git

While it might feel like I've covered a lot here, I've actually only scratched the surface of what's possible with Git. It's possible to build entire applications that complement Git, and extending that even further is possible. Fortunately, there are also a lot of resources you can turn to when learning Git.

The one book I would recommend everyone read is absolutely free and you can download it right now. The Pro Git Book covers how Git works in great detail and gives a lot of excellent examples. The one caveat about the book, though, is that it is a little out of date. The free version linked from the git-scm website is from 2014. Still, this book gives you the best foundational knowledge of how Git works and helps make any other Git-related topics more accessible.

There are also cheat sheets and articles out there to help you become a Git expert in no time. But as I said earlier in this article, the only way to learn Git, or any other skill, is to practice, practice, practice.

Git reflog, git gc, git bisect, and git worktree are just a few of the tools I use routinely.

Image by:

Opensource.com

Git What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Get verified on Mastodon with your website

Fri, 11/18/2022 - 16:00
Get verified on Mastodon with your website Seth Kenlon Fri, 11/18/2022 - 03:00

If you're migrating away from Twitter, you might be looking for a way to assure your followers that you are who you say you are. Ignoring debates of how anyone can be sure of anyone's true identity online, it's easy to verify yourself on Mastodon if you already have your own website. This requires a very basic understanding of HTML, so if you don't maintain your own website, then send this article to your web maintainer instead.

1. Get your verification code

Sign in to your Mastodon account and click the edit profile link under your Mastodon handle.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

In the Edit Profile screen that appears, scroll down to the Verification section. This section contains a special verification code. Click the Copy button to copy your verification code to your clipboard.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

More great content Free online course: RHEL technical overview Learn advanced Linux commands Download cheat sheets Find an open source alternative Explore open source resources

2. Paste your code on your website

To assure people that the same person running your Mastodon account also runs your website, you must paste your verification code into your website.

The verification code you got from your Mastodon profile page is just a hyperlink back to your profile page, around the word "Mastodon." It's arguably the most obvious word to link back to a Mastodon profile, but it's entirely arbitrary. What really matters is that a link to your profile page, with the rel="me" attribute, appears on a page of a website you control. The content of the link doesn't actually matter.

You can create a page to serve especially for verification, or you can put it into your site's footer, or into a social media icon.

Here's a simple example of a page exclusively serving as a verification point:

<html>
<head>
<title>My website</title>
</head>
<body>
<a rel="me" href="https://mastodon.example.com/@tux">Mastodon</a>
</body>
</html>

You don't have to create a whole page for your verification, though. You can just paste the verification link into the footer of your site, or somewhere on the index page.

3. Add the verification URL to Mastodon

Once your page is live, copy its web address from your web browser's URL bar. For example, the sample page I created is located at http://myexamplewebsite.tk/index.html.

In your browser, return to the Edit Profile screen of Mastodon. Locate the Profile Metadata section, and type Website into the Label field and then paste the URL of your verification post in the Content field.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Click the Save Changes button and return to your profile page to see your newly verified status.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

On some Mastodon servers, it seems that verification may take an hour or so to resolve, but on two of the three I've tested, the green checkmark appeared immediately after saving.

Verified on Mastodon

A green checkmark on Mastodon indicates proof that the same person controlling your Mastodon account also controls your website. If there are people in your life who know your website and trust that it's yours, then verifying that you've been able to link back to your Mastodon profile is proof that you have control over both platforms. And ultimately, that's as good as identification gets on the Internet.

A blue checkmark might look and feel official, but identification like that is designed to be artificially scarce and yet only cursory. Mastodon acknowledges this, and provides a verification option for every user. With nothing but a website and a Mastodon account, you can self-verify to your followers, demonstrating that the same person (you) posting content online is the same person active on an exciting open source social media.

Three easy steps to a green checkmark on the open source social media platform.

Image by:

Opensource.com

Alternatives What to read next 4 key differences between Twitter and Mastodon This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Git concepts in less than 10 minutes

Thu, 11/17/2022 - 16:00
Git concepts in less than 10 minutes Dwayne McDaniel Thu, 11/17/2022 - 03:00

Git has become the default way to store and transport code in the DevOps generation. Over 93% of developers report that Git is their primary version control system. Almost anyone who has used version control is familiar with git add, git commit, and git push. For most users, that’s all they ever plan to do with Git, and they're comfortable with that. It just works for their needs.

However, from time to time, almost everyone encounters the need to do something a little more advanced, like git rebase or git cherry-pick or work in a detached head state. This is where many devs start to get a bit nervous.

[ Read next How to reset, revert, and return to previous states in Git ]

I'm here to tell you it is ok! Everyone who has or will ever use Git will likely go through those same pangs of panic.

Git is awesome, but it's also intimidating to learn, and it can feel downright confusing sometimes. Git is unlike almost anything else in computer science. You typically learn it piecemeal, specifically in the context of other coding work. Most developers I have met have never formally studied Git beyond perhaps a quick tutorial.

Git is open source, meaning you have the freedom to examine the code and see how it works. It's written mainly in C which, for many devs and people learning computer science, can make it hard to understand. At the same time, the documentation uses terms like massage parameters and commit-ish. It can feel a little baffling. You might feel like Git was written for an advanced Linux professional. That is because it originally was.

A brief Git history

Git started as a specific set of scripts for Linus Torvalds to use to manage patches.

Here's how he introduced what would become Git to the Linux kernel mailing list:

So I'm writing some scripts to try to track things a whole lot faster. Initial indications are that I should be able to do it almost as quickly as I can just apply the patch, but quite frankly, I'm at most half done, and if I hit a snag, maybe that's not true at all. Anyway, the reason I can do it quickly is that my scripts will _not_ be an SCM, they'll be a very specific "log Linus' state" kind of thing. That will make the linear patch merge a lot more time-efficient, and thus possible.

One of the first things I do when I get confused about how Git works is to imagine why and how Linus would apply it to managing patches. It has grown to handle a lot more than that and is indeed a full SCM, but remembering the first use case is helpful in understanding the "why" sometimes.

Git commit

The core conceptual unit of work in Git is the commit. These are snapshots of the files being tracked within your project folder (where the .git folder lives).

Image by:

(Git-scm.com, CC BY-NC-SA 3.0)

It's important to remember that Git stores compressed snapshots of the file system, not diffs. Any time you change a file, a whole new compressed version of that file is made and stored in that commit. It does this by creating a super compressed Binary Large Object (blob) out of the file, and then keeping track of it by generating a checksum made with the SHA hashing algorithm. The permanence of your Git history is one of the reasons it's vital never to store or hardcode sensitive data in your Git projects. Anyone who can clone the repo has full access to all the versions of the files.
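You can watch this happen with the plumbing command git hash-object, which prints the checksum Git uses for a given piece of content:

$ echo 'test content' | git hash-object --stdin
d670460b4b4aece5915caf5c68d12f560a9fe3e4

Because the checksum depends only on the content, the same input produces the same ID on any machine.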

Git is really efficient. If a file does not change between commits, Git does not make a whole new compressed version for storage. Instead, it just refers back to the previous commit. Every commit knows which commit came directly before it, called its parent. You can easily see this chain of commits when you run git log.

Image by:

(Git-scm.com, CC BY-NC-SA 3.0)
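The same chain, in compact text form, is what git log --oneline prints (the commit IDs and messages here are made up):

$ git log --oneline
f30ab21 (HEAD -> main) Add ability to export reports
34ac2d7 Fix stack overflow in parser
98ca9fe Initial commit

Each commit's parent is simply the commit on the line below it.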

You have full control over these chains of commits and can do some pretty cool things with them. You can create, delete, merge, and reorder them as you see fit. You can even effectively travel through time, exploring and even rewriting your commit histories. But it all relies on understanding how Git sees chains of commits, which are generally referred to as branches.

More on Git What is Git? Git cheat sheet Markdown cheat sheet New Git articles

Git branches

Branching lets you work with multiple chains of commits inside a project. Working with multiple branches (especially when you work with rebase) is where many users start to sweat. A common mental model most people have about what branches even are adds to the confusion.

When thinking about branching, most people conjure up images of swim lanes, diverging, and intersecting dots. While those models can be helpful when understanding specific branching strategies and workflows, thinking of Git as a series of numbered dots on a graph can muddy the waters when trying to think about how Git does what it does.

An alternative mental model I find helpful is to think of branches existing in a big spreadsheet. The first column is the parent commit ID, the second is the new commit ID, and then there are columns for metadata, including all the pointers.

Image by:

(Dwayne McDaniel, CC BY-SA 4.0)

Pointers keep track of where you are, and which branch is which. Pointers are convenient human-readable references to specific commits. Any reference that leads back to a specific commit is referred to as commit-ish.

The special pointer used to name a branch always points to the newest commit on the chain. You can arbitrarily assign a pointer to any commit with git tag, which doesn't move. When you git checkout or git switch between branches, you're really telling Git that you want to change the point of reference of Git and move a very special pointer called HEAD in the .git folder.
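You can see these pointers as plain files in a simple repo, assuming the branch is main and the refs haven't been packed yet (the commit ID below is illustrative):

$ git tag checkpoint                  # a tag pointer that stays put
$ cat .git/refs/tags/checkpoint
2c7e5ff6b2a1d9c3e8b4f0a6d1e2c3b4a5f6e7d8
$ cat .git/refs/heads/main            # the branch pointer, which moves with each new commit
2c7e5ff6b2a1d9c3e8b4f0a6d1e2c3b4a5f6e7d8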

The .git folder

One of the best ways to understand what is going on with Git is to dig into the .git folder. If you've never opened this folder before, I highly encourage you to open it up and see what's there. If you're nervous that you might break something, clone an arbitrary open source project to play around with until you feel confident enough to look into your own project repos.

Image by:

(Dwayne McDaniel, CC BY-SA 4.0)

One of the first things you notice is how small the files are.

Things are measured in terms of bytes or kilobytes, at the largest. Git is extremely efficient!

HEAD

Here in the .git folder, you find the special file HEAD. It's a very small file, only a handful of bytes in size. If you open it up, you see it’s only one line long.

$ cat .git/HEAD
ref: refs/heads/main

One of the phrases you will often encounter when reading about Git is "everything is local." From Git's perspective, wherever HEAD is pointing is "here." HEAD is the point of reference for how Git interacts with other branches, other refs, and other copies of itself.

In this example, the ref: is pointing at another pointer, a branch name pointer. Following that path, you can find a file that looks much like the spreadsheet from earlier. Git just takes the latest commit ID from the file and knows that is the commit HEAD is referring to.

If HEAD refers to a specific commit with no other pointer attached, then HEAD is referred to as "detached." Working in a detached HEAD state is completely safe, but it limits what you can do: any new commits you make there aren't on a branch, so they're easy to lose track of. To get out of a detached HEAD state, just checkout another pointer, like the branch name, for example, git checkout main.
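If you've already made commits while detached and want to keep them, a simple way out is to create a branch right where you are (the branch name is up to you):

$ git switch -c rescue-my-work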

Config

Another critical file for helping Git keep track of things is the .git/config file. This is just one of the places Git loads and stores configuration. You're likely already familiar with the --global level of Git config, stored in your home directory in your .gitconfig file. There are actually five places Git loads config, each overriding the previous configuration. The order Git loads config is:

  • --system This loads config specific to your operating system

  • --global Affects you as a user, user.name and user.email stored here

  • --local This sets repo specific info, like remotes and hooksPath

  • --worktree This applies config to a single worktree, an individual checkout of the repo

  • --blob individual compressed files can have their own settings

You can see all config for a repo by running git config --list --show-origin

You can leverage your local config to use multiple Git personas: override the global user.name and user.email in a repo's local config. Leveraging the local config is particularly useful when dividing your time between work projects, personal repos, and any open source contributions.
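For instance, inside a work repo you might set a repo-local identity that takes precedence over your global one (the name and address here are placeholders):

$ cd ~/src/work-project
$ git config user.name "Tux Penguin"
$ git config user.email "tux@work.example.com"
$ git config --show-origin user.email     # shows which config file supplied the value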

Git hooks

Git has a powerful built-in automation platform called Git hooks. Git hooks allow you to execute scripts that run when certain events happen in Git. You can write scripts in any scripting language you prefer that is available to your environment. There are 17 hooks available.

If you look in any repo's .git/hooks folder, you see a collection of .sample files. These are pre-written samples meant to get you started. Some of them contain some odd-looking code. Odd, perhaps, until you remember that these mainly were added to serve the original use case for Linux kernel work and were written by people living in a sea of Perl and Bash scripts. You can make the scripts do anything you want.

Here's an example of a hook I use for personal repos:

#!/usr/bin/env bash
# Print a dad joke in the terminal before each commit
curl https://icanhazdadjoke.com
echo ""

In this example, every time I run git commit but before the commit message is committed to my Git history, Git executes the script. (Thanks to Edward Thomson's git-dad for the inspiration.)

Of course, you can do practical things, too, like checking for hardcoded secrets before making a commit. To read more about Git Hooks and to find many, many example scripts, check out Matthew Hudson's fantastic GitHooks.com site.
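As a rough sketch, a pre-commit hook for that purpose could look something like this (the patterns are deliberately simplistic and purely illustrative; a real secret scanner is far more thorough):

#!/usr/bin/env bash
# .git/hooks/pre-commit: abort the commit if staged changes look like they contain secrets
if git diff --cached | grep -E -i 'aws_secret|api_key|BEGIN (RSA|OPENSSH) PRIVATE KEY'; then
    echo "Possible hardcoded secret in staged changes; commit aborted."
    exit 1
fi

Remember that hook scripts only run if they're executable (chmod +x .git/hooks/pre-commit).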

Advanced Git

Now you have a better understanding of how Git sees the world and works behind the scenes, and you've seen how you can make it do your bidding with scripts. In my next article, I'll address some advanced tools and commands in Git.

Understanding Git is essential to open source development, but it can be intimidating to learn. Let this tutorial be your first step to getting to know Git.

Image by:

Opensource.com

Git What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Linux commands: Drop these old utilities for modern alternatives

Thu, 11/17/2022 - 16:00
Linux commands: Drop these old utilities for modern alternatives Seth Kenlon Thu, 11/17/2022 - 03:00

Linux has a good track record for software support. There are about 60 commands in man section 1 of Unix 1st edition, and the majority still work today. Still, progress stops for no one. Thanks to vast global participation in open source, new commands are frequently developed. Sometimes a new command gains popularity, usually because it offers new features, or the same features but with consistent maintenance. Here are ten old commands that have recently been reinvented.

1. Replace man with cheat or tealdeer

The man page is functional, and it works well for what it does. However, man pages aren't always the most succinct at demonstrating how to use the command you're trying to reference. If you're looking for something a little more to the point, try cheat or tealdeer.
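For example, once either tool is installed, you ask for a command by name and get a short list of copy-ready examples instead of a full manual page:

$ tldr tar      # tealdeer
$ cheat tar     # cheat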

2. Replace ifconfig with ip

The ifconfig command provides information about your network interfaces, whether they're physical or virtual.

$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
  inet 10.1.2.34  netmask 255.255.255.0  broadcast 10.0.1.255
  inet6 fe80::f452:f8e1:7f05:7514  prefixlen 64
  ether d8:5e:d3:2d:d5:68  txqueuelen 1000  (Ethernet)
  [...]

tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1360
  inet 10.2.3.45  netmask 255.255.254.0  destination 10.2.14.15
  inet6 2620:52:4:1109::100e  prefixlen 64  scopeid 0x0<global>
  unspec 00-00-00-00-00-00-00-00-[...]0-00  txqueuelen 500  (UNSPEC)
  [...]

The newer ip command provides similar information:

$ ip -4 address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.1.2.34/24 brd 10.0.1.255 scope global noprefixroute eth0
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
5: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1360 qdisc pfifo_fast state UNKNOWN group default qlen 500
    inet 10.2.3.45/23 brd 10.2.15.255 scope global noprefixroute tun0

3. Replace yum with dnf and apt-get with apt

Package managers tend to be slow to change, and when they do they often work hard to maintain backward compatibility. Both the yum command and the apt-get command have had improvements lately. The changes are usually aliased or designed to work in both their old and new syntax:

$ sudo yum install foo
$ sudo dnf install foo

$ sudo apt-get install foo
$ sudo apt install foo

4. Replace repoquery with dnf

Before there was dnf there were a variety of utilities for yum to help users get reports on their packaging system configuration. Most of those extra functions got included by default with dnf. For instance, repoquery is a subcommand of dnf, and it provides a list of all installed packages:

$ sudo dnf repoquery

5. Replace pip with pip

The pip command is a package manager for Python. It hasn't been replaced, but the preferred syntax has been updated. The old command:

$ pip install yamllint

The new syntax:

$ python3 -m pip install yamllint

6. Replace ls with exa

The ls command hasn't been replaced.

Rather, it hasn't been replaced again.

The ls command was originally its own binary application, and it's still available as one. Over time, though, most shells and distributions layered their own defaults on top of it, typically an alias that turns on color and other options, which overrides the plain ls behavior.

Recently, the exa command has been developed as, depending on your preferences, a better ls. Read about it in Sudeshna Sur's exa command article, and then try it for yourself.

More Linux resources Linux commands cheat sheet Advanced Linux commands cheat sheet Free online course: RHEL technical overview Linux networking cheat sheet SELinux cheat sheet Linux common commands cheat sheet What are Linux containers? Our latest Linux articles

7. Replace du with dust or ncdu

There's nothing wrong with du, which reports on how much disk space is used on your hard drives. It does its job well, but to be fair, it's pretty minimal.

If you're looking for a little variety, try the ncdu command or the dust command.

8. Replace cat with bat

The cat command, aside from being overused by the best of us, is a simple and direct command. It reads the contents of any number of files and outputs them to standard output.

Its output is pretty basic, so if you're looking for something with syntax highlighting and flexible output options, try the bat command instead.

Does bat also replace the tac command? No, don't worry, for now at least tac is safe in its position as the command that outputs a file in reverse. (Unless, that is, you count sed.)

9. Replace netstat with ss

The netstat command has largely been replaced by the ss command, although of all the commands on this list it's possibly the most hotly debated. The ss command provides much of the same functionality, but as Jose Vicente Nunez points out in his six deprecated commands article, there are gaps and differences in functionality. Before switching wholesale to ss, try it and compare it with how you use netstat now.
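As a quick starting point for that comparison, the listening-socket overview uses nearly identical flags in both tools, so you can run them side by side (output omitted here):

$ netstat -tulnp
$ ss -tulnp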

10. Replace find with fd

I use find to locate files, as an input source for GNU Parallel, and more. I'm pretty familiar with it, but I have to admit that its syntax is a little clunky. The fd command seeks to improve upon that. For instance, suppose you're looking for a file called example, but you can't remember what file extension you used. With find, the syntax might look something like this:

$ find . -name "*example*"
/home/tux/example.adoc
/home/tux/example.sh

With fd, the syntax is:

$ fd example
/home/tux/example.adoc
/home/tux/example.sh

And suppose you want to use the grep command to search through the results for the phrase "zombie apocalypse". Using find:

$ find . -name "*example*" -exec grep "zombie apocalypse" {} \;
zombie apocalypse

Using fd instead:

$ fd txt -x grep zombie
zombie apocalypse

Read more about it in Sudeshna Sur's fd article, and then try it for yourself.

For even more updates to classic commands, download our cheat sheet below.

These traditional Linux utilities have been revitalized with modern replacements.

Image by:

Opensource.com

Linux Download now: Cheat sheet: Old Linux commands and their modern replacements This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How open source powers innovation

Wed, 11/16/2022 - 16:00
How open source powers innovation Gordon Haff Wed, 11/16/2022 - 03:00

Where do people come together to make cutting-edge invention and innovation happen?

The corporate lab

One possible answer is the corporate research lab. More long-term focused than most company product development efforts, corporate labs have a long history, going back to Thomas Edison's Menlo Park laboratory in New Jersey. Perhaps most famous is Bell Labs' invention of the transistor—although software folks may associate it more with Unix and the C programming language.

But corporate laboratories have tended to be more associated with dominant firms that could afford to let large staffs work on very forward-looking and speculative research projects. After all, Bell Labs was born of the AT&T telephone monopoly. Corporate labs also aren't known for playing well with their counterparts elsewhere in industry. Even if their focus is long-term, they're looking to profit from their IP eventually, which also means that their research is often rooted in technologies commercially relevant to their business.

A long-term focus is also hard to maintain if a company becomes less dominant or less profitable. It's a common pattern that, over time, research and development in these labs start to look like an extension of the more near-term focused product development process. Historically, the focus of corporate labs has been to develop working artifacts and benchmark themselves against others in the industry by the size of their patent portfolio—although recently there has been more publishing of results than in years past.

The academy

Another engine of innovation is the modern research university. In the US, at least, the university as a research institution primarily emerged in the late 19th century, although some such schools had colonial-era roots, and the university research model truly accelerated after World War II.

Academia is both collaborative and siloed: collaborative in that professors will often collaborate with colleagues around the globe, siloed in that even colleagues at the same institution may not collaborate much if they're not in the same specialty. Although IP may not be protected as vigorously in a university setting as a corporate one, it can still be a consideration. The most prominent research universities make large sums of money from IP licensing.

The primary output of the academy is journal papers. This focus on publication, sometimes favoring quantity over quality, comes with a famous phrase attached: publish or perish. Furthermore, while the content of papers is far more about novel results than commercial potential, that has a flip side: research can end up being quite divorced from real-world concerns and use cases. Among other consequences, this is often not ideal for the students working on that research if they move on to industry after graduation.

Open source software

What of open source software? Certainly, major projects are highly collaborative. Open source software also supports the kind of knowledge diffusion that, throughout history, has enabled the spread of at least incremental advances in everything from viticulture to blast furnace design in 19th-century England. That said, open source software, historically, had a reputation primarily for being good enough and cheaper than proprietary software. That's changed significantly, especially in areas like working with large volumes of data and the whole cloud-native ecosystem. This development probably represents how collaboration has trumped a tendency towards incrementalism in many cases. IP concerns are primarily handled in open source software—occasional patent and license incompatibility issues notwithstanding.

The open source model has been less successful outside the software space. There are exceptions. There have been some wins in data, such as open government datasets and projects like OpenStreetMap—although data associated with many commercial machine learning projects, for example, is a closely guarded secret. The open instruction set architecture specification, RISC-V, is another developing success story. Taking a different approach from earlier open hardware projects, RISC-V seems to be succeeding where past projects did not.

Open source software is most focused on shipping code, of course. However, associated artifacts such as documentation and validated patterns for deploying code through GitOps processes are increasingly recognized as important.

The question

This raises an important question: How do you take what is good about these patterns for creating innovation? Specifically, how do you apply open source principles and practices as appropriate? That's what we've sought to accomplish with Red Hat Research.

Red Hat Research

Work towards what came to be Red Hat Research began in Red Hat's Brno office in the Czech Republic in the early 2010s. In 2018, the research program added a major academic collaboration with Boston University: the Red Hat Collaboratory. The goal was to advance research in emerging technologies in areas of joint interest, such as operating systems and hybrid cloud. The scope of projects that Red Hat and its academic partners collaborate on has since expanded considerably, although infrastructure remains a major focus.

The Collaboratory sponsors research projects led by collaborative teams of BU faculty and Red Hat engineers. It also supports fellowships and internship programs for students and organizes joint talks and workshops.

In addition to activities like those associated with the Collaboratory, Red Hat Research now publishes a quarterly magazine (sign up for your free print or PDF subscription!), runs Red Hat Research Days events, and has regional Research Interest Groups (RIGs) open to a wide range of participants. Red Hat engineers also teach classes and work with students and faculty to produce prototypes, demos, and jointly authored research papers as well as code.

More Linux resources Linux commands cheat sheet Advanced Linux commands cheat sheet Free online course: RHEL technical overview Linux networking cheat sheet SELinux cheat sheet Linux common commands cheat sheet What are Linux containers? Our latest Linux articles

What sorts of open source research projects?

Red Hat research participates in research projects with universities around the world. These are just a few recent examples from North American partnerships.

  • Machine learning for cloud ops: Continuous Integration/Continuous Development (CI/CD) environments move at a breakneck pace using a wide variety of components. Relying on human experts to manage these processes is unreliable, costly, and often not scalable. The AI for Cloud Ops project, housed at the BU Collaboratory, aims to provide AI-driven analytics and heavily automated "ops" functionality to improve performance, resilience, and security in the cloud without incurring high operation costs. The project's principal investigator, BU professor Ayse Coskun, was interviewed about the project for a recent Red Hat Research Quarterly.
     
  • Rust memory safety: Rust is a relatively new language that aims to be suitable for low-level programming tasks while dealing with the significant lack of memory safety features that a language like C suffers from. The problem: Rust has an "unsafe" keyword that suspends some of the compiler's memory checks within a specified code block. There are often good reasons to use this keyword, but it's now up to the developer to ensure their code is memory safe, which sometimes does not work out so well. Researchers at Columbia University and Red Hat engineers are exploring methods for the automated detection of vulnerabilities in Rust, which can then be used to help automate the development, testing, and debugging of real-world software.
     
  • Linux-based unikernel: A unikernel is a single bootable image consisting of user code linked with additional components that provide kernel-level functionality, such as opening files. The resulting program can boot and run on its own as a single process, in a single address space, at an elevated privilege level without a conventional operating system. This fusion of application and kernel components is very lightweight and can have performance and security advantages. A considerable team of BU and Red Hat engineers has been working on adding unikernel capabilities into the same source code tree as regular Linux.
     
  • Image provenance analysis: It's increasingly easy to create composite or otherwise altered images to spread malicious, untrue information intended to influence behavior. A research collaboration among the University of Notre Dame, Loyola University, and Red Hat is using graph clustering and other techniques to develop scalable mechanisms for detecting these images. Among the project's goals is to see whether there's also the potential to look at metadata or other sources of information. The upstream pyIFD project has come out of this research.
Only the beginning

The above is just a small sample of the many innovative research projects Red Hat Research is involved with. I encourage you to head over to the Red Hat Research projects page to see all the other exciting work going on.

This article is adapted from a post on the Red Hat Research blog and is republished with permission.

If you're looking for the next big thing in computing technology, start the search in open source communities.

Education What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How to address challenges with community metrics

Wed, 11/16/2022 - 16:00
How to address challenges with community metrics Georg Link Wed, 11/16/2022 - 03:00

The previous two articles in this series looked at open source community health and the metrics used to understand it. They showed examples of how open source communities have measured their health through metrics. This final article brings those ideas together, discussing the challenges of implementing community health metrics for your own community.

Organizational challenges

First, you must decide which metrics you want to examine. This requires understanding what questions you have about reaching your goals as a community. The metrics relevant to you are those that can answer those questions. Otherwise, you risk being overwhelmed by the amount of data available.

Second, you need to anticipate how you want to react to the metrics. This is about making decisions based on what your data shows you. For example, this includes managing engagement with other community members, as discussed in previous articles.

Third, you must differentiate between good and bad results in your metrics. A common pitfall is to compare your community to other communities, but the truth is that every community works and behaves differently. You can't necessarily even compare metrics within the same project. For example, you may be unable to compare the number of commits in repositories within the same project because one may be squashing commits while the other might have hundreds of micro commits. You can establish a baseline of where you are and have been and then see whether you've improved over time.

Privacy

The final organizational challenge I want to discuss is Personally Identifiable Information (PII) concerns. One of open source's core values and strengths is the transparency of how contributors work. This means everyone has information about who's engaged, including their name, email address, and possibly other information. There are ethical considerations about using that data.

In recent years, regulations like the European General Data Protection Regulation (GDPR) have defined legal requirements for what you can and cannot do with PII data. The key question is whether you need to ask everyone's permission to process their data. This is an opt-in strategy. On the other hand, you might choose to use the data and provide an opt-out process.

This distinction is important. For instance, suppose you're providing metrics and dashboards as a service to your community. In an effort to improve the community, you might make the case that the (already publicly available) information has greater value for the community once it's processed. Either way, make it clear what data you use and how you use it.

More on security The defensive coding guide 10 layers of Linux container security SELinux coloring book More security articles

Technical challenges

Where is your community data being collected? To answer this, consider all the places and platforms your community is engaging in. This includes the software repository, whether it's GitLab, GitHub, Bitbucket, Codeberg, or just a mailing list and a Git server. It may also include issue trackers, a change request workflow system like Gerrit, or a wiki.

But don't stop at the software development interactions. Where else does the community exist? These could be forums, mailing lists, instant messaging channels, question-and-answer sites, or meetups. There's a lot of activity in open source communities that doesn't strictly involve software development work but that you want to recognize in your metrics. These non-coding activities may be hard to track automatically, but you should pay special attention to them or risk ignoring important community members.

With all of these considerations addressed, it's time to take action.

1. Retrieve the data

Once you've identified the data sources, you must get the data and make it useful. Collecting raw data is almost always the easiest step. You have logs and APIs for that. Once set up, the (hopefully occasional) main challenge is when APIs and log formats change.

2. Data enrichment

Once you have the data, you probably need to enrich it.

First, you must unify the data. This step includes converting data into a standard format, which is no small feat. Just think of all the different ways to express a simple date. The order of the year, month, and day varies between regions; dates may use dots, slashes, or other symbols, or they can be expressed in the Unix epoch. And that's just a timestamp!
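Even that one timestamp example typically ends up as a small conversion step in your pipeline. With GNU date, for instance, you might normalize whatever format a source hands you into ISO 8601 (the input string is just an example):

$ date -u -d '11/23/2022 16:00' '+%Y-%m-%dT%H:%M:%SZ'
2022-11-23T16:00:00Z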

Whatever your raw data format is, make it consistent for analysis. You also want to determine the level of detail. For example, when you look at a Git log, you may only be interested in when a commit was made and by whom, which is high-level information. Then again, maybe you also want to know what files were touched or how many lines were added and removed. That's a detailed view.

You may also want to track metadata about different contributions. This may involve adding contextual information on how the data was collected or the circumstances under which it was created. For example, you could tag contributions made during the Hacktoberfest event.

Finally, standardize the data into a format suitable for analysis and visualization.

When you care about who is active in your community (and possibly what organizations they work for), you must pay special attention to identity. This can be a challenge because contributors may use different usernames and email addresses across the various platforms. You need a mechanism to track an individual by several online identifiers, such as an issue tracker, mailing list, and chat.

You can also pre-process data and calculate metrics during the data enrichment phase. For example, the original raw data may have a timestamp of when an issue was opened and closed, but you really want to know the number of days the issue has been open. You may also have categorization criteria for contributions, such as identifying which contribution came from a core contributor, who's been doing a lot in a project, how many "fly by" contributors show up and then leave, and so on. Doing these calculations during the enrichment phase makes it easier to visualize and analyze the data and requires less overhead at later stages.

3. Make data useful

Now that your data is ready, you must decide how to make it useful. This involves figuring out who the user of the information is and what they want to do with it. This helps determine how to present and visualize the data. One thing to remember is that the data may be interesting but not impactful by itself. The best way to use the data is to make it part of a story about your community.

You can use the data in two ways to tell your community story:

  • Have a story in mind, and then verify that the data supports how you perceive the community. You can use the data as evidence to corroborate the story. Of course, you should look for evidence that your story is incorrect and try to refute it, similar to how you make a scientific hypothesis.
  • Use data to find anomalies and interesting developments you wouldn't have otherwise observed. The results can help you construct a data-driven story about the community by providing a new perspective that perhaps has outgrown casual observation.
Solve problems with open source

Before you address the technical challenges, I want to give you the good news that you're in open source technology, and others have already solved many of the challenges you're facing. There are several open source solutions available to you:

  • CHAOSS GrimoireLab: The industry standard and enterprise-ready solution for community health analytics.
  • CHAOSS Augur: A research project with a well-defined data model and bleeding-edge functionality for community health analytics.
  • Apache Kibble: The Apache Software Foundation's solution for community health analytics.
  • CNCF Dev Analytics: CNCF's GitHub statistics for community health analytics.

To overcome organizational challenges, rely on the CHAOSS Project, a community of practice around community health.

The important thing to remember is that you and your community aren't alone. You're a part of a larger community that's constantly growing.

I've covered a lot in the past three articles. Here's what I hope you take away:

  • Use metrics to identify where your community needs help.
  • Track whether specific actions lead to changes.
  • Track metrics early, and establish a baseline.
  • Gather the easy metrics first, and get more sophisticated later.
  • Present metrics in context. Tell a story about your community.
  • Be transparent with your community about metrics. Provide a public dashboard and publish reports.

Consider this advice for addressing the organizational and technical challenges of implementing community health metrics for your own community.


Emilio has 5+ years of experience in business and marketing across different industries and countries. His academic background includes a Bachelors degree in Business from Americana University and a Masters degree in Marketing and Sales from the EAE Business School.

Emilio is currently the Marketing Specialist at Bitergia in Madrid, Spain. He creates content and writes about open source community, metrics, and analytics.

Outside of work, his hobbies include traveling, playing video games, and boxing.

LinkedIn profile: https://www.linkedin.com/in/emilio-galeano-gryciuk/


My favorite tricks for navigating the Linux terminal faster

Tue, 11/15/2022 - 16:00
My favorite tricks for navigating the Linux terminal faster Anamika Tue, 11/15/2022 - 03:00

One of the advantages of working in a terminal is that it's faster than most other interfaces. Thanks to the GNU Readline library and the built-in syntax of shells like Bash and Zsh, there are several ways to make your interactions with the command line even faster. Here are five ways to make the most of your time in the terminal.

1. Navigate without the arrow keys

While executing commands on the command line, sometimes you miss a part at the beginning or forget to add certain flags or arguments toward the end. It's common for users to use the Left and Right arrow keys on the keyboard to move through a command to make edits.

There's a better way to get around the command line. You can move the cursor to the beginning of the line with CTRL+A. Similarly, use CTRL+E to move the cursor to the end of the line. Alt+F moves one word forward, and Alt+B moves one word back.

Shortcuts:

  • Instead of Left arrow, left, left, left, use CTRL+A to go to the start of the line or Alt+B to move back one word.
  • Instead of Right arrow, right, right, right, use CTRL+E to move to the end of the line, or Alt+F to move forward a word.

2. Don't use the backspace or delete keys

It's not uncommon to misspell commands. You might be used to using the Backspace key on the keyboard to delete characters in the backward direction and the Delete button to delete them in the forward direction. You can also do this task more efficiently and easily with some helpful keyboard shortcuts.

Instead of deleting commands character by character, you can delete everything from the current cursor position to the beginning of the line or the end.

Use CTRL+U to erase everything from the current cursor position to the beginning of the line. Similarly, CTRL+K erases everything from the current cursor position to the end of the line.

Shortcuts:

  • Instead of Backspace, use CTRL+U.
  • Instead of Delete, use CTRL+K.

3. Execute multiple commands in a single line

Sometimes it's convenient to execute multiple commands in one go, letting a series of commands run while you step away from your computer or turn your attention to something else.

For example, I love contributing to open source, which means working with Git repositories. I find myself running these three commands frequently:

$ git add .
$ git commit -m "message"
$ git push origin main

Instead of running these commands in three different lines, I use a semi-colon (;) to concatenate them onto a single line and then execute them in sequence.

Shortcuts:

  • Instead of:
    $ git add .
    $ git commit -m "message"
    $ git push origin main

    Use:

    $ git add .;git commit -m "message";git push origin main
  • Use the ; symbol to concatenate and execute any number of commands in a single line. To stop the sequence of commands when one fails, use && instead:
    $ git add . && git commit -m "message" && git push origin main
4. Alias frequently used commands

You probably run some commands often. Sometimes, these may be lengthy commands or a combination of different commands with the same arguments.

To save myself from retyping these types of commands, I create an alias for the commands I use most frequently. For example, I often contribute to projects stored in a Git repository. Since I use the git push origin main command numerous times daily, I created an alias for it.

To create an alias, open your .bashrc file in your favorite editor and add an alias:

alias gpom="git push origin main"

Try creating an alias for anything you run regularly.

Note: The .bashrc file is for users using the Bash shell. If your system runs a different shell, you probably need to adjust the configuration file you use and possibly the syntax of the alias command. You can check the name of the default shell in your system with the echo $SHELL command.

After creating the alias, reload your configuration:

$ . ~/.bashrc

And then try your new command:

$ gpom

Shortcut:

Instead of typing the original command, such as:

$ git push origin main

Create an alias with the alias declaration in .bashrc or your shell's configuration file.

5. Search and run a previous command without using the arrow keys

Most terminal users tend to reuse previously executed commands. You might have learned to use the Up arrow key on your keyboard to navigate your shell's history. But when the command you want to reuse is many commands back, you must press the Up arrow repeatedly until you find the command you are looking for.

Typically the situation goes like this: Up arrow, up, up, up. Oh, I found it! Enter.

There is an easier way: You can review your entire shell history in one step using the history command.

When you use the history command, the list of commands appears with a number beside each. These numbers are known as the history-number of the command. You can type !{history-number} on your terminal to run the command of the corresponding number.
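For example (the history numbers below are only an illustration; yours will differ):

$ history | grep push
 1987  git push origin main
 2012  history | grep push
$ !1987
git push origin main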

Shortcuts:

  • Instead of Up arrow, up, up, up, Enter, type history, and then look for the history-number of the command you want to run:
    $ !{history-number}
  • You can also perform this task a different way: Instead of: Up arrow, up, up, up, Enter, use CTRL+R and type the first few letters of the command you want to repeat.
Command-line shortcuts

Shortcut keys provide an easier and quicker method of navigating and executing commands in the shell. Knowing the little tips and tricks can make your day less hectic and speed up your work on the command line.

This article originally appeared on Red Hat Enable Sysadmin and has been republished with the author's permission.

Image by: iradaturrahmat via Pixabay, CC0

What do you do with community metrics?

Tue, 11/15/2022 - 16:00
What do you do with community metrics? Georg Link Tue, 11/15/2022 - 03:00

In my previous article, I provided an overview of possible community health metrics. I look at what you can do with those metrics in this article. You'll see several examples from different communities, some of which you may be familiar with.

Contribution metric

I'll start with the "new contributors and contributions" metric, which measures developers joining and leaving a community. I can measure this by seeing which developers made a commit during a specific period. Someone who shows up for the first time joined. Someone who hasn't contributed for a while has probably left.
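If all you have is a Git repository, you can roughly approximate this metric by comparing the commit authors seen in a period against everyone seen before it. This is a minimal sketch (the dates are placeholders, and matching people by email address glosses over the fact that one person may use several addresses):

$ git log --until=2020-01-01 --format='%ae' | sort -u > before.txt
$ git log --since=2020-01-01 --until=2020-04-01 --format='%ae' | sort -u > q1.txt
$ comm -13 before.txt q1.txt    # addresses seen in Q1 2020 but never before: new contributors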

It is natural for developers to leave a project. Maybe they change jobs, have a change in priorities, or have personal reasons for reducing their open source engagement. It is important for the health of an open source project to attract new developers to continue the work.

Image by: (Georg Link and Emilio Galeano Gryciuk, CC BY-SA 4.0)

The first example is taken from a 2020 community report looking at the Mautic community over the previous five years.

When considering the Mautic product repository, you can see that there has been a steady rate of new developers contributing to Mautic this quarter. New developers are defined here as contributors whose first merged commit falls in the given period.

Image by: (Georg Link and Emilio Galeano Gryciuk, CC BY-SA 4.0)

I added lines to show the community attracting new contributors at various rates over different periods. This reflects the different stages of the community over these five years and the strategic decisions made.

A single metric by itself, however, is not fully descriptive of community health. The Mautic Community Report continued to also look at the overall contributions made.

Image by: (Georg Link and Emilio Galeano Gryciuk, CC BY-SA 4.0)

The overall contributions were elevated during the period of increased attraction of new contributors, shown here in the dashed circle. After a downward spiral, the community became more active towards the end of this analysis period.

Draw conclusions

Q1 2020 was a busy one in the Mautic community. Many positive steps were taken to establish a solid foundation for growth. Teams became more proactive and processes were established that helped the community function more effectively. You see a significant increase in engagement and new contributors as a result.

Organizational diversity metric

Now, look at the metric "Organizational Diversity." Similar to how a project is healthy when many different contributors are working on it, it is good to have several organizations involved, too.

To make this case, think of a community where a single organization employs all contributors. If that organization decides to reduce its project efforts and reassign the contributors, the project would be in jeopardy. This is why communities are interested in the organizational diversity of their contributors.

Image by: (Georg Link and Emilio Galeano Gryciuk, CC BY-SA 4.0)

One entity that regularly reports on its organizational diversity is the Drupal community. In fact, the Drupal community has a sophisticated credit system for tracking contributions and associating them with organizations. Contributors can declare whether they have contributed as a volunteer, part of their employment, or in client work for a specific customer.

In the 12 months between July 1, 2020, and June 30, 2021, Drupal.org's credit system recorded contributions from 7,420 different individuals and 1,186 different organizations. It saw a 10% decline in individual contributors but only a 2% decrease in organizational contributors.

This period coincided with the COVID-19 pandemic. While the economic situation wasn't good for many, the fact that most organizations continued to support Drupal suggested they would increase their contributions again once the economy recovered. This data was a sign of community health.

Who is making contributions?

Image by: (Georg Link and Emilio Galeano Gryciuk, CC BY-SA 4.0)

Contributions by organizations can also be viewed from another angle. In this graph from the Kata Containers community, I excluded the two founding member organizations. Their declared goal in open sourcing Kata Containers was to build a healthy and vibrant community supported by many organizations. The graph shows the number of commits made by non-founding members, and you can see a steady increase over five years. This demonstrates the success of engaging more companies to make contributions. As more organizations get involved, the graph also becomes more colorful.

Contributor experience metric

So far, I've discussed metrics that show who is involved in an open source project and how engaged they are. The level of engagement is important because a community would be dead without it.

Another area of community health is the experience that community members have when contributing. Imagine trying to contribute to a community, opening an issue, and no one responds—how would you feel? Would you go on to create a change request? What if you already made a change request, and no one commented on it or reviewed it? Wouldn't you prefer to at least hear something, even if it was, "Thank you, but this is out of scope, and we won't merge it"?

Consider change request metrics: Change requests are intended to be reviewed by other developers, who may suggest improvements. In the CHAOSS project, members decided to use the vendor-neutral term "change request" because GitHub, GitLab, and Gerrit each call it something different:

  • Pull Request (GitHub)
  • Merge Request (GitLab)
  • Changeset (Gerrit)
Image by: (Georg Link and Emilio Galeano Gryciuk, CC BY-SA 4.0)

Image by: (Georg Link and Emilio Galeano Gryciuk, CC BY-SA 4.0)

For example, the StarlingX community has a 3.97-day median time to merge (another name for change request duration). On its own, that metric sounds good: when you contribute, your work is reviewed and merged within about four days.

I want to put this metric into more context. Overall contributions dropped off during the pandemic. However, when you look at the review efficiency index, you see that responsiveness remained good. The contributor experience stayed positive, albeit at a lower activity level during the pandemic, and people responded to each other well, even in a stressful situation.

The answer is sometimes the question

Measurable data causes some people to fool themselves into thinking it's a puzzle that needs to fit together perfectly into a complete picture. But often, reviewing the data, questioning the results of community programs, inventing new solutions to problems you detect, and asking questions that drive you toward a healthier community is exactly what you should be doing with your data.

The data about your community will never stop changing, and that's to be expected. It is important to gather, look at, and take action on data when you identify weaknesses, biases, or things you've neglected.

You can build a healthier community, and a healthier community is empowered to make better software.

Image by: Mapbox Uncharted ERG, CC-BY 3.0 US

5 metrics to track in your open source community

Mon, 11/14/2022 - 16:00
5 metrics to track in your open source community Georg Link Mon, 11/14/2022 - 03:00

The core of open source is about using, sharing, and collaborating in software creation. With its roots in the free software movement and in ensuring the rights of software users, open source has evolved from being solely the work of volunteers and hobbyists to also including the enterprise.

Collaborative software development has taken on a new dimension in the last 5 to 10 years. Today open source makes up 58% of software in the enterprise. In fact, 63% of companies in a 2021 survey indicated wanting to increase their use and engagement with open source.

Open source is everywhere and forms the digital infrastructure we rely on. The US issued a directive mandating more software supply chain security. The European Union is also working on similar legislation and guidelines. To address this challenge, you must understand how open source software is built. The process typically involves an open source project.

Selecting the right metric for your open source project

This article analyzes open source projects built by a community. While there are open source projects with only one maintainer or fully controlled by a company, those are excluded to focus on projects with a community.

The specific focus is on the challenges you may face and how to overcome them.

Community health check

Assessing the health of an open source project and making decisions based on that assessment has been a huge challenge. Our company, Bitergia, has been working on this issue for more than 15 years.

The authors are maintainers of the open source GrimoireLab metrics tools, and we're official metrics partners of foundations like OpenInfra and NumFOCUS. As interest in community health grew, we co-founded the CHAOSS project in 2017 under the Linux Foundation, bringing together industry, academia, and open source.

Today, the CHAOSS community has defined more than 70 metrics and maintains software to get the insights you need. This article is a deep dive into these metrics to understand how they are measured.

CHAOSS metrics are divided into five working groups, each with focus areas.

1. Common metrics
  • Goal: Understand what contributions organizations and people are making.
  • Focus areas: Contributions, time, people, place, and more.
  • One example metric: Type of contributions: Measure the types of contributions made.
2. Value metrics
  • Goal: Identify the degree to which a project is valuable to researchers and academic institutions.
  • Focus areas: Academic value, communal value, individual value, and organizational value.
  • Example metric: Project velocity: What is the development speed for an organization?
3. Evolution metrics
  • Goal: Aspects related to how the source code changes over time and the project's mechanisms to perform and control those changes.
  • Focus areas: Code development activity, code development efficiency, code development process quality, issue resolution, and community growth.
  • Example metric: New contributors: How many contributors are making their first contribution to a given project, and who are they?
4. Diversity, equity, and inclusion metrics
  • Goal: Identify diversity, equity, and inclusion aspects of communities.
  • Focus areas: Event diversity, governance, and leadership in the project and community.
  • Example metric: Time inclusion for virtual events, where organizers are mindful of attendees and speakers in other time zones.
5. Risk metrics
  • Goal: Understand how active a community is in supporting a given software package.
  • Focus areas: Business risk, code quality, dependency risk assessment, licensing, and security.
  • Example metric: Elephant factor: What is the distribution of work in the community?
Build open communities

The Mozilla Foundation released a report in 2019 on the different types of open source projects, showing that each is created for a different reason, has different governance, chooses different licenses, and engages users and other developers to various degrees.

Your community is important, so checking in on it is essential. But what do you do once you've defined what to measure? How do you get the metrics, and once you have them, what do you do with them? The following article discusses the next steps.


Make swap better with zram on Linux

Mon, 11/14/2022 - 16:00
Make swap better with zram on Linux David Both Mon, 11/14/2022 - 03:00

Zram is a compressed RAM disk on Linux. Lately, it's been put to use for swap space on many distributions. In my previous article, I introduced zram and demonstrated how to use it. In this article, I cover some of the ways you can customize how your system puts zram to use.

Augmented swap

Zram swap can be augmented with standard secondary storage devices. Adding some traditional swap space can be especially useful on systems with low amounts of RAM. (Such augmentation isn't normally useful on hosts with very large amounts of RAM.)

If you do choose to augment swap with some type of storage device, hard drives still work but are significantly slower than SSDs in either SATA or M.2 form factors. However, the flash memory in SSDs has a more limited write life than HDDs, so systems with large amounts of swap activity can significantly reduce the lifespan of an SSD.

Tuning swap

There's more to tuning swap than simply allocating a specific amount of space. Other factors determine how the system uses and manages swap, and swappiness is the primary kernel parameter for managing swap performance.

I recently wrote How I troubleshoot swappiness and startup time on Linux, in which I discussed the vm.swappiness kernel setting. The short version is that the default setting for how aggressively the Linux kernel forces swapping to begin and to function is 60. Zero (0) is the least aggressive and 100 (or 200, depending on what you read) is the most aggressive. At that level, I was experiencing delays when working with very large documents in LibreOffice despite having 64 GB of RAM in my primary workstation, much of which was unused.

The resolution to this problem is to reduce the vm.swappiness kernel parameter to 13. This works well for my use cases, but you may need to experiment to get it right for your environment. Read the linked article for the details.
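If you want to experiment yourself, the basic workflow is short. The value 13 is simply what works for me, and the drop-in file name below is arbitrary:

$ sysctl vm.swappiness                     # check the current value
vm.swappiness = 60
$ sudo sysctl -w vm.swappiness=13          # change it for the running system
$ echo "vm.swappiness = 13" | sudo tee /etc/sysctl.d/99-swappiness.conf    # make it persistent across reboots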

You can also find more general information about tuning the Linux kernel in my article How to tune the Linux kernel with the /proc filesystem.

Swap size recommendations

At this time, I have found no recommendations from any distribution for swap size when using zram. Based on my personal experience, having at least a little swap space is beneficial. Even on hosts with large amounts of RAM, the fact that a properly tuned swap space is in use and contains data can indicate that more RAM is needed.

The default zram swap size, listed in my previous article, is more than sufficient for that purpose. It has worked well so far on all of my Linux hosts.

In my opinion, the ultimate purpose of swap space is to act as a small buffer, a red flag that lets the system administrator know when a system needs more RAM. Of course, some very old hardware cannot support more than 4 or 8 GB of RAM. In such a case, a new motherboard is needed, one that supports enough RAM to perform the task at hand.

My recommendation is to do as I have. I set up zram swap of the default size on every one of my hosts. I removed all existing swap partitions (I don't use swap files), and my hosts have all been running perfectly with that swap setup.

Removing traditional swap partitions and files

Because I've just recommended removing all of the old swap partitions, I should also mention that the process to do so is not as straightforward as it should be. It's not hard, but it took me some research to figure it out because there's a lot of old and incorrect information out there on the Internet. This procedure works for me on Fedora 36.

  1. Turn off swap for the existing swap partitions and files.

    This can be done using swapoff /dev/nameofswapdevice but it might be easiest to just turn off all swap with the swapoff -a command. This command also turns off any existing zram swap.

  2. Remove the entries for traditional swap partitions or files in the /etc/fstab file. I just commented these out, in case of unexpected problems. I deleted those entries later. Zram swap does not require an entry in the /etc/fstab file.

  3. The default /etc/default/grub configuration file is simple, and you only need concern yourself with one line:

    GRUB_TIMEOUT=5
    GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
    GRUB_DEFAULT=saved
    GRUB_DISABLE_SUBMENU=true
    GRUB_TERMINAL_OUTPUT="console"
    GRUB_CMDLINE_LINUX="resume=/dev/mapper/vg01-swap \
    rd.lvm.lv=vg01/root rd.lvm.lv=vg01/swap \
    rd.lvm.lv=vg01/usr rhgb quiet"
    GRUB_DISABLE_RECOVERY="true"
    GRUB_ENABLE_BLSCFG=true

    Change the GRUB_CMDLINE_LINUX line to:

    GRUB_CMDLINE_LINUX="rd.lvm.lv=vg01/root rd.lvm.lv=vg01/usr"

    Removing resume=/dev/mapper/vg01-swap and rd.lvm.lv=vg01/swap prevents the kernel from looking for the swap volume.

  4. You must save these changes with grub2-mkconfig. First, make a backup of the current /boot/grub2/grub.cfg file, and then run the following command:

# grub2-mkconfig > /boot/grub2/grub.cfg

Troubleshooting zram

It's important to follow all of these steps. Removing rhgb quiet causes all kernel boot messages and systemd startup messages to be displayed. This can make it easier to quickly locate problems during the boot and startup phases.

When I first tried this, I removed the logical volume I'd designated as swap space, and then rebooted to test. The reboot failed and hung early in the boot process.

Fortunately, I've set the kernel so that it displays boot and startup messages rather than the graphical boot. As a result, I was able to see the error message indicating that the kernel couldn't find the swap volume.

If this happens to you because you've missed a step, then boot from a Live Fedora USB drive, and create a new swap volume. It's not necessary to do anything else. Then reboot and remove the swap entries from the kernel line of /etc/default/grub.

Using only zram swap

After removing swap partitions and regenerating the GRUB config, I ran swapon -a and verified with swapon --show and lsblk. Rebooting the system gave me a final assurance that the system did boot properly and that the only swap is the zram swap.
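On a host using only zram swap, that check looks roughly like this (the size and priority are illustrative and depend on your system and distribution defaults):

$ swapon --show
NAME       TYPE      SIZE USED PRIO
/dev/zram0 partition   8G   0B  100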

Zram is a tool that's meant for creating compressed virtual swap space. The ideal swap configuration depends on your use case and the amount of physical RAM in your host computer. No matter what combination of zram, swap partitions, and swap files you use for swap, you should always experiment with your own system loads and verify that your swap configuration works for you. That said, using the default zram swap without any traditional swap partitions or files works for me at least as well as any other swap configuration I've ever used, and better than many.


Use a hashmap in Java

Sun, 11/13/2022 - 16:00
Use a hashmap in Java Seth Kenlon Sun, 11/13/2022 - 03:00

In the Java programming language, a hashmap is a collection of key-value pairs. Java uses hashmaps to store data. If you keep a lot of structured data, it's useful to know the various retrieval methods available.

Create a hashmap

Create a hashmap in Java by importing and using the HashMap class. When you create a hashmap, you must declare the types of the keys and values it holds (such as Integer or String). A hashmap holds key-value pairs.

package com.opensource.example;

import java.util.HashMap;

public class Mapping {
    public static void main(String[] args) {
        HashMap<String, String> myMap = new HashMap<>();
        myMap.put("foo", "hello");
        myMap.put("bar", "world");

        System.out.println(myMap.get("foo") + " " + myMap.get("bar"));
        System.out.println(myMap.get("hello") + " " + myMap.get("world"));
    }
}

The put method allows you to add data to your hashmap, and the get method retrieves data from the hashmap.

Run the code to see the output. Assuming you've saved the code in a file called main.java, you can run it directly with the java command (single-file execution like this requires Java 11 or newer):

$ java ./main.java
hello world
null null

Looking up the second value of each pair returns null because a hashmap can only be searched by key: the first value of each pair is the key, and the second is the value it maps to. This structure is known as a dictionary or associative array in some languages.

You can mix and match the types of data you put into a hashmap as long as you tell Java what to expect when creating it:

package com.opensource.example;

import java.util.HashMap;

public class Mapping {
    public static void main(String[] args) {
        HashMap<Integer, String> myMap = new HashMap<>();
        myMap.put(71, "zombie");
        myMap.put(2066, "apocalypse");

        System.out.println(myMap.get(71) + " " + myMap.get(2066));
    }
}

Run the code:

$ java ./main.java
zombie apocalypse

Iterate over a hashmap with forEach

There are many ways to retrieve all data pairs in a hashmap, but the most direct method is a forEach loop:

package com.opensource.example;

import java.util.HashMap;

public class Mapping {

    public static void main(String[] args) {
        HashMap<String, String> myMap = new HashMap<>();
        myMap.put("foo", "hello");
        myMap.put("bar", "world");

        // retrieval
        myMap.forEach( (key, value) ->
                System.out.println(key + ": " + value));
    }
}

Run the code:

$ java ./main.java
bar: world
foo: hello

Structured data

Sometimes it makes sense to use a Java hashmap so you don't have to keep track of dozens of individual variables. Once you understand how to structure and retrieve data in a language, you're empowered to generate complex data in an organized and convenient way. A hashmap isn't the only data structure you'll ever need in Java, but it's a great one for related data.

Image by: Nathan Dumlao, Unsplash

Learn Python: 7 of my favorite resources

Sat, 11/12/2022 - 16:00
Learn Python: 7 of my favorite resources Don Watkins Sat, 11/12/2022 - 03:00

I made a decision recently that I wanted to learn more Python so I could improve my instructional skills and broaden the horizons of my students. In the process, I have discovered these excellent resources that have me learning new code and improving my understanding of Python in general.

1. Teach your kids to code

I began the Python journey about seven years ago when I discovered connections between Apple LOGO and the Turtle module in Python. The Linux computer I was using at the time defaulted to Python 2.7, and I soon discovered that I wanted to use Python 3. I managed to get it installed and began writing some simple programs using the Turtle module. After reading Dr. Bryson Payne’s Teach Your Kids to Code, I realized there was a lot more to Python than just Turtle. That’s when I installed IDLE.

2. IDLE

Working with IDLE, the interactive interface improved my experience and made me confident enough to consider teaching Python to students. I volunteered to help a group of home-schooled children in my community and soon found myself teaching a class of sixteen! I’m glad their parents stayed and agreed to be my assistants, otherwise I think I’d have been overwhelmed. The experience whetted my appetite to learn more so I could teach more.

3. Mu Editor

The following spring in 2018 I attended PyConUS. I listened to a talk by Nicholas Tollervey, a middle school teacher, who had written a Python development environment for school-age children. The Mu editor has a linter built into it, which helped me to see where my errors in programming were. Mu helped me improve my coding skills, and I was able to share that with students, who benefitted as well.

As my confidence and experience grew, I became eager to share the Python journey with still more students. I co-wrote a grant the following year to teach a class that used Raspberry Pi 4 computers and Python. The pandemic interrupted that experience. In the interim, the Raspberry Pi Foundation released the Pi 400. In the spring of 2021, I used the materials I had developed the previous year and a generous grant from a local library to teach two groups of students how to program. That event was so successful that it was repeated this year.

4. VSCodium

Several years ago, I learned that Microsoft's Visual Studio Code is an open source code editor that can be used on Linux. One aspect of my Python learning journey that had eluded me until recently was how to set up and use a virtual environment for Python programming, which is suggested when using VS Code. My questions were answered here on Opensource.com in an article about venv, and that opened the door to learning how to set up and configure Python virtual environments on my Linux computer. Around the same time, I found VSCodium, a community project built around VS Code.

Now I want to share the VSCodium experience with my students and open their understanding of Python beyond the Turtle module. This zest for learning has me looking for training resources that are open source and freely available on the internet.

5. Python gently explained

The book Automate the Boring Stuff with Python by Al Sweigart has long been a favorite of mine. Now, the author has released Python Programming Exercises, Gently Explained. Both books are available online for free and are openly licensed under a Creative Commons license.

6. Python for Everybody

Dr. Charles Severance released Python for Everybody in 2017, which I highly recommend. He provides "bite-size" lessons for aspiring programmers like me. The code for the course is available on GitHub, so you can download it and install it on your own computer or school network.

7. Python videos

Recently I learned that Opensource.com alumnus Jay LaCroix has an excellent series of twenty-eight videos available for free on YouTube that begin with Python basics and build into a solid introduction to Python programming. Best of all, he uses a Linux computer, so his lessons are especially appropriate for a Linux programming environment. One of the takeaways from these videos is learning to use nano as a programming environment, and it's included by default in most Linux distributions.

Your learning path

These seven resources have helped me grow as a programmer, and they're all open source and freely available to share with others. How have you been honing your programming skills? What would you like to share? Let us know in the comments.

Image by: Yuko Honda on Flickr, CC BY-SA 2.0

How to switch from Twitter to Mastodon

Fri, 11/11/2022 - 16:00
How to switch from Twitter to Mastodon Jessica Cherry Fri, 11/11/2022 - 03:00

Like many people, I find social media somewhat exciting and also...a bit much. Sometimes you get deep-fried in algorithms, tracking data, and ads catered especially for you. You lack administrative control over what you want to see, especially on the old platforms many of us are used to. As usual, you must look to open source to fix the problem. And that's exactly what Mastodon, an open source microblogging community, does.

With Mastodon, not only are you working with open source software, but everything is decentralized, which means you can pick what you want to see based partly on the instance you join. Mastodon uses separate instances, each with its own code of conduct, privacy options, and moderation policies. That means that when you join an instance, you're less likely to see the stuff you're not interested in and more likely to see messages from people who share your interests.

However, you can also interact with other instances. All Mastodon installs have the potential to be "federated" in what its users call the "fediverse."

What is the fediverse?

The fediverse is an ensemble of federated (or interconnected) servers. The word comes from the mix of "federated" and "universe." You can use this for all sorts of web publishing, from social networking to websites to file hosting. While each instance is hosted independently, they can talk to each other.

So how can I sign up for Mastodon?

First, go to Mastodon.social to sign up.

On the right-hand side of the screen, there are Sign in and Create account buttons.

Image by: (Jess Cherry, CC BY-SA 4.0)

However, because anyone can run a Mastodon server, there are many instances, and some servers are already home to a community with interests that may align with your own. As I've said, you'll have access to the whole fediverse no matter what, but it can be nice to start on a server where people already "speak your language" (that can be literal, too, because you can add a filter to find a server in your native language).

To find a server, click the Find another server button.

Image by: (Jess Cherry, CC BY-SA 4.0)

When you click that button, you're brought to the Join Mastodon page, with a button to list available servers.

Image by: (Jess Cherry, CC BY-SA 4.0)

As you scroll down, you can pick a topic on the left to help you find where you would like to be hosted.

Image by: (Jess Cherry, CC BY-SA 4.0)

I'm all about open source, so let's see what we have in the technology topic.

Image by: (Jess Cherry, CC BY-SA 4.0)

As you can see, there's a large index with many waiting lists. In this case, it looks like Don Watkins, a fellow Opensource.com author, has chosen an instance that works for himself and our talented group. So I'll skip ahead and tell you where I'm going: There's a free open source software server known as Fosstodon, and I've chosen to sign up there so I can share my articles freely.

Here are the sign-up steps.

First, enter your information:

Image by: (Jess Cherry, CC BY-SA 4.0)

Next, you get a message about a confirmation email:

Image by: (Jess Cherry, CC BY-SA 4.0)

When you get to your email, click the Verify button, and the system prompts you to confirm your login information.

This server does have an application process to join. This process isn't just for safety reasons but also for privacy. Once approved, you get this amazing email!

Image by: (Jess Cherry, CC BY-SA 4.0)

I kept my handle from other social media venues, so it's easy to move back and forth from one place to another and cross-post with replication and API calls.

Complete control

Now that I have a new profile, I can change my preferences for which emails I receive, giving me more control over what I see. This is a good way to take back some power over my media intake, and it's greatly appreciated. Once I click Preferences, Mastodon offers appearance settings, language options, and many other choices.

Image by: (Jess Cherry, CC BY-SA 4.0)

Next, I can click notifications and limit what I see and what I get notified for, so I can opt for less noise.

Image by: (Jess Cherry, CC BY-SA 4.0)

This complete control of my media without algorithmic intervention is great. You can also set up featured hashtags for what you want on your profile to follow long-term projects or allow people to find you by following those hashtags. You also have the options for filters, followers, and so much more.

Final notes

This open source social network is a great way to find your group of people and interact with others across a broad universe of interests. Controlling your media intake is great for some balance in your life, and you can opt in to contributing by checking the contributor rules.

In addition to the control over your own social media experience, you also gain phone apps that work on all devices, including Toot for iPhone and Tusky for Android.

Long story short: I think we should all get ready for a new open source world of social media.
