The Linux Foundation

Decentralized innovation, built on trust.

Please Join Us In The January 2022 SPDX Community SBOM DocFest

Thu, 01/13/2022 - 06:45

SPDX was designed for tools to produce and consume SBOM documents. A decade of experience has shown us that tools may interpret fields differently – a file may be a syntactically valid SPDX SBOM, but different tools may fill in different values.

By coming together as a community to examine the output of multiple tools and to compare/contrast the results, we can refine the guidance to tool vendors and improve the robustness of the ecosystem sharing SPDX documents.   Historically, these events were called Bake-offs, but we’ve evolved them into “DocFests.” 

After a successful SPDX 2.2 DocFest in September of 2021, the SPDX community has decided to host another DocFest on January 27th from 7-11 AM PST. The purpose of this event is to bring together producers and consumers of SPDX documents and discuss differences between tool output and understanding for the same software artifacts. 

Specifically, the goals of this DocFest are to:

  • Come to agreement on how the fields should be populated for a given artifact
  • Identify instances where different use cases might lead to different choices for fields and structures of documents
  • Assess how well the NTIA SBOM minimum elements are covered
  • Create a set of reference SPDX SBOMs as part of the corpus for further tooling evaluation.

This event will require “sweat equity” – participants who can produce SPDX documents are expected to have generated at least one SPDX document from the target set (either source, built from source, or an image/container equivalent). Participants who consume SPDX documents are expected to run at least two SPDX documents through their tooling and share any analysis results. 

Those who have signed up and have submitted files by January 21, 2022, will receive a meeting invite to the DocFest.

To indicate interest to participate, please fill in the following form no later than January 16, 2022: https://forms.gle/Mq7ReinTY6gDL4cs9


Classic SysAdmin: How to Check Disk Space on Linux from the Command Line

Sat, 01/08/2022 - 03:00

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Quick question: How much space do you have left on your drives? A little or a lot? Follow up question: Do you know how to find out? If you happen to use a GUI desktop (e.g., GNOME, KDE, Mate, Pantheon, etc.), the task is probably pretty simple. But what if you’re looking at a headless server, with no GUI? Do you need to install tools for the task? The answer is a resounding no. All the necessary bits are already in place to help you find out exactly how much space remains on your drives. In fact, you have two very easy-to-use options at the ready.

In this article, I’ll demonstrate these tools. I’ll be using Elementary OS, which also includes a GUI option, but we’re going to limit ourselves to the command line. The good news is these command-line tools are readily available for every Linux distribution. On my testing system, there are a number of attached drives (both internal and external). The commands used are agnostic to where a drive is plugged in; they only care that the drive is mounted and visible to the operating system.

With that said, let’s take a look at the tools.

df

The df command is the tool I first used to discover drive space on Linux, way back in the 1990s. It’s very simple in both usage and reporting. To this day, df is my go-to command for this task. This command has a few switches but, for basic reporting, you really only need one: df -H. The -H switch produces human-readable output, with sizes reported in powers of 1,000 (use -h if you prefer powers of 1,024). The output of df -H will report how much space is used, available, percentage used, and the mount point of every disk attached to your system (Figure 1).

 

Figure 1: The output of df -H on my Elementary OS system.
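To give a rough idea of the shape of that report, here is an illustrative sample (the devices and values are made up, not taken from a real system):

Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/sda1       120G   42G    72G   37%  /
/dev/sdb1       1.0T  559G   442G   56%  /media/jack/HALEY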

What if your list of drives is exceedingly long and you just want to view the space used on a single drive? With df, that is possible. Let’s take a look at how much space has been used up on our primary drive, located at /dev/sda1. To do that, issue the command:

df -H /dev/sda1

The output will be limited to that one drive (Figure 2).

Figure 2: How much space is on one particular drive?

You can also limit the reported fields shown in the df output. Available fields are:

  • source — the file system source

  • size — total number of blocks

  • used — space used on a drive

  • avail — space available on a drive

  • pcent — percentage of the space used

  • target — mount point of a drive

Let’s display the output of all our drives, showing only the size, used, and avail (or availability) fields. The command for this would be:

df -H --output=size,used,avail

The output of this command is quite easy to read (Figure 3).

Figure 3: Specifying what output to display for our drives.

The only caveat here is that we don’t know the source of the output, so we’d want to include source like so:

df -H --output=source,size,used,avail

Now the output makes more sense (Figure 4).

Figure 4: We now know the source of our disk usage.
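Since df accepts a device argument alongside its options, you can also combine field selection with a single-drive query; a sketch following the same pattern as the commands above:

df -H --output=source,size,used,avail /dev/sda1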

du

Our next command is du. As you might expect, that stands for disk usage. The du command is quite different to the df command, in that it reports on directories and not drives. Because of this, you’ll want to know the names of directories to be checked. Let’s say I have a directory containing virtual machine files on my machine. That directory is /media/jack/HALEY/VIRTUALBOX. If I want to find out how much space is used by that particular directory, I’d issue the command:

du -h /media/jack/HALEY/VIRTUALBOX

The output of the above command will display the size of every file in the directory (Figure 5).

Figure 5: The output of the du command on a specific directory.

So far, this command isn’t all that helpful. What if we want to know the total usage of a particular directory? Fortunately, du can handle that task. On the same directory, the command would be:

du -sh /media/jack/HALEY/VIRTUALBOX/

Now we know how much total space the files are using up in that directory (Figure 6).

Figure 6: My virtual machine files are using 559GB of space.

You can also use this command to see how much space is being used on all child directories of a parent, like so:

du -h /media/jack/HALEY

The output of this command (Figure 7) is a good way to find out what subdirectories are hogging up space on a drive.

Figure 7: How much space are my subdirectories using?

The du command is also a great tool to use in order to see a list of directories that are using the most disk space on your system. The way to do this is by piping the output of du to two other commands: sort and head. The command to find out the top 10 directories eating space on a drive would look something like this:

du -a /media/jack | sort -n -r | head -n 10

The output would list out those directories, from largest to least offender (Figure 8).

Figure 8: Our top ten directories using up space on a drive.
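If your du and sort come from GNU coreutils (the norm on most Linux distributions), a human-readable variant of the same idea summarizes only the immediate subdirectories and sorts by size; a sketch worth keeping handy:

du -h --max-depth=1 /media/jack | sort -h -r | head -n 10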

Not as hard as you thought

Finding out how much space is being used on your Linux-attached drives is quite simple. As long as your drives are mounted to the Linux system, both df and du will do an outstanding job of reporting the necessary information. With df you can quickly see an overview of how much space is used on a disk and with du you can discover how much space is being used by specific directories. These two tools in combination should be considered must-know for every Linux administrator.

And, in case you missed it, I recently showed how to determine your memory usage on Linux. Together, these tips will go a long way toward helping you successfully manage your Linux servers.


Classic SysAdmin: Understanding Linux File Permissions

Fri, 01/07/2022 - 03:00

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Although there are already a lot of good security features built into Linux-based systems, one very important potential vulnerability can exist when local access is granted – that is, file permission-based issues resulting from a user not assigning the correct permissions to files and directories. So, based upon the need for proper permissions, I will go over the ways to assign permissions and show you some examples where modification may be necessary.

Permission Groups

Each file and directory has three user-based permission groups:

  • owner – The Owner permissions apply only to the owner of the file or directory; they will not impact the actions of other users.
  • group – The Group permissions apply only to the group that has been assigned to the file or directory; they will not affect the actions of other users.
  • all users – The All Users permissions apply to all other users on the system; this is the permission group you want to watch the most.
Permission Types

Each file or directory has three basic permission types:

  • read – The Read permission refers to a user’s capability to read the contents of the file.
  • write – The Write permissions refer to a user’s capability to write or modify a file or directory.
  • execute – The Execute permission affects a user’s capability to execute a file or view the contents of a directory.
Viewing the Permissions

You can view the permissions by checking the file or directory permissions in your favorite GUI File Manager (which I will not cover here) or by reviewing the output of the “ls -l” command while working in the directory that contains the file or folder.

The permission in the command line is displayed as: _rwxrwxrwx 1 owner:group

  1. User rights/Permissions
    1. The first character that I marked with an underscore is the special permission flag that can vary.
    2. The following set of three characters (rwx) is for the owner permissions.
    3. The second set of three characters (rwx) is for the Group permissions.
    4. The third set of three characters (rwx) is for the All Users permissions.
  2. Following that grouping, the integer/number displays the number of hard links to the file.
  3. The last piece is the Owner and Group assignment formatted as Owner:Group.
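To make that concrete, here is a hypothetical line of ls -l output (the file name, owner, and group are invented; note that real ls output uses dashes rather than the underscores shown above):

-rwxr-x--- 1 user1 family 4096 Jan 7 10:30 file1

Reading left to right: a regular file (no special permission flag), rwx for the owner, r-x for the group, --- for all users, one hard link, owned by user1 and assigned to the group family.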
Modifying the Permissions

When in the command line, the permissions are edited by using the command chmod. You can assign the permissions explicitly or by using a binary reference as described below.

Explicitly Defining Permissions

To explicitly define permissions you will need to reference the Permission Group and Permission Types.

The Permission Groups used are:

  • u – Owner
  • g – Group
  • o – Others
  • a – All users

The potential Assignment Operators are + (plus) and – (minus); these are used to tell the system whether to add or remove the specific permissions.

The Permission Types that are used are:

  • r – Read
  • w – Write
  • x – Execute

So for example, let’s say I have a file named file1 that currently has the permissions set to _rw_rw_rw, which means that the owner, group, and all users have read and write permission. Now we want to remove the read and write permissions from the all users group.

To make this modification you would invoke the command: chmod a-rw file1
To add the permissions above you would invoke the command: chmod a+rw file1

As you can see, if you want to grant those permissions you would change the minus character to a plus to add those permissions.
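Also worth knowing: chmod accepts several of these clauses at once, separated by commas (standard chmod syntax, shown on the same hypothetical file1). For example, to give the owner execute permission while removing read and write from all other users:

chmod u+x,o-rw file1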

Using Binary References to Set permissions

Now that you understand the permission groups and types, this one should feel natural. To set permissions using binary references, you must first understand that the input is done by entering three integers/numbers.

A sample permission string would be chmod 640 file1, which means that the owner has read and write permissions, the group has read permissions, and all other users have no rights to the file.

The first number represents the Owner permission; the second represents the Group permissions; and the last number represents the permissions for all other users. The numbers are a binary representation of the rwx string.

  • r = 4
  • w = 2
  • x = 1

You add the numbers to get the integer/number representing the permissions you wish to set. You will need to include the binary permissions for each of the three permission groups.

So, to set the permissions on file1 to _rwxr_____, you would enter chmod 740 file1.
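Spelling out the arithmetic behind that string (file1 is the same hypothetical file as above):

owner: rwx = 4 + 2 + 1 = 7
group: r = 4
all users: (none) = 0

chmod 740 file1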

Owners and Groups

I have made several references to Owners and Groups above, but have not yet told you how to assign or change the Owner and Group assigned to a file or directory.

You use the chown command to change the owner and group assignments. The syntax is simple:

chown owner:group filename

So, to change the owner of file1 to user1 and the group to family, you would enter chown user1:family file1.
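If you need to change the owner and group of a directory and everything beneath it, chown’s -R flag (standard on Linux) applies the change recursively; a sketch using hypothetical names:

chown -R user1:family /home/user1/projects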

Advanced Permissions

The special permissions flag can be marked with any of the following:

  • _ – no special permissions
  • d – directory
  • l – the file or directory is a symbolic link
  • s – This indicates the setuid/setgid permissions. The flag is not shown in the special permission part of the permissions display, but appears as an s in the execute portion of the owner or group permissions.
  • t – This indicates the sticky bit permissions. The flag is not shown in the special permission part of the permissions display, but appears as a t in the execute portion of the all users permissions.
Setuid/Setgid Special Permissions

The setuid/setgid permissions are used to tell the system to run an executable as the owner, with the owner’s permissions.

Be careful using setuid/setgid bits in permissions. If you incorrectly assign permissions to a file owned by root with the setuid/setgid bit set, then you can open your system to intrusion.

You can only assign the setuid/setgid bit by explicitly defining permissions. The character for the setuid/setgid bit is s.

So, to set the setgid bit on file2.sh, you would issue the command chmod g+s file2.sh.

Sticky Bit Special Permissions

The sticky bit can be very useful in shared environments because, when it is assigned to a directory’s permissions, only a file’s owner can rename or delete that file within the directory.

You can only assign the sticky bit by explicitly defining permissions. The character for the sticky bit is t.

To set the sticky bit on a directory named dir1 you would issue the command chmod +t dir1.
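You can see the sticky bit in the wild on virtually every Linux system: the /tmp directory is world-writable but sticky, so users cannot delete each other’s files. A quick check:

ls -ld /tmp

will typically report permissions of drwxrwxrwt, with the trailing t indicating the sticky bit.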

When Permissions Are Important

Many users of Mac- or Windows-based computers never think about permissions, because those environments don’t focus so aggressively on user-based rights on files unless you are in a corporate environment. Now that you are running a Linux-based system, permission-based security is simplified and can easily be used to restrict access as you please.

So I will show you some documents and folders that you want to focus on and show you how the optimal permissions should be set.

  • home directories – The users’ home directories are important because you do not want other users to be able to view and modify the files in another user’s documents or desktop. To remedy this, you will want the directory to have the drwx______ (700) permissions. So, let’s say we want to enforce the correct permissions on user1’s home directory; that can be done by issuing the command chmod 700 /home/user1.
  • bootloader configuration files – If you decide to implement a password to boot specific operating systems, then you will want to remove read and write permissions from the configuration file for all users but root. To do so, you can change the permissions of the file to 700.
  • system and daemon configuration files – It is very important to restrict rights to system and daemon configuration files to keep users from editing the contents. It may not be advisable to restrict read permissions, but restricting write permissions is a must. In these cases it may be best to modify the rights to 644.
  • firewall scripts – It may not always be necessary to block all users from reading the firewall file, but it is advisable to restrict them from writing to it. In this case, the firewall script is run by the root user automatically on boot, so all other users need no rights; you can assign the 700 permissions.

Other examples can be given, but this article is already very lengthy, so if you want to share other examples of needed restrictions please do so in the comments.


Classic SysAdmin: How to Move Files Using Linux Commands or File Managers

Thu, 01/06/2022 - 03:00

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

There are certain tasks that are done so often, users take for granted just how simple they are. But then, you migrate to a new platform and those same simple tasks begin to require a small portion of your brain’s power to complete. One such task is moving files from one location to another. Sure, it’s most often considered one of the more rudimentary actions to be done on a computer. When you move to the Linux platform, however, you may find yourself asking “Now, how do I move files?”

If you’re familiar with Linux, you know there are always many routes to the same success. Moving files is no exception. You can opt for the power of the command line or the simplicity of the GUI – either way, you will get those files moved.

Let’s examine just how you can move those files about. First, we’ll examine the command line.

Command line moving

One of the issues so many users new to Linux face is the idea of having to use the command line. It can be somewhat daunting at first. Although modern Linux interfaces can help to ensure you rarely have to use this “old school” tool, there is a great deal of power you would be missing if you ignored it altogether. The command for moving files is a perfect illustration of this.

The command to move files is mv. It’s very simple and one of the first commands you will learn on the platform. Instead of just listing out the syntax and the usual switches for the command – and then allowing you to do the rest – let’s walk through how you can make use of this tool.

The mv command does one thing – it moves a file from one location to another. This can be somewhat misleading because mv is also used to rename files. How? Simple. Here’s an example. Say you have the file testfile in /home/jack/ and you want to rename it to testfile2 (while keeping it in the same location). To do this, you would use the mv command like so:

mv /home/jack/testfile /home/jack/testfile2

or, if you’re already within /home/jack:

mv testfile testfile2

The above commands would move /home/jack/testfile to /home/jack/testfile2 – effectively renaming the file. But what if you simply wanted to move the file? Say you want to keep your home directory (in this case /home/jack) free from stray files. You could move that testfile into /home/jack/Documents with the command:

mv /home/jack/testfile /home/jack/Documents/

With the above command, you have relocated the file into a new location, while retaining the original file name.

What if you have a number of files you want to move? Luckily, you don’t have to issue the mv command for every file. You can use wildcards to help you out. Here’s an example:

You have a number of .mp3 files in your ~/Downloads directory (~/ is an easy way to represent your home directory – in our earlier example, that would be /home/jack/) and you want them in ~/Music. You could quickly move them with a single command, like so:

mv ~/Downloads/*.mp3 ~/Music/

That command would move every file ending in .mp3 from the Downloads directory into the Music directory.

Should you want to move a file into the parent directory of the current working directory, there’s an easy way to do that. Say you have the file testfile located in ~/Downloads and you want it in your home directory. If you are currently in the ~/Downloads directory, you can move it up one folder (to ~/) like so:

mv testfile ../ 

The “../” refers to the directory one level up. If you’re buried deeper, say in ~/Downloads/today/, you can still easily move that file up two levels with:

mv testfile ../../

Just remember, each “../” represents one level up.
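One related flag worth knowing (standard mv behavior) is -i, which prompts before overwriting a file that already exists at the destination – cheap insurance while you’re shuffling files around:

mv -i testfile ../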

As you can see, moving files from the command line isn’t difficult at all.

GUI

There are a lot of GUIs available for the Linux platform. On top of that, there are a lot of file managers you can use. The most popular file managers are Nautilus (GNOME) and Dolphin (KDE). Both are very powerful and flexible. I want to illustrate how files are moved using the Nautilus file manager.

Nautilus has probably the most efficient means of moving files about. Here’s how it’s done:

  1. Open up the Nautilus file manager.
  2. Locate the file you want to move and right-click said file.
  3. From the pop-up menu (Figure 1) select the “Move To” option.
  4. When the Select Destination window opens, navigate to the new location for the file.
  5. Once you’ve located the destination folder, click Select.

This context menu also allows you to copy the file to a new location, move the file to the Trash, and more.

If you’re more of a drag-and-drop kind of person, fear not – Nautilus is ready to serve. Let’s say you have a file in your home directory and you want to drag it to Documents. By default, Nautilus will have a few bookmarks in the left pane of the window. You can drag the file onto the Documents bookmark without having to open a second Nautilus window. Simply click, hold, and drag the file from the main viewing pane to the Documents bookmark.

If, however, the destination for that file is not listed in your bookmarks (or doesn’t appear in the current main viewing pane), you’ll need to open up a second Nautilus window. Side by side, you can then drag the file from the source folder in the original window to the destination folder in the second window.

If you need to move multiple files, you’re still in luck. Similar to nearly every modern user interface, you can do a multi-select of files by holding down the Ctrl button as you click each file. After you have selected each file (Figure 2), you can either right-click one of the selected files and then choose the Move To option, or just drag and drop them into a new location.

The selected files (in this case, folders) will each be highlighted.

Moving files on the Linux desktop is incredibly easy. Either with the command line or your desktop of choice, you have numerous routes to success – all of which are user-friendly and quick to master.


Classic SysAdmin: How to Search for Files from the Linux Command Line

Wed, 01/05/2022 - 03:00

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

It goes without saying that every good Linux desktop environment offers the ability to search your file system for files and folders. If your default desktop doesn’t — because this is Linux — you can always install an app to make searching your directory hierarchy a breeze.

But what about the command line? If you happen to frequently work in the command line or you administer GUI-less Linux servers, where do you turn when you need to locate a file? Fortunately, Linux has exactly what you need to locate the files in question, built right into the system.

The command in question is find. To make the understanding of this command even more enticing, once you know it, you can start working it into your Bash scripts. That’s not only convenience, that’s power.

Let’s get up to speed with the find command so you can take control of locating files on your Linux servers and desktops, without the need of a GUI.

How to use the find command

When I first glimpsed Linux, back in 1997, I didn’t quite understand how the find command worked; therefore, it never seemed to function as I expected. It seemed simple; issue the command find FILENAME (where FILENAME is the name of the file) and the command was supposed to locate the file and report back. Little did I know there was more to the command than that. Much more.

If you issue the command man find, you’ll see the syntax of the find command is:

find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point...] [expression]

Naturally, if you’re unfamiliar with how man works, you might be confused about or overwhelmed by that syntax. For ease of understanding, let’s simplify that. The most basic syntax of a basic find command would look like this:

find /path option filename

Now we’ll see it at work.

Find by name

Let’s break down that basic command to make it as clear as possible. The most simplistic structure of the find command should include a path for the file, an option, and the filename itself. You may be thinking, “If I know the path to the file, I’d already know where to find it!” Well, the path for the file could be the root of your drive, so / would be a legitimate path. Entering that as your path would take find longer to process – because it has to start from scratch – but if you have no idea where the file is, you can start from there. In the name of efficiency, it is always best to have at least an idea where to start searching.

The next bit of the command is the option. As with most Linux commands, you have a number of available options. However, we are starting from the beginning, so let’s make it easy. Because we are attempting to find a file by name, we’ll use one of two options:

  • name – case sensitive

  • iname – case insensitive

Remember, Linux is very particular about case, so if you’re looking for a file named Linux.odt, the following command will return no results.

find / -name linux.odt

If, however, you were to alter the command by using the -iname option, the find command would locate your file, regardless of case. So the new command looks like:

find / -iname linux.odt

Find by type

What if you’re not so concerned with locating a file by name but would rather locate all files of a certain type? Some of the more common file type descriptors are:

  • f – regular file

  • d – directory

  • l – symbolic link

  • c – character devices

  • b – block devices

Now, suppose you want to locate all character devices (files that refer to a device) on your system. With the help of the -type option, we can do that like so:

find / -type c

The above command would result in quite a lot of output (much of it indicating permission denied), but would include output similar to:

/dev/hidraw6
/dev/hidraw5
/dev/vboxnetctl
/dev/vboxdrvu
/dev/vboxdrv
/dev/dmmidi2
/dev/midi2
/dev/kvm

Voilà! Character devices.

We can use the same option to help us look for configuration files. Say, for instance, you want to locate all regular files that end in the .conf extension. This command would look something like:

find / -type f -name "*.conf"

The above command would traverse the entire directory structure to locate all regular files ending in .conf. If you know most of your configuration files are housed in /etc, you could specify that like so:

find /etc -type f -name "*.conf"

The above command would list all of your .conf files from /etc (Figure 1).

 

Figure 1: Locating all of your configuration files in /etc.

Outputting results to a file

One really handy trick is to output the results of the search into a file. When you know the output might be extensive, or if you want to comb through the results later, this can be incredibly helpful. For this, we’ll use the same example as above and pipe the results into a file called conf_search. This new command would look like:

find /etc -type f -name "*.conf" > conf_search

You will now have a file (conf_search) that contains all of the results from the find command issued.
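Because a system-wide search sprays “permission denied” errors for directories you can’t read, a common refinement (plain shell redirection, nothing specific to find) is to discard those messages so the file holds only real results:

find /etc -type f -name "*.conf" 2>/dev/null > conf_search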

Finding files by size

Now we get to a moment where the find command becomes incredibly helpful. I’ve had instances where desktops or servers have found their drives mysteriously filled. To quickly make space (or help locate the problem), you can use the find command to locate files of a certain size. Say, for instance, you want to go large and locate files that are over 1000MB. The find command can be issued, with the help of the -size option, like so:

find / -size +1000M

You might be surprised at how many files turn up. With the output from the command, you can comb through the directory structure and free up space or troubleshoot to find out what is mysteriously filling up your drive.

You can search with the following size descriptions:

  • c – bytes

  • k – Kilobytes

  • M – Megabytes

  • G – Gigabytes

  • b – 512-byte blocks
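These descriptors can also be combined to bound a range. As a sketch, the following looks for files between 100MB and 1GB under /var:

find /var -size +100M -size -1G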

Keep learning

We’ve only scratched the surface of the find command, but you now have a fundamental understanding of how to locate files on your Linux systems. Make sure to issue the command man find to get a deeper, more complete, knowledge of how to make this powerful tool work for you.


A New Year’s Message from Nithya Ruff (2022)

Wed, 01/05/2022 - 00:00

The last two years have demonstrated even more clearly that technology is the crucial fabric that weaves society and the economy together. From video conferencing to online shopping and delivery to remote collaboration tools for work, technology helped society continue to function throughout the pandemic in 2020 and the continuing uncertainty of 2021. All that technology (and more) is powered, quite literally, by open source, in one way or another. Software is eating the world and open source software is becoming the dominant part of software, from the operating system to the database and messaging layer up to the frameworks that drive the user experience. Few, if any, organizations and enterprises could run operations today without relying on open source.

Not surprisingly, as it becomes more pervasive and mission-critical, open source is also proving to be a larger economic force. Public and private companies focused on selling open source software or services now have a collective market value approaching half a trillion dollars. There is no easy way to account for the total economic value of open source consumed by all businesses, individuals, nonprofits, and governments; the value enabled is likely well into the trillions of dollars. Open source powers cloud computing, the Internet, Android phones, mobile apps, cars — even the Mars helicopter launched by NASA. Open source also powers much of consumer electronics on the market today. 

With prominent positions in society and the economy comes an urgent imperative to better address risk and security. The Linux Foundation is working with its members to take on these challenges for open source. We launched multiple new initiatives in 2021 to make the open source technology ecosystem and software supply chain more secure, transparent, and resilient. From the Software Bill of Materials to the Open Source Security Foundation, the LF, its members, and its projects and communities are collaborating with paramount importance to secure the open source supply chain.

Behind risk management and security considerations — and technology development in general — are real people. This is also why the Linux Foundation is making substantial investments in supporting diversity and inclusion in open source communities. We need to take action as a community to make open source more inclusive and welcoming. We can do this in a data-driven fashion with research on what issues hinder our progress and develop actions that will, we hope, drive measurable improvements. 

Working together on collective efforts, beyond just our company and ourselves, is not just good for business; it is personally rewarding. Recently one of our engineers explained that he loves working with open source because he feels it gives him a global network of teachers, all helping him become better. I believe this is why open source is one of the most powerful forces in the world today, and it is only growing stronger. Through a pandemic, through economic challenges, day in, day out, we see people helping each other regardless of their demographics. Open source brings out the best in people by encouraging them to work together to solve great challenges and dream big. This is why in 2021, I am excited to see all the new collaborations, expanding our collective efforts to address truly global problems in agriculture, public health, and other areas that are far bigger than any one project. 

After a successful 2021, and hopefully, with a pandemic fading into our rearview mirrors, I am optimistic for an even more amazing 2022. Thank you for your support and guidance, and I wish you all a Happy New Year!

Nithya Ruff
Chair of the Board of Directors, The Linux Foundation

These efforts are made possible by our members and communities. To learn how your organization can get involved with the Linux Foundation, click here.


Classic SysAdmin: How to Kill a Process from the Linux Command Line

Tue, 01/04/2022 - 03:00

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Picture this: You’ve launched an application (be it from your favorite desktop menu or from the command line) and you start using that launched app, only to have it lock up on you, stop performing, or unexpectedly die. You try to run the app again, but it turns out the original never truly shut down completely.

What do you do? You kill the process. But how? Believe it or not, your best bet most often lies within the command line. Thankfully, Linux has every tool necessary to empower you, the user, to kill an errant process. However, before you immediately launch that command to kill the process, you first have to know what the process is. How do you take care of this layered task? It’s actually quite simple…once you know the tools at your disposal.

Let me introduce you to said tools.

The steps I’m going to outline will work on almost every Linux distribution, whether it is a desktop or a server. I will be dealing strictly with the command line, so open up your terminal and prepare to type.

Locating the process

The first step in killing the unresponsive process is locating it. There are two commands I use to locate a process: top and ps. Top is a tool every administrator should get to know. With top, you get a full listing of currently running processes. From the command line, issue top to see a list of your running processes (Figure 1).

Figure 1: The top command gives you plenty of information.

From this list you will see some rather important information. Say, for example, Chrome has become unresponsive. According to our top display, we can discern there are four instances of chrome running with Process IDs (PID) 3827, 3919, 10764, and 11679. This information will be important to have with one particular method of killing the process.

Although top is incredibly handy, it’s not always the most efficient means of getting the information you need. Let’s say you know the Chrome process is what you need to kill, and you don’t want to have to glance through the real-time information offered by top. For that, you can make use of the ps command and filter the output through grep. The ps command reports a snapshot of the current processes, and grep prints lines matching a pattern. The reason why we filter ps through grep is simple: if you issue the ps command by itself, you will get a snapshot listing of all current processes. We only want the listing associated with Chrome. So this command would look like:

ps aux | grep chrome

The aux options are as follows:

  • a = show processes for all users

  • u = display the process’s user/owner

  • x = also show processes not attached to a terminal

The x option is important when you’re hunting for information regarding a graphical application.

When you issue the command above, you’ll be given more information than you need (Figure 2) for the killing of a process, but it is sometimes more efficient than using top.
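One quirk of this approach is that the grep process itself usually shows up in the results, because its own command line contains the word chrome. A common trick (plain shell, not specific to ps) is to wrap one character of the pattern in brackets so grep no longer matches its own command line:

ps aux | grep "[c]hrome"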

Figure 2: Locating the necessary information with the ps command.

Killing the process

Now we come to the task of killing the process. We have two pieces of information that will help us kill the errant process:

  • Process name
  • Process ID

Which you use will determine the command used for termination. There are two commands used to kill a process:

  • kill – Kill a process by ID
  • killall – Kill a process by name

There are also different signals that can be sent to both kill commands. What signal you send will be determined by what results you want from the kill command. For instance, you can send the HUP (hang up) signal with the kill command, which will effectively restart the process. This is always a wise choice when you need the process to immediately restart (such as in the case of a daemon). You can get a list of all the signals that can be sent to the kill command by issuing kill -l. You’ll find quite a large number of signals (Figure 3).

Figure 3: The available kill signals.
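Going back to the HUP example mentioned above, the signal can be sent by name or by value (the PID below is hypothetical):

kill -HUP 1234

This is equivalent to kill -1 1234.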

The most common kill signals are:

  • SIGHUP (signal value 1) – Hangup
  • SIGINT (signal value 2) – Interrupt from keyboard
  • SIGKILL (signal value 9) – Kill signal
  • SIGTERM (signal value 15) – Termination signal
  • SIGSTOP (signal value 17, 19, 23) – Stop the process

What’s nice about this is that you can use the signal value in place of the signal name, so you don’t have to memorize all of the names of the various signals.

So, let’s now use the kill command to kill our instance of chrome. The structure for this command would be:

kill SIGNAL PID

Where SIGNAL is the signal to be sent and PID is the Process ID to be killed. We already know, from our ps command that the IDs we want to kill are 3827, 3919, 10764, and 11679. So to send the kill signal, we’d issue the commands:

kill -9 3827
kill -9 3919
kill -9 10764
kill -9 11679

Once we’ve issued the above commands, all of the chrome processes will have been successfully killed.

Let’s take the easy route! If we already know the process we want to kill is named chrome, we can make use of the killall command and send the same signal to the process like so:

killall -9 chrome

The only caveat to the above command is that it may not catch all of the running chrome processes. If, after running the above command, you issue the ps aux|grep chrome command and see remaining processes running, your best bet is to go back to the kill command and send signal 9 to terminate the process by PID.
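As an aside, many distributions also ship pkill (from the procps suite), which matches processes by name much like killall; a rough equivalent of the command above would be:

pkill -9 chrome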

Ending processes made easy

As you can see, killing errant processes isn’t nearly as challenging as you might have thought. When I wind up with a stubborn process, I tend to start off with the killall command as it is the most efficient route to termination. However, when you wind up with a really feisty process, the kill command is the way to go.


Open Source Foundations Must Work Together to Prevent the Next Log4Shell Scramble

Fri, 12/17/2021 - 11:08
Brian Behlendorf

As someone who has spent their entire career in open source software (OSS), the Log4Shell scramble (an industry-wide four-alarm-fire to address a serious vulnerability in the Apache Log4j package) is a humbling reminder of just how far we still have to go. OSS is now central to the functioning of modern society, as critical as highway bridges, bank payment platforms, and cell phone networks, and it’s time OSS foundations started to act like it.

Organizations like the Apache Software Foundation, the Linux Foundation, the Python Foundation, and many more, provide legal, infrastructural, marketing and other services for their communities of OSS developers. In many cases the security efforts at these organizations are under-resourced and hamstrung in their ability to set standards and requirements that would mitigate the chances of major vulnerabilities, for fear of scaring off new contributors. Too many organizations have failed to apply raised funds or set process standards to improve their security practices, and have unwisely tilted in favor of quantity over quality of code.

What would “acting like it” look like? Here are a few things that OSS foundations can do to mitigate security risks:

  1. Set up an organization-wide security team to receive and triage vulnerability reports, as well as coordinate responses and disclosures to other affected projects and organizations.
  2. Perform frequent security scans, through CI tooling, for detecting unknown vulnerabilities in the software and recognizing known vulnerabilities in dependencies.
  3. Perform occasional outside security audits of critical code, particularly before new major releases.
  4. Require projects to use test frameworks, and ensure high code coverage, so that features without tests are discouraged and underused features are weeded out proactively.
  5. Require projects to remove deprecated or vulnerable dependencies. (Some Apache projects are not vulnerable to the Log4j v2 CVE, because they are still shipping with Log4j v1, which has known weaknesses and has not received an update since 2015!)
  6. Encourage, and then eventually require, the use of SBOM formats like SPDX to help everyone track dependencies more easily and quickly, so that vulnerabilities are easier to find and fix.
  7. Encourage, and then eventually require, maintainers to demonstrate familiarity with the basics of secure software development practices.

Many of these are incorporated into the CII Best Practices badge, one of the first attempts to codify these into an objective comparable metric, and an effort that has now moved to OpenSSF. The OpenSSF has also published a free course for developers on how to develop secure software, and SPDX has recently been published as an ISO standard.

None of the above practices is about paying developers more, or channeling funds directly from users of software to developers. Don’t get me wrong, open source developers and the people who support them should be paid more and appreciated more in general. However, it would be an insult to most maintainers to suggest that if you’d just slipped more money into their pockets they would have written more secure code. At the same time, it’s fair to say a tragedy-of-the-commons hits when every downstream user assumes that these practices are in place, being done and paid for by someone else.

Applying these security practices and providing the resources required to address them is what foundations are increasingly expected to do for their community. Foundations should begin to establish security-related requirements for their hosted and mature projects. They should fundraise from stakeholders the resources required for regular paid audits for their most critical projects, scanning tools and CI for all their projects, and have at least a few paid staff members on a cross-project security team so that time-critical responses aren’t left to individual volunteers. In the long term, foundations should consider providing resources to move critical projects or segments of code to memory-safe languages, or fund bounties for more tests.

The Apache Software Foundation seems to have much of this right, let’s be clear. Despite being notified just before the Thanksgiving holiday, their volunteer security team worked with the Log4j maintainers and responded quickly. Log4j also has almost 8000 passing tests in its CI pipeline, but even all that testing didn’t catch the way this vulnerability could be exploited. And in general, Apache projects are not required to have test coverage at all, let alone run the kind of SAST security scans or host third party audits that might have caught this.

Many other foundations, including those hosted at the Linux Foundation, also struggle to do all this – none of it is easy to push through the laissez-faire philosophy that many foundations have regarding code quality, and third-party code audits and tests don’t come cheap. But for the sake of sustainability, reducing the impact on the broader community, and being more resilient, we have got to do better. And we’ve got to do this together, as a crisis of confidence in OSS affects us all.

This is where OpenSSF comes in, and what pulled me to the project in the first place. In the new year you’ll see us announce a set of new initiatives that build on the work we’ve been doing to “raise the floor” for security in the open source community. The only way we do this effectively is to develop tools, guidance, and standards that make adoption by the open source community encouraged and practical rather than burdensome or bureaucratic. We will be working with and making grants to other open source projects and foundations to help them improve their security game. If you want to stay close to what we’re doing, follow us on Twitter or get involved in other ways. For a taste of where we’ve been to date, read our segment in the Linux Foundation Annual Report, or watch our most recent Town Hall.

Hoping for a 2022 with fewer four alarm fires,

Brian

Brian Behlendorf is General Manager of the Linux Foundation’s Open Source Security Foundation (OpenSSF). He was a founding member of the Apache Group, which later became the Apache Software Foundation, and served as president of the foundation for three years.


OSPOlogy: Learnings from OSPOs in 2021

Fri, 12/17/2021 - 00:50

In 2021, OSPOlogy covered a wide range of open source topics essential to OSPO-related activities, presented by open source experts from mature OSPOs like Bloomberg and RIT and from the communities behind open source standards like OpenChain and CHAOSS.

The TODO Group has been paving the OSPO path over a decade of change and is now composed of a worldwide community of open source professionals working in collaboration to drive Open Source Initiatives to the next level. 

The TODO Group Member Landscape

One of the many initiatives that the TODO Group has been working on since last August is OSPOlogy. With OSPOlogy, the TODO Group aims to make it easier for organizations across sectors to understand and adopt OSPOs through open and transparent networking: engaging with open source leaders in real-time conversations.

“In OSPOlogy, we have the participation of experienced OSPO leaders like Bloomberg, Microsoft, or SAP; widely adopted projects/initiatives such as OpenChain, CHAOSS, or SPDX; and industry open source specialists like LF Energy or FINOS. There is a huge diversity of folks in the open source ecosystem who help people and organizations improve their open source programs, their OSPO management skills, or advance in their OSPO careers. Thus, after listening to the community demands, we decided to offer a space with dedicated resources to make these connections happen, under an open governance model designed to encourage other organizations and communities to contribute.”

AJ – OSPO Program Manager at TODO Group

What has OSPOlogy accomplished so far?

Within the OSPOlogy 2021 series, we had insightful discussions spanning five different OSPO topics.

For more information, please watch the video replays on our OSPOlogy YouTube channel here

The format is pretty simple: OSPOlogy kicks off each meeting with the OSPO news happening worldwide that month, then moves to the topic of the day, where featured guests introduce a subject relevant to OSPOs and ways to set up open source initiatives. These two sections are recorded and published on the LF Community platform and the new OSPOlogy YouTube channel.

Once the presentation finishes, we stop the recording and move to real-time conversations and a Q&A section under the Chatham House Rule, in order to keep a safe environment where the community can freely share opinions and issues.

“One of the biggest challenges when preparing the 2021 agenda was getting used to the new platform used to host these meetings and finding contributors to kick off the initiative. We keep improving the quality and experience of these meetings every month and, thanks to the feedback received from the community, are building new stuff for 2022.”

AJ – OSPO Program Manager at TODO Group

TODO Mission: build the next OSPOlogy 2022 series together

The TODO Group places great importance on neutrality. That’s why this project (like the other TODO projects) is under an open governance model, allowing people from other organizations and peers across sectors to freely contribute and grow this initiative together.

OSPOlogy has a planning doc, governance guidelines, and a topic pool agenda to:

  • Propose new topics
  • Offer to be a moderator
  • Become a speaker

https://github.com/todogroup/ospology/tree/main/meetings.

“During the past months, we have been reaching out to other communities like FINOS, LF Energy, OpenChain, SPDX, or CHAOSS. These projects have become of vital importance to many OSPO activities – for specific tasks such as managing open source compliance and ISO standards, measuring the impact of relevant open source projects, or helping to overcome entry barriers for more traditional sectors, like the finance or energy industries.”

OSPOlogy, along with the TODO Associates program, aims to bring together all these projects to introduce them to the OSPO community and drive insightful discussions. These are some of the topics proposed by the community for 2022:

  • How to start an OSPOs within the Energy sector
  • How to start an OSPOs within the Finance sector
  • Measuring the impact of the open source projects that matters to your organization
  • Open Source Compliance best practices in the lens of an OSPO

OSPOlogy is not just limited to LF projects and the TODO Community. Outside initiatives, foundations, or vendors that work closely with OSPOs and help the OSPO movement are also welcome to join.

We have just created a CFP form so people can easily add their OSPO topics for upcoming OSPOlogy sessions:

https://github.com/todogroup/ospology/blob/main/.github/ISSUE_TEMPLATE/call-for-papers.yml

In order to propose a topic, interested folks just need to open an issue using the call for papers GitHub form.

The TODO Group’s journey: Paving the OSPO path over a decade of change

Significant advancements and community shifts have occurred in the open source ecosystem since the TODO Group was formed, changing the way organizations advance in their open source journey. Back then, most OSPOs were gathered in the Bay Area and led by software companies, and they preferred to share only limited information due to the uncertainty across the industry.

OSPO Maturity Levels

However, that early version of TODO is far removed from what it (and OSPOs) represent in the present day.

With digital transformation forcing all organizations to be open source forward and OSPOs adopted by multiple sectors, the TODO Group is composed of a worldwide community of open source professionals working in collaboration to drive Open Source Initiatives to the next level.

It is well known that TODO Group members are also OSPO mentors and advocates who have been working in the open source industry for years.

At TODO group, we know the huge value these experienced OSPO leaders can bring to the community since they can help to pave the path for the new generation of OSPOs, cultivating the open source ecosystem. Two main challenges mark 2022:

  1. Provide Structure and Guidance within the OSPO Industry based on the experience of Mature OSPO professionals across sectors and stages.
  2. Collaborate with other communities to enhance this guidance

New OSPO challenges are coming, and new TODO milestones and initiatives are taking shape to help the OSPO movement succeed across sectors. You will hear news about TODO’s 2022 strategic goals and direction very soon!


A 2021 Linux Foundation Research Year in Review

Thu, 12/16/2021 - 22:00

Through LF Research, the Linux Foundation is uniquely positioned to create the definitive repository of insights into open source. By engaging with our community members and leveraging the full resources of our data sources, including a new and improved LFX, we’re not only shining a light on the scope of the projects that comprise much of the open source paradigm but contextualizing their impact. In the process, we’re creating both a knowledge hub and an ecosystem-wide knowledge network. Because, after all, research is a team sport.

Taking inspiration from research on open innovation, LF Research will explore open source amidst the challenges of the current era. These include challenges like the COVID-19 pandemic, climate risk, and accelerating digital transformation — all changing what it means to be a technology company or an organization that deeply relies on innovation. By publishing a new suite of research deliverables that aid in strategy formation and decision-making, LF Research intends to create shared value for all stakeholders in our community and inspire greater levels of participation in it. 

Completed Core Research
  • The 2021 Linux Foundation Report on Diversity, Equity, and Inclusion in Open Source, produced in partnership with AWS, CHAOSS, Comcast, Fujitsu, GitHub, GitLab, Hitachi, Huawei, Intel, NEC, Panasonic, Red Hat, Renesas, and VMware, seeks to understand the demographics and dynamics concerning overall participation in open source communities and to identify gaps to be addressed, all as a means to advancing inclusive cultures within open source environments. This research aims to drive data-driven decisions on future programming and interventions to benefit the people who develop and ultimately use open source technologies. Enterprise Digital Transformation, Techlash, Political Polarization, Social Media Ecosystem, and Content Moderation are all cited as trends that have exposed and amplified exclusionary narratives and designs, mandating increased awareness, and recalibrating individual and organizational attention. Beyond the survey findings that identify the state of DEI, this research explores a number of DEI initiatives and their efficacy and recommends action items for the entire stakeholder ecosystem to further their efforts and build inclusion by design.
Core Research in Progress
  • The Software Bill of Materials (SBOM) Readiness Survey (estimated release: Q1 2022), produced in partnership with the Open Source Security Foundation, OpenChain, and SPDX, is the Linux Foundation’s first project in a series designed to explore ways to better secure software supply chains. With a focus on SBOMs, the findings are based on a worldwide survey of IT professionals who understand their organization’s approach to software development, procurement, compliance, or security. An important driver for this survey is the recent U.S. Executive Order on Cybersecurity, which focuses on producing and consuming SBOMs. 
Completed Project-Focused Research
  • The Fourth Annual Open Source Program Management (OSPO) Survey, produced in collaboration with the TODO Group and The New Stack, examines the prevalence and outcomes of open source programs, including the key benefits and barriers to adoption.
  • The 2021 State of Open Source in Financial Services Report produced in partnership with FINOS, Scott Logic, Wipro, and GitHub, explores the state of open source in the financial services sector. The report identifies current levels of consumption and contribution of open source software and standards in this industry and the governance, cultural, and aspirational issues of open source among banks, asset managers, and hedge funds.
  • The 2021 Data and Storage Trends Survey, produced in collaboration with the SODA Foundation, identifies the current challenges, gaps, and trends for data and storage in the era of cloud-native, edge, AI, and 5G.
  • The 9th Annual Open Source Jobs Report, produced in partnership with edX, provides actionable insights on the state of open source talent that employers can use to inform their hiring, training, and diversity awareness efforts.

The post A 2021 Linux Foundation Research Year in Review appeared first on Linux Foundation.

Thanking our Communities and Members, and Building Positive Momentum in 2022

Wed, 12/15/2021 - 01:00

We could not imagine what was on the horizon ahead of us when COVID first reared its head in late 2019. Locally and globally, we’ve weathered many challenges, adjusted our sails, and applied new tools and approaches to continue our momentum. As we now approach 2022, our hopes aim even higher as we pursue new horizons and strengthen our established communities. We’re emerging stronger and better equipped to tackle these great challenges, and your help has made it all possible. 

Your willingness to engage in our local, virtual, and large-scale in-person events was invaluable. These meetings demonstrated that the bonds within our hosted communities and families of open source foundations remain strong. Thank you for coming back to the events and making them successful.

In 2021, we continued to see organizations embrace open collaboration and open source principles, accelerating new innovations, approaches, and best practices. Not only have we seen compelling new project additions this year, but these projects are bringing new organizations into our community. In 2021, the LF welcomed a new organization nearly every day.

As we look to 2022, we see a diverse and growing pipeline of new projects across open source and standards. We see new demand to guide and develop projects in 5G, supply chain security, open data, and open governance networks. Throughout the continuing challenges of 2021, we remain focused on open collaboration as the means for enabling the technologies and solutions of the future. 

We thank our communities and members for your continued confidence in our ability to navigate a challenging business environment and your lasting and productive partnerships. We wish you prosperity and success in 2022.

Our yearly achievements would not be possible without the efforts of the Linux Foundation’s communities and members. Read our 2021 Annual Report here.

The post Thanking our Communities and Members, and Building Positive Momentum in 2022 appeared first on Linux Foundation.

Addressing Diversity, Equity, and Inclusion in 2021 and Beyond

Tue, 12/14/2021 - 22:00

In 2021, we continued to double down on our commitment to enact positive change for underrepresented and marginalized people by introducing new programs, and progressing existing ones, for inclusivity, racial justice, and diversity.

The LF is Committed to Building Diverse and Inclusive Communities

Unique ideas and contributions that originate from a diverse community, spanning all walks of life, cultures, countries, and skin colors, are vital for building sustainable and healthy open source communities. Individuals from diverse backgrounds inject new and innovative ideas to advance an inclusive and welcoming ecosystem for all.

Creating diverse communities requires effort and commitment. The Linux Foundation is addressing the need to build inclusive and welcoming spaces through various initiatives, including some of those expanded upon below.

LF Research Publishes 2021 Open Source Diversity, Equity, and Inclusion Study

The Linux Foundation has put diversity, equity, and inclusion (DEI) at the top of its inaugural research agenda, and for a good reason: it is the social imperative of our time. New research identifies the state of DEI in open source communities and the challenges and opportunities within them, and draws conclusions around what initiatives are helpful and where we need to do more collectively. 

Earlier this year, we engaged member organizations from the Linux Foundation Board to provide financial support for survey translation into ten different languages and enable further qualitative research to be conducted for a richer perspective. LF Research is grateful to AWS, CHAOSS, Comcast, Fujitsu, GitHub, GitLab, Hitachi, Huawei, Intel, NEC, Panasonic, Red Hat, Renesas, and VMware for their support and leadership in this important piece of research.

We are also grateful to the members of our community who participated in the DEI survey. In addition, more than two dozen individuals across the open source community participated in interviews with the research team, adding further insight to the survey findings.

The research shows that while a majority of respondents feel welcome in open source, many in underrepresented communities do not. We hope that the data and insights that this project provides will be a catalyst for strengthening existing DEI initiatives and creating new ones. 

Download and Read The 2021 Linux Foundation Report on Diversity, Equity, and Inclusion in Open Source

Inclusive Language Efforts Continue

Communities that adopt inclusive language and actions will be able to attract and retain individuals from diverse backgrounds. The Linux kernel community adopted inclusive language in the Linux 5.8 release, showing its commitment to Diversity and Inclusion. 

For other projects, the Inclusive Naming Initiative launched at KubeCon North America to standardize inclusive language across the industry. It released a training course, LFC103: Inclusive Strategies for Open Source Communities, to support this.

Software Developer Diversity and Inclusion Project

We are also focusing on Science and Research to Advance Diversity and Inclusion in Software Engineering. Our new Software Developer Diversity and Inclusion (SDDI) project will draw on science and research to deliver resources and best practices in increasing diversity in software engineering. 

Open Hardware Diversity Alliance

The Open Hardware Diversity Alliance is a RISC-V incubating project with the mission of bringing together the open hardware community to provide programs, networking opportunities, and learning that encourage participation in, and support the professional advancement of, women and underrepresented individuals in open source hardware.

Diversity, Equity, and Inclusion Micro-Conference

Creating diverse communities requires effort and commitment to creating inclusive and welcoming spaces. Recognizing that communities that adopt inclusive language and actions attract and retain more individuals from diverse backgrounds, the Linux kernel community adopted inclusive language in the Linux 5.8 release. Understanding if this sort of change has been effective is a topic of active research. The Diversity, Equity, and Inclusion Micro-Conference at Linux Plumbers Conference 2021 took the pulse of the Linux kernel community as it turned 30 this year and discussed some next steps. Experts from the DEI research community shared their perspectives and preliminary research with Linux community members.

The multifaceted discussion spanned various research topics related to diversity. A few takeaways:

  • Diversity spans geography, gender, and language.
  • Inclusive language efforts have to take language barriers into account.
  • Implicit and explicit mentoring efforts help attract developers from diverse backgrounds.
  • Mentoring programs with the opportunity to work with experts are successful in attracting developers from diverse backgrounds.

The challenges to work on:

  • How do we retain new developers?
  • How do we grow new developers into code maintainers?
LFX Mentorships

As we look back at the year, the LFX Mentorship program will wrap 2021 with 23 new Linux kernel developers and 181 new open source developers across all LFX projects, with 5,285 applications received. We started the LFX Mentorship program in 2019 with just three new developers, and we’ve come a long way since then.

The LFX Mentorship program, with help from the Events team, reached out to Historically Black Colleges and Universities (HBCUs) and colleges with a larger number of Hispanic students before the Summer session, and to all 2021 applicants to get feedback on the programs and platform.

The first outreach had limited success in attracting and selecting applicants; the second was more successful. Here is what people had to say about what attracted them to our program:

The top two responses tied at 83%:

  • Ability to work 1:1 with experienced open source contributors.
  • Opportunity to experiment and ability to learn to contribute effectively to current open source projects.

The opportunity to facilitate jobs and internships came in second place at 55%, and paid opportunities came in third place at 49%.

The important takeaways are that the program offers the ability to work with experts and the opportunity to experiment. A few mentioned that the program’s emphasis on support for students and developers who are entirely new to open source is why they applied, aligning with the program’s goals and objectives.

Learn more about LFX Mentorships at https://lfx.linuxfoundation.org/tools/mentorship/

Mentorship + Events

The LFX Mentorship program and the LF Events teams collaborated with 22 experts in the open source communities to provide unstructured self-learning resources under the LF Live Mentorship Series umbrella. The series provides expert knowledge and valuable interactive discussion across various topics, primarily development, related to the Linux kernel and other open source projects. We made these 22 webinars available for free, and we will conclude this year with two more. We thank all our mentors for taking the time to share their knowledge and expertise.

Let’s take a look at how these programs enable new developers to find jobs and career opportunities. You can read the stories, written by Nithya Ruff and Jennifer Cloer, of Linux Kernel Mentorship program graduates breaking the open source glass ceiling.

We are also planning to reach out to all our graduates since the inception of this program in 2019. The goal is to see where their open source journeys took them after graduating, and we will share the results.

The LFX Mentorship and LF Events teams collaborated on a Mentee Showcase to connect our graduates with prospective employers from our member companies. In this virtual event, mentees share their accomplishments with others. There are plenty of open source jobs, and employers are looking for talent. Additionally, this event allows us to thank our mentors, who share their knowledge to train new talent. Some of our mentors do this in their spare time without expectations. We are hoping to make this an annual event.

Recent Linux kernel community research confirmed the busy-maintainer problem we have talked about for a couple of years. Next year, one area of focus is adding mentorship projects and webinars that provide resources to develop maintainer talent within open source communities.

As we talk about the stats and numbers, let’s not lose sight of the big picture. It’s all about:

  • Making a difference and empowering people by offering both structured and unstructured learning opportunities. 
  • Paying people to learn and making the resources free and accessible to all.
  • Developing new talent and making that talent available to the Linux ecosystem. 
  • Helping build communities that continue developing open source code to keep the Linux ecosystem healthy and sustainable.
Addressing Racial Justice Efforts Through Code

In February of 2021, the Linux Foundation announced it would host seven Call for Code for Racial Justice projects, an initiative driven by IBM and Creator David Clark Cause to urge the global developer ecosystem and open source community to contribute to solutions that can help confront racial inequalities. These include two new cloud-based Solution Starter applications:

  • Fair Change is a platform to help record, catalog, and access evidence of potentially racially charged incidents to help enable transparency, reeducation, and reform as a matter of public interest and safety. 
  • TakeTwo aims to help mitigate bias in digital content, whether overt or subtle, focusing on text across news articles, headlines, web pages, blogs, and even code. 

In addition to the two new apps, the Linux Foundation now hosts five evolving open source projects from Call for Code for Racial Justice:

  • Five Fifths Voter: This web app empowers minorities to exercise their right to vote and helps ensure their voice is heard by determining optimal voting strategies and limiting suppression issues.
  • Legit-Info: Local legislation can significantly impact areas as far as jobs, the environment, and safety. Legit-Info helps individuals understand the legislation that shapes their lives.
  • Incident Accuracy Reporting System: This platform allows witnesses and victims to corroborate evidence or provide additional information from multiple sources against an official police report.
  • Open Sentencing: To help public defenders better serve their clients and make a stronger case, Open Sentencing shows racial bias in data such as demographics.
  • Truth Loop: This app helps communities understand the policies, regulations, and legislation that will most impact them. 

The post Addressing Diversity, Equity, and Inclusion in 2021 and Beyond appeared first on Linux Foundation.

Linux Foundation Research Reveals New Open Source Diversity, Equity, and Inclusion Trends

Tue, 12/14/2021 - 22:00

Eighty-two percent of respondents to a global survey feel welcome in the open source community, while barriers to participation include time, personal background, and some exclusionary behaviors 

SAN FRANCISCO, Calif., December 14, 2021 — The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the release of its latest LF Research study, “Diversity, Equity, and Inclusion in Open Source.” 

The study, which includes the results of both qualitative interviews and a worldwide survey with more than 7,000 initial responses from the open source community, was created to increase the industry’s collective understanding of the state of Diversity, Equity, and Inclusion (DEI) in open source and to inform important DEI practices. The sponsors of this research include Amazon Web Services (AWS), CHAOSS Community, Comcast, Fujitsu, GitHub, GitLab, Hitachi, Huawei, Intel, NEC, Panasonic, Red Hat, Renesas, and VMware.

“The open source community is growing at an unprecedented pace and it’s imperative that we understand that growth in the context of diversity, equity, and inclusion so that we can collectively implement best practices that result in inclusive communities,” said Hilary Carter, Vice President of Research at the Linux Foundation. “The Diversity, Equity, and Inclusion in Open Source study gives us valuable insights that can lead to a more diverse global open source community.”

Study after study has revealed that diversity among technology builders leads to better, more robust technologies. But the industry continues to struggle with increasing diversity, and the open source software community is no exception. Building and sustaining inclusive communities can attract a more diverse talent pool and prioritize the next generation of open source technologies. The Linux Foundation’s Diversity, Equity, and Inclusion in Open Source study aims to identify the state of DEI in open source communities and the challenges and opportunities within them, and to draw conclusions around creating improvements in much-needed areas.

“Understanding data behind Diversity, Equity, and Inclusion in the open source community allows us to identify areas for focus and improvement. The open source community will greatly benefit from the actions we take to grow engagement and make it a welcoming place for everyone,” said Nithya Ruff, Comcast Fellow, Head of Comcast Cable Open Source Program Office, and Linux Foundation board chair.

Key findings from the study include: 

  • Eighty-two percent of respondents feel welcome in open source, but different groups had different perspectives overall. The 18 percent who do not feel welcome come disproportionately from underrepresented groups: people with disabilities, transgender people, and racial and ethnic minorities in North America. 
  • Increasing open source diversity reflects growing global adoption, but there is still much room to improve. 

As the global adoption of open source technologies grows rapidly, so, too, does diversity within open source communities. But there remains a lot of room for growth: 82 percent of respondents identify as male, 74 percent identify as heterosexual, and 71 percent are between the ages of 25 and 54. 

  • Time is a top determinant for open source participation.

Time-related barriers to access and exposure in open source include discretionary and unpaid time, time for onboarding, networking, and professional development, as well as time zones. 

  • Exclusionary behaviors can have a cascading effect on contributors’ experience and retention.

Exclusionary behavior has cascading effects on feelings of belonging, opportunities to participate and achieve leadership, and retention. While toxic experiences are generally infrequent, rejection of contributions, interpersonal tensions, stereotyping, and aggressive language are experienced far more frequently by certain groups (at 2-3 times the study average).

  • People’s backgrounds can impact equitable access to open source participation early in their careers, compounding representation in leadership later on.

Just 16 percent of students’ universities offer open source as part of their curricula. This, along with unreliable connectivity and geographic, economic, and professional disparities, narrows an individual’s opportunity to contribute. 

“Understanding the state of Diversity, Equity, and Inclusion in the open source community is critical for business strategy and nurturing an inclusive culture,” said Demetris Cheatham, senior director, Diversity and Inclusion Strategy at GitHub. “This newest data, encompassing both qualitative and quantitative research from the Linux Foundation, helps direct our attention to the things that matter most to our employees and the greater community and industry.”

The study also points to societal changes and trends that are impacting DEI in the workplace. Enterprise Digital Transformation, Techlash, Political Polarization, Social Media Ecosystem, and Content Moderation are all cited as trends that have exposed and amplified exclusionary narratives and designs, mandating increased awareness and recalibrating individual and organizational attention. 

To download the complete study, please visit: 

https://www.linuxfoundation.org/blog/addressing-diversity-equity-and-inclusion-in-2021-and-beyond/

For more information on the Linux Foundation’s DEI initiatives, please visit: https://www.linuxfoundation.org/diversity-inclusivity/

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 1,800 members. The Linux Foundation is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contacts

Jennifer Cloer

503-867-2304

jennifer@storychangesculture.com

The post Linux Foundation Research Reveals New Open Source Diversity, Equity, and Inclusion Trends appeared first on Linux Foundation.

LFX Platform: An Update on Growing and Sustaining Open Source

Fri, 12/10/2021 - 03:00

Open source fuels the world’s innovation, yet building impactful, innovative, high-quality, and secure software at scale can be challenging when meeting the growing requirements of open source communities. Over the past two decades, we have learned that ecosystem building is complex. A solution was needed to help communities manage themselves with the proper toolsets in key functional domains.

From infrastructure to legal and compliance, from code security to marketing, our experience in project governance among communities within the Linux Foundation has accumulated years of expertise and proven best practices. As a result, we have spent the year productizing the LFX Platform, a suite of tools engineered to grow and sustain the communities of today and build the communities of tomorrow. 

LFX: The Open Source Community Management Toolsuite for Continued Growth

The LFX Platform provides our members and projects with tools to support every stage of an open source project, from funding to community management to application security. LFX is built to support the needs of all community participants: maintainers, contributors, community managers, security professionals, marketers, and more.

  • Open source communities need access to better tools to scale.
  • Developers need to be able to make effective code contributions, scan for security vulnerabilities, and deploy.
  • Community managers need to facilitate meetings, host meet-ups online or in-person, support governing boards, and decide on proper governance structures.
  • Project leadership needs to be responsive, provide support, engage in training, and promote their latest developments. 

We aim to help reduce the complexity of building and managing open source ecosystems by delivering a new platform that brings people, information, tools, and supporting programs together.

We want to invite you to explore LFX at lfx.linuxfoundation.org. Jump into LFX elements such as your Individual Dashboard, Mentorship, EasyCLA, Insights, or Security. The LFX platform provides open source communities the following areas of key functionality:

LFX Platform Key Functional Areas

LFX Platform: New Features and Capabilities

Global Trends and Compare Projects capabilities extend LFX Insights with new reports, enabling community members to easily answer common questions about their open source ecosystem or quickly compare open source communities to identify and drive best practices.

Global Trends and Compare Projects Dashboards

Security Vulnerabilities and Code Secrets Scanning, with Remediation powered by Snyk and BluBracket, is now available in LFX Security, enabling communities to automatically scan code, detect potential vulnerabilities or exposed code secrets, and recommend fixes to remediate the identified issues.

Security Vulnerabilities and Code Secrets Scanning with Remediation

Non-Inclusive Language Detection is now a part of LFX Security through integration with BluBracket, enabling the identification and elimination of non-inclusive language to attract and retain more participants and deliver on the power and promise of more diverse and inclusive open source communities.

Non-Inclusive Language Detection Console

Tool Highlight: LFX Security

The world’s most critical infrastructure is built on open source, and therefore the security of open source software is essential. LFX Security builds on the Core Infrastructure Initiative, the Open Source Security Foundation, and years of learned security best practices to provide communities with the capabilities required to secure their code continuously. LFX Security is powered by integrations with leading security vendors and supports existing tools and languages.

  • Automatic vulnerability scanning, with recommended fixes and inline remediation
  • Risk analysis with intuitive and informative scoring 
  • Automatic detection of potential code secrets
  • Identification of non-inclusive language in code 

Learn more about LFX Security at lfx.dev/tools/security

Tool Highlight: LFX Insights

Successful open source communities require effective management of everything from code quality and build to collaboration and marketing. But to manage them effectively, data has to be gathered across disparate repositories, tools, and activities. LFX Insights integrates data from sources ranging from source code repositories and issue trackers to social media platforms and mailing lists, and contextualizes it for projects, project groups, or the entire Linux Foundation ecosystem.

Learn more about LFX Insights at lfx.dev/tools/insights

The LFX platform is designed to address these issues and more. LFX aggregates dozens of data sources and commonly used management tools. It provides visualization tools with an added layer of intelligence to reveal best practices for numerous open source stakeholders, including developers, project leaders, open source program offices, legal, operations, and even marketing. 

LFX is a suite of elements engineered to grow and sustain the communities of today and build the communities of tomorrow. By automating and consolidating many of the most critical activities needed by open source projects and stakeholders, we hope to reduce complexities that sometimes hinder innovation and progress. 

The LFX platform provides our members and projects with tools to support every stage of an open source project. As we head into 2022, we plan to release even more functionality to support our growing community.

Explore LFX at lfx.linuxfoundation.org

The post LFX Platform: An Update on Growing and Sustaining Open Source appeared first on Linux Foundation.

Facing Economic Challenges: Open Source Opportunities are Strong During Times of Crisis

Fri, 12/10/2021 - 00:00

Our recently published Open Source Jobs Report examined the demand for open source talent and trends among open source professionals. What did we find?

Open Source Career Opportunities are Strong

The good news is that hiring is rebounding in the wake of the pandemic, as organizations look to continue their investments in digital transformation. This is evidenced by 50% of employers surveyed who stated they are increasing hires this year. There are significant challenges though, with 92% of managers having difficulty finding enough talent and struggling to hold onto existing talent in the face of fierce competition. Other key findings from this year’s report included:

  • Cloud is on the rise. Cloud and container technology skills are most in-demand by hiring managers, surpassing Linux for the first time, with 46% of hiring managers seeking cloud talent.
  • DevOps has become the standard method for developing software. Virtually all open source professionals (88%) report using DevOps practices in their work, a 50% increase from three years ago.
  • Demand for certified talent is spiking. Managers are prioritizing hires of certified talent (88%).
  • Training is increasingly helping close skills gaps. 92% of managers report increasing requests for training. Employers also report that they prioritize training investments to close skills gaps, with 58% using this tactic.
  • Discrimination is a growing concern in the community. Open source professionals having been discriminated against or made to feel unwelcome in the community increased to 18% in 2021 — a 125% increase over the past three years.
Enabling Training and Certification

This year, vendor-neutral training and certification grew in importance as demand for professionals with critical skills in open cloud technologies and DevOps increased. Over 2 million individuals have enrolled in free Linux Foundation training courses, providing them a great way to explore different open source technologies and decide which is the best fit for them; this includes over a million students who have enrolled in our Introduction to Linux course on the edX platform. To date, over 50,000 individuals have been certified for their technical competence through Linux Foundation programs.

This year, our Training & Certification team launched over 20 new offerings. We now host over 70 eLearning courses, deliver over 20 instructor-led courses, and offer more than a dozen certification exams that enable certified professionals to demonstrate their skills, with more being released regularly. 

This year saw the addition of exam simulators to our Kubernetes certification exams, enabling exam registrants to familiarize themselves with the exam environment before sitting for their exam. In late 2021, we will launch a new Kubernetes and Cloud Native Associate certification exam, which will serve as an entry-level certification for new cloud professionals.

In 2021, The Linux Foundation directly awarded 500 scholarships for free training and certification to individuals worldwide. Hundreds more were awarded via partnerships with nonprofits, including Blacks in Technology, TransTech Social Enterprises, and Women Who Code.

New training and certification offerings launched in 2021 include:

  • Building a RISC-V CPU Core
  • Certified Kubernetes and Cloud Native Associate (KCNA)
  • Certified TARS Application Developer (CTAD)
  • FinOps for Engineering
  • Generating a Software Bill of Materials
  • GitOps: Continuous Delivery on Kubernetes with Flux
  • Hyperledger Besu Essentials: Creating a Private Blockchain Network
  • Kubernetes and Cloud Native Essentials
  • Kubernetes Security Essentials
  • Kubernetes Security Fundamentals
  • Implementing DevSecOps
  • Introduction to Cloud Foundry
  • Introduction to FDC3 Standard
  • Introduction to GitOps
  • Introduction to Kubernetes on Edge with K3s
  • Introduction to Magma: Cloud Native Wireless Networking
  • Introduction to Node.js
  • Introduction to RISC-V
  • Introduction to WebAssembly
  • Open Source Management and Strategy
  • RISC-V Toolchain and Compiler Optimization Techniques
  • WebAssembly Actors: From Cloud to Edge

Explore the full catalog of courses at training.linuxfoundation.org/full-catalog.

The post Facing Economic Challenges: Open Source Opportunities are Strong During Times of Crisis appeared first on Linux Foundation.

Linux Foundation to Host the Cloud Hypervisor Project, Creating a Performant, Lightweight Virtual Machine Monitor for Modern Cloud Workloads

Wed, 12/08/2021 - 22:00

Small in footprint and written in Rust, the Cloud Hypervisor project moves the needle for datacenter workload operations.

SAN FRANCISCO, Calif., December 8, 2021 — The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced it will host the Cloud Hypervisor project, which delivers a Virtual Machine Monitor for modern Cloud workloads. Written in Rust with a strong focus on security, the project’s features include CPU, memory, and device hot plug; support for running Windows and Linux guests; device offload with vhost-user; and a minimal and compact footprint.
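
To give a sense of what “minimal and compact” means in practice, the project’s documentation shows a guest booting from a single command line. The sketch below follows that shape; the firmware and disk image paths are placeholders for artifacts you would supply, and the exact flags should be checked against the release you run:

    # Boot a Linux guest with 4 vCPUs and 1 GiB of RAM (paths are placeholders).
    # CPUs, memory, and devices can be hot plugged later through the VMM's API socket.
    ./cloud-hypervisor \
        --kernel ./hypervisor-fw \
        --disk path=focal-server-cloudimg-amd64.raw \
        --cpus boot=4 \
        --memory size=1024M \
        --net "tap=,mac=,ip=,mask=" \
        --rng

The small flag surface reflects the design goal: fewer emulated device models than legacy VMMs, with paravirtualized devices covering the common cloud workload cases.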

The project is supported by Alibaba, ARM, ByteDance, Intel and Microsoft and represented by founding member constituents that include Arjan van de Ven, Fellow at Intel; K. Y Srinivasan, Distinguished Engineer and VP at Microsoft; Michael Zhao, Staff Engineer at ARM; Gerry Liu, Senior Staff Engineer at Alibaba; and Felix Zhang, Senior Software Engineer at ByteDance. Initial focus for the Cloud Hypervisor project will be security and modern operation for Cloud.

“Cloud Hypervisor has grown to the point of moving to the neutral governance of The Linux Foundation,” said Arjan van de Ven, Intel Fellow and founding technical sponsor for the project. “We created the project to provide a more secure and updated VMM to optimize for modern cloud workloads. With fewer device models and a modern, more secure language, Cloud Hypervisor offers security and performance optimized for today’s cloud needs.”

“Modern cloud workloads require better security, and the Cloud Hypervisor project is intentionally designed to focus on this critical area,” said Mike Dolan, senior vice president and general manager of Projects at the Linux Foundation. “We’re looking forward to supporting this project community, both as it begins to build and to put the proper governance structures in place to sustain it for years to come.”

K.Y Srinivasan, Advisory Board member from Microsoft adds:

“Cloud Hypervisor has matured to the point that moving it to the Linux Foundation is the right move at the right time. As LF continues to standardize key components of the software stack for managing/orchestrating modern workloads, we feel that the Cloud Hypervisor will be an important part of the overall stack. Being part of LF will help us accelerate development and adoption of this key technology.”

To get involved, please visit https://www.cloudhypervisor.org or see us at the Linux Foundation at www.linuxfoundation.org/cloudhypervisor 

Additional Supporting Comments

Alibaba

“Cloud Hypervisor is a great innovation project and evolves rapidly. Moving it to Linux Foundation will help to build a stronger community and speed up the adoption,” said Jiang “Gerry” Liu, Alibaba.

ARM

“Joining a foundation would be quite beneficial for the future development of Cloud Hypervisor. Compared to other similar foundations, Linux Foundation is the best choice to join,” said Michael Zhao at ARM.

ByteDance

“Cloud Hypervisor helps us build a more secure and lightweight cloud infrastructure. Joining the Linux Foundation can make more developers and organizations benefit from this technology,” said Yu “Felix” Zhang, ByteDance.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

 ###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer

Story Changes Culture

503-867-2304

jennifer@storychangesculture.com

The post Linux Foundation to Host the Cloud Hypervisor Project, Creating a Performant, Lightweight Virtual Machine Monitor for Modern Cloud Workloads appeared first on Linux Foundation.

A 2021 Linux Foundation Update from the Executive Director

Wed, 12/08/2021 - 00:00

In 2021 the Linux Foundation (“LF”) emerged from the worst pandemic in a century and embraced new horizons. The collaborative activities in our project communities weathered the COVID-19 crisis exceptionally well, and many communities are now pushing forward with a renewed sense of purpose. 

Jim Zemlin

Our organization’s namesake project, the Linux kernel, has celebrated an amazing milestone: its 30th birthday. Over the years, more than 55,000 people have contributed code to improve Linux, and today, Linux can be found everywhere. Over 5.4 billion people rely on Linux as it powers the vast majority of smartphones, the world’s largest cloud environments, and the world’s fastest computers. It’s also assisting in scientific discovery on Mars. After three decades of development, the project continues to ship new code, features, and performance enhancements. 

While our community continues to accelerate innovation in software development, the rising tide of cybersecurity threats has planted itself firmly on our shores. We all rely on software supply chains that are constantly under attack by an increasingly sophisticated adversary, causing us to reflect on our role and responsibility in securing the world’s critical technology infrastructure. 

In 2021 we saw much progress in our quest to “harden” the software supply chain. The Software Package Data Exchange® (SPDX®) community received formal recognition as an international ISO/IEC standard (5962:2021), making it easier for organizations to require and exchange a Software Bill of Materials (SBOM) with suppliers and customers. This came on the heels of OpenChain receiving ISO/IEC approval as an international standard (5230:2020) for open source licensing compliance. We also saw new collaborations emerge this year, like sigstore, which is on its way to becoming a de facto standard for signing packages and digital artifacts used throughout a supply chain.
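
To make the SBOM idea concrete, an SPDX document is simply structured metadata describing the software in a release. The minimal sketch below uses the SPDX 2.2 JSON serialization; every name, namespace, and value shown is an illustrative placeholder rather than output from any real tool:

    {
      "spdxVersion": "SPDX-2.2",
      "dataLicense": "CC0-1.0",
      "SPDXID": "SPDXRef-DOCUMENT",
      "name": "example-app-1.0-sbom",
      "documentNamespace": "https://example.com/spdxdocs/example-app-1.0",
      "creationInfo": {
        "created": "2021-12-01T00:00:00Z",
        "creators": ["Tool: example-sbom-generator-1.0"]
      },
      "packages": [
        {
          "name": "example-app",
          "SPDXID": "SPDXRef-Package-example-app",
          "versionInfo": "1.0.0",
          "supplier": "Organization: Example Corp",
          "downloadLocation": "NOASSERTION",
          "licenseConcluded": "Apache-2.0",
          "licenseDeclared": "Apache-2.0",
          "copyrightText": "NOASSERTION"
        }
      ]
    }

Because fields such as supplier, version, and license travel with the artifact in a standard form, a consumer’s tooling can validate a vendor’s SBOM without bespoke parsing.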

The Open Source Security Foundation (OpenSSF), launched in August 2020, brought together a community of experts focused on software supply chain security challenges. This community had an amazing start, publishing guidance for best practices (e.g., badges and scorecards), creating new tools and frameworks (e.g., SLSA), establishing and collecting metrics, developing free, globally accessible training materials, and publishing research, such as the findings of its FOSS Contributor Survey in collaboration with Harvard’s Laboratory for Innovation Science. 

Our members responded to the progress by doubling down and making significant additional investments in OpenSSF as a vehicle for solving the world’s supply chain security challenges. In October, we announced that the Linux Foundation and OpenSSF raised over $10 million to invest in leadership and initiatives, boldly aspiring to impact supply chain security dramatically. The LF could not have done this without significant support from our members, including OpenSSF’s premier members 1Password, AWS, Cisco, Citi, Dell Technologies, Ericsson, Meta, Fidelity, GitHub, Google, Huawei, Intel, IBM, JP Morgan Chase, Microsoft, Morgan Stanley, Oracle, Red Hat, Snyk, and VMware.

The importance of open source in the world’s cybersecurity efforts highlights its importance to our modern society. As new organizations, new industries, and policymakers have approached the LF for guidance on open source, we recognize there is a need for modern insights into why and how open collaboration works. There is a need to understand the dynamics of communities, where and how value is derived, and the intersection of supply chains and open source collaboration. To that end, this year, we launched Linux Foundation Research to explore the role of open source software, standards, and communities as a framework for mass innovation, collaboration, and problem-solving. 

Research into important topics such as cybersecurity and SBOM readiness is already underway, along with project-specific insights sought by our project communities. We think this investment will provide actionable data and insights supporting more informed decision-making across technology and industry ecosystems. Finally, while most research organizations hoard data privately, our research approach has an open flair — we’re making all non-personally identifiable data available under the Community Data License Agreement — Permissive, Version 2.0, a revised data-sharing framework our legal community worked to release this year.

Having a research capability also provides new opportunities to more deeply explore challenges and opportunities in community collaboration. For example, this year LF Research partnered with AWS, CHAOSS, Comcast, Fujitsu, GitHub, GitLab, Hitachi, Huawei, Intel, NEC, Panasonic, Red Hat, Renesas, and VMware to examine the state of diversity, equity, and inclusion (DEI) in open source communities. To nurture and grow open source, we need to better understand how DEI is practiced and encouraged in open source communities. We hope this research will also support other collaborative efforts supporting DEI goals, such as the Inclusive Naming Initiative, the Software Developer Diversity and Inclusion Project (SDDI), Fair Change, and Open Sentencing.

And with our industry partners, such as Microsoft and Accenture, we’ve launched several new projects and foundations that are meaningful to humanity. The Green Software Foundation seeks to add sustainability to software engineering efforts. The AgStack Foundation, launched in May 2021, is building an open source digital infrastructure for agriculture to accelerate that industry’s digital transformation and address climate change.

While open source drove innovation across the technology landscape, it also saw acceleration within industry verticals. The LF helped launch several new collaborations focused on driving 5G and telecommunications, including the 5G Super Blueprint, a partnership with Next Generation Mobile Network Alliance (NGMN), Magma Foundation, and the new Mobile Native Foundation. Our members also expanded open source innovation in the media and entertainment industry with the launch of Open 3D Engine (O3DE), a new open source AAA 3D engine for gaming, simulation, and storytelling. The O3DE ecosystem complements our existing Academy Software Foundation (ASWF). ASWF’s community added a new project for shading materials in graphics this year called MaterialX. Moviegoers may have experienced the effects of this project in Star Wars: The Force Awakens.

Our project communities’ ambitions often lead to a focus on building communities. We’ve seen many experts continue to collaborate on community engagement in the highly active TODO Group. However, there comes a time when our communities need tools to help scale and support their growth. In 2020, the LF embarked on a journey with key community leaders to build tools that enable those leaders and others to better understand and more effectively engage with a project community. The results of these investments are now starting to roll out as the LFX platform. I’d like to thank all those in our community who provided feedback, guidance, suggestions, and sometimes the raw critiques we needed to build something better. 

We started with tools we knew would make maintainers more efficient on tasks they really did not want to spend time on, such as processing Contributor License Agreements (CLAs) electronically in EasyCLA. Many maintainers were also interested in understanding their community dynamics leading to the creation of LFX Insights, which aggregates, analyzes, and contextualizes data across all of a community’s repositories, communication channels, and contributors. Conversations about community health led to requests for tools to recruit and engage new project participants, particularly from diverse sources, and LFX Mentorship was born. Once engineers on our projects saw what LFX could do, they requested additional capabilities to configure and manage their projects. LFX Project Control Center now promises to enable engineers to provision and configure resources online in minutes with API-driven automation for common open source project tasks such as provisioning new cloud resources, managing DNS, and more. 

The LF also heard the needs of our corporate members to have better visibility into how their organization is engaged in our communities. We’ve developed the LFX MyOrg tool to help corporate managers get a better view across their organization’s participation, find paths to collaborating in projects, exercise the benefits available to them as members, and more — all from a single system. All of these tools are now available to our communities and members through lfx.linuxfoundation.org.

Many of our members have been faced with a skills shortage. The LF’s 2021 Jobs Report, released in October with edX, shows trained and certified open source professionals, particularly those with cloud and container expertise, are in high demand and in short supply. Such data points highlight the need to train people and enable new opportunities to grow their careers in open source. Our training and certification efforts continued to gain steam this year. Over 68,000 individuals registered for new certifications in the past year, a 50% increase over 2020, while 2 million people enrolled in the LF’s free training courses. 

And finally, I’ll wrap up by saying we sincerely missed seeing our communities in person. The last two years have been difficult, even harrowing, for many suffering from the lingering pandemic. However, this year we have seen hope on the horizon. We produced dozens of successful virtual conferences throughout 2021, but the feedback was clear: people wanted to meet in person again. Our events team did a thorough job researching and soliciting advice from experts and public health authorities. That preparation enabled us to welcome our communities back together, in-person, this fall at events like Open Source Summit in Seattle, Open Source Strategy Forum and OSPOCon Europe in London, and KubeCon+CloudNativeCon North America in Los Angeles, the latter of which gathered over 3,000 community members in person. These events would not have been possible without our commitment to attendee safety by requiring vaccinations and using vaccine verification technologies, diligent on-site health checks, and strict enforcement of the use of masks and social distancing protocols. With borders opening up shortly, we are ecstatic to see even more of our community, live and in-person, again in 2022.

On behalf of the entire Linux Foundation team, I congratulate our communities for their exceptional outcomes under another extraordinarily challenging year and wish all of you a happy and prosperous 2022, when I hope we get to see you in person once again.

Jim Zemlin
Executive Director,
The Linux Foundation

These efforts are made possible by our members. To learn how your organization can get involved with the Linux Foundation, click here.

The post A 2021 Linux Foundation Update from the Executive Director appeared first on Linux Foundation.

The Cyber-Investigation Analysis Standard Expression Transitions to Linux Foundation

Tue, 12/07/2021 - 22:26

SAN FRANCISCO, Calif., December 7, 2021 — The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the Cyber-investigation Analysis Standard Expression (CASE) is becoming a community project as part of the Cyber Domain Ontology (CDO) project under the Linux Foundation. CASE is an ontology-based specification that supports automated combination and intelligent analysis of cyber-investigation information. CASE concentrates on advancing interoperability and analytics across a broad range of cyber-investigation domains, including digital forensics and incident response (DFIR).

“Becoming part of the Linux Foundation is a major milestone for CASE that will significantly benefit the broader open source and cyber-investigation communities,” said Eoghan Casey, Presiding Director of CASE. “As an evolving standard supporting structured expression and exchange of cyber-investigation information, CASE will substantially enhance efforts to address growing challenges in the modern world, including cyberattacks, ransomware, online fraud, sexual exploitation, and terrorism. Our objective is to create a culture of common comprehension and collaborative problem solving across cyber-investigation domains.”

Organizations involved in joint operations or intrusion investigations can efficiently and consistently exchange information in a standard format with CASE, breaking down data silos and increasing visibility across all information sources. Tools that support CASE facilitate correlation of differing data sources and exploration of investigative questions, giving analysts a more comprehensive and cohesive view of available information, opening new opportunities for searching, pivoting, contextual analysis, pattern recognition, machine learning, and visualization.

Development of CASE began in 2014 as a collaboration between the DoD Cyber Crime Center (DC3) and MITRE, led by Dr. Eoghan Casey and Sean Barnum, involving the National Institute of Standards and Technology (NIST). In response to international interest, this initiative became an open source evolving standard, with hundreds of participants in industry, government and academia around the globe.

Early contributors include the Netherlands Forensic Institute (NFI), the Italian Institute of Legal Informatics and Judicial Systems (IGSG-CNR), FireEye, and University of Lausanne. CASE governance and community coordination were formalized with support of Harm van Beek, Rich Brown, Ryan Griffith, Cory Hall, Christopher Hargreaves, Jessica Hyde, Deborah Nichols, and Martin Westman. Growing international involvement is tracked on the CASE website: https://caseontology.org/community/members.html

The Technical Director is Alex Nelson, and the Ontology Committee is led by Paul Brandt. The Adoption Committee brings together developers from diverse backgrounds to share experiences and battle test ontologies. The success of these efforts depends on members of the community actively contributing to CASE development and implementation. The project welcomes anyone interested in elevating cyber-investigation capabilities to strengthen evidence-based decision making in any context, including court, boardroom, and battlefield.

CASE, built on the Hansken trace model developed and implemented by the NFI, aligns with and extends the Unified Cyber Ontology (UCO). This year has seen the release of UCO 0.7.0 and, most recently, CASE 0.5.0. CASE and UCO are now both built on SHACL constraints, providing an instance data validation capability. Currently, CASE is developing a representation for Inferences, both human formulated and computer generated, to bind investigative conclusions to supporting evidence and associated chain of custody.
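
For readers new to the ontology, CASE information is typically serialized as JSON-LD, with each object typed against UCO classes and described through facets. The fragment below is only an illustrative sketch: the prefixes, IRIs, identifiers, and property names are modeled on published UCO/CASE examples and should be checked against the current ontology release before use:

    {
      "@context": {
        "kb": "http://example.org/kb/",
        "uco-core": "https://ontology.unifiedcyberontology.org/uco/core/",
        "uco-observable": "https://ontology.unifiedcyberontology.org/uco/observable/"
      },
      "@graph": [
        {
          "@id": "kb:file-3c9f",
          "@type": "uco-observable:File",
          "uco-core:hasFacet": [
            {
              "@type": "uco-observable:FileFacet",
              "uco-observable:fileName": "invoice.pdf",
              "uco-observable:sizeInBytes": 24576
            }
          ]
        }
      ]
    }

Because every tool that emits this shape agrees on what a File and its facets mean, SHACL shapes can validate instance data mechanically, and analysts can correlate records produced by entirely different forensic tools.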

The CASE community has multiple collaborative repositories and activities, including translators for common digital forensic tool outputs as well as mapping CASE to the W3C provenance ontology (PROV-O). CASE uses the Apache-2.0 license.

Organizations and individuals interested in contributing to CASE can go to https://caseontology.org/

Supporting Comments

Hexordia

“The news that CASE will be transitioning to The Linux Foundation is an exciting move for the Digital Forensics, Incident Response, and Cyber Security communities,” said Jessica Hyde, founder of Hexordia. “One of the special things about CASE is that it has been developed to specifically support cyber investigations by those who understand the domain from a variety of sectors including academia, law enforcement, government, non-profits, and commercial entities. This uniquely positions CASE to describe the provenance, metadata, and data recovered in a multitude of environments and allow different organizations and a variety of tools to look at data with the same definitions of what the data is describing. What an exciting day for uncovering truth in data and ensuring common definitions of data as it moves through the nexus of tools, organizations, and jurisdictions that need to work together in today’s cyber investigations.”

IGSG-CNR

“The CASE transition to the Linux Foundation is remarkable news and encourages widespread use of this standard in a broad range of cyber-investigation domains to foster interoperability, establish authenticity, and advance analysis,” said Fabrizio Turchi, senior technologist at the IGSG-CNR, Italian National Research Council. “The European EXEC-II project includes a bespoke application for packaging evidence with metadata in CASE format for automated exchange, while maintaining provenance information to streamline cross-border cooperation among judicial authorities in the EU member states. In addition to searching for specific keywords or characteristics within a single case or across multiple cases, having a structured representation of cyber-investigation information allows more sophisticated processing such as data mining, machine learning and natural language processing techniques, as in the European INSPECTr project and a shared intelligent platform for gathering, analysing and presenting key data to help predict, detect and manage crime in support of multiple law enforcement agencies.”

MITRE

“The MITRE Corporation is proud to see the continued growth and acceptance of the Cyber-investigation Analysis Standard Expression (CASE) open source project. MITRE is one of several organizations that helped create CASE and bring together the initial community of contributors,” said Cory Hall, principal cybersecurity engineer at MITRE. “With the transition of CASE to the Linux Foundation we see a bright future for the effort as the community advances this project to benefit digital investigators everywhere. The MITRE Corporation expects to continue contributing to this effort for years to come.”

MSAB

“As a long-term member of the CASE open source project, MSAB looks forward to the new possibilities that Linux Foundation will provide for CASE as the de facto standard for adoption by digital forensic tools. MSAB is preparing to implement CASE on our XRY and XAMN solutions to enable our products to seamlessly interact with tools from other vendors, academia, nonprofit organizations, and enthusiasts alike. With the common data exchange platform that CASE provides, our industry can process greater volumes of data faster, more accurately and with greater interoperability than ever before. We are committed to continuing to develop CASE under the Linux Foundation and are excited for the future of the project,” said Martin Westman, exploit research manager, MSAB.

Netherlands Forensic Institute

“CASE is the solid foundation for interconnecting digital forensic tools and combining their results to come to new insights. This is paramount not only for the NFI, but for the entire community to quickly apply science to day-to-day operations to fight crime,” said Harm van Beek, senior digital forensic scientist at the Netherlands Forensic Institute (NFI). “We support CASE and the digital forensic community by implementing and extending the standard in Hansken, our open digital forensic platform.”

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 1,800 members. The Linux Foundation is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contacts

Jennifer Cloer

503-867-2304

jennifer@storychangesculture.com

The post The Cyber-Investigation Analysis Standard Expression Transitions to Linux Foundation appeared first on Linux Foundation.

Hyperledger Foundation 2021 End-of-Year Update

Sat, 12/04/2021 - 01:00

In 2021, after six years of community building and expanding from two projects to 18 projects, over 50 labs, 16 Special Interest and Working Groups, and over 200 members, Hyperledger became a Foundation. 

This newfound identity arches over all of its projects, labs, regional chapters, and community groups. Hyperledger Foundation is now leading the collective effort to advance enterprise blockchain technology and fulfill its mission to foster and coordinate the premier open source enterprise blockchain community.

At Hyperledger Foundation, being open is core to what we do. We’re here to lead an open, global and welcoming enterprise blockchain ecosystem—a community where no contribution is seen as too small or insignificant. Our foundation comprises organizations, developers, executives, students, teachers, government leaders, and more. It’s supported by the Technical Steering Committee, various working groups, special interest groups, and Meetup communities all across the globe, now numbering more than 80,000 participants. 

According to LFX Insights, total commits have grown 53% in the last three years, and new code contributors increased by 37%. A total of 366 organizations, both large and small, have made code commits since 2016. And the pace of activity among new community members is accelerating: commits by new contributors increased by 286% in the last year.

Some of the largest and most important production enterprise blockchain projects today are built using Hyperledger technologies. They include:

  • Supply chain networks, like IBM and Walmart’s Food Trust (Hyperledger Fabric)
  • Circulor’s mine-to-manufacturer traceability of conflict minerals for sustainable automotive supply chains (Hyperledger Fabric)
  • Top trade finance platforms such as TradeLens (Hyperledger Fabric), which spans more than 300 organizations across 600 ports and terminals and has tracked over 42 million container shipments with close to 2.2 billion events
  • we.trade, which has onboarded 16 banks across 15 countries onto its blockchain-enabled trade finance platform (Hyperledger Fabric)

Over 13 Central Bank Digital Currency production deployments and pilots using multiple Hyperledger projects have been identified this year alone.

With this transition, Hyperledger Foundation also gained new leadership with the appointment of Daniela Barbosa as its new Executive Director. Barbosa is a seasoned veteran of the open source community with over 20 years of enterprise technology experience, including previously serving as Hyperledger’s Vice President of Worldwide Alliances, where she was responsible for the project’s community outreach and overall network growth.

New Growth in Hyperledger Technologies 

According to research from Blockdata, Hyperledger Fabric is used by more of the top 100 public companies in the world than any other blockchain platform. 

Hyperledger-based networks are used by some of the largest corporations around the world, including more than half of the companies on the Forbes Blockchain 50, a list of companies with revenue or a valuation of at least $1 billion that lead in employing distributed ledger technology.

As an ever-growing library of case studies shows, Hyperledger technologies are already transforming many market spaces, including supply chains, trade finance, and healthcare. Hyperledger technologies are used in everything from powering global trade networks and supply chains to fighting counterfeit drugs, banking “unbanked” populations, and ensuring sustainable manufacturing. 

In addition, Hyperledger technologies are being applied to a number of new markets and business models. These include digital identity and payments, Central Bank Digital Currencies (CBDCs), and NFTs such as Damien Hirst’s The Currency project and DC Comics collectibles on the Palm NFT platform, which runs on Hyperledger Besu with a near-zero carbon footprint.

Digital Identity 

Hyperledger technologies are being adopted to put individuals in charge of their own identity. People often need to verify their status, prove a birthdate, board a plane, comply with vaccine mandates, prove their education, or access money. Leveraging Hyperledger Aries and Hyperledger Indy, organizations worldwide are reshaping how digital information is managed and verified to increase online trust and privacy. These digital identity solutions create verifiable credentials that are effective, secure, accessible, and privacy-preserving (a schematic sketch of such a credential follows the list below).

  • The Aruba Health App makes it easy for visitors who have provided required health tests to the Aruba government to share a trusted traveler credential — based on their health status — privately and securely on their mobile device. Launched initially as a trial, the Aruba Health App is built using Cardea, an open-source code base that has since been contributed to the Linux Foundation Public Health (LFPH) project. Cardea leverages Hyperledger Indy, Hyperledger Aries, and Hyperledger Ursa.
  • IDUnion addresses the demand for migrating centralized identity systems towards decentralized, self-sovereign management of digital identities for people, organizations, and machines. The service has 39 cross-sector partners building production-level infrastructure to verify identity data in finance, manufacturing, the public sector, and healthcare. IDUnion has launched a Hyperledger Indy test network, built components for allocating, verifying, and managing digital identities, and more. This consortium includes Hyperledger member companies Siemens, Bosch, Deutsche Telekom, and others.
  • The International Air Transport Association’s (IATA) Travel Pass, built in partnership with Evernym using Hyperledger Indy and Hyperledger Aries, is a mobile app that helps travelers store and manage verified certificates for COVID-19 tests or vaccinations.
  • MemberPass, built on Hyperledger Indy by Bonifii, is the first global digital identity ecosystem for credit unions and their members. It verifies consumer identity while protecting personal information, and has been adopted by more than seven credit unions, with over 20,000 credentials issued.
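
To make the credential concept above concrete, here is a minimal Python sketch of the W3C-style verifiable credential shape that stacks like Hyperledger Aries and Indy implement. The DIDs and claims are hypothetical, and the SHA-256 digest is only a stand-in for the ledger-anchored signatures and zero-knowledge proofs the real stacks use; this illustrates the data model, not Hyperledger’s API.

# Illustrative sketch only: shows the shape of a W3C-style verifiable
# credential. Real Hyperledger Aries/Indy deployments use zero-knowledge
# proofs and ledger-anchored DIDs, not the stand-in digest used here.
import hashlib
import json
from datetime import datetime, timezone

def issue_credential(issuer_did: str, holder_did: str, claims: dict) -> dict:
    """Assemble a credential and attach a stand-in integrity digest."""
    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "issuer": issuer_did,
        "issuanceDate": datetime.now(timezone.utc).isoformat(),
        "credentialSubject": {"id": holder_did, **claims},
    }
    # Stand-in for a cryptographic signature over the canonicalized payload.
    payload = json.dumps(credential, sort_keys=True).encode()
    credential["proof"] = {"digest": hashlib.sha256(payload).hexdigest()}
    return credential

def verify_credential(credential: dict) -> bool:
    """Recompute the digest; real verifiers check signatures and revocation."""
    body = {k: v for k, v in credential.items() if k != "proof"}
    payload = json.dumps(body, sort_keys=True).encode()
    return credential["proof"]["digest"] == hashlib.sha256(payload).hexdigest()

# Example: a credit union attests a member's verified status (cf. MemberPass).
vc = issue_credential(
    issuer_did="did:example:credit-union",   # hypothetical DIDs
    holder_did="did:example:member-1234",
    claims={"memberVerified": True},
)
assert verify_credential(vc)

The point of the shape is that the holder, not the issuer, presents the credential, and any verifier can check its integrity without contacting the issuer; in Aries and Indy that check happens against cryptographic material anchored on a ledger rather than the toy digest shown here.
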
Digital Currency

Blockchain technology has already helped rewrite some of the rules for currencies and payments. Governments worldwide are now moving towards Central Bank Digital Currencies (CBDCs), digital forms of their official currencies. These will give central banks a more flexible, more secure form of their national currencies and lower the risks from alternative cryptocurrencies. Backed by a central bank, any CBDC, whether developed for wholesale or retail use, will be legal tender with the stability that regulation confers.

Governments are moving carefully, but many of the early projects are using Hyperledger platforms. The goals range from modernizing payment processes to removing barriers and costs associated with back-end settlement to boosting financial inclusion.

This fireside chat on CBDCs from Hyperledger Global Forum, featuring experts from Accenture and DTCC, offers a great overview of the benefits of and different approaches to these new currencies, plus a look at the current landscape of CBDC research and experimentation across the globe.

  • The Eastern Caribbean Central Bank launched DCash, built on Hyperledger Fabric, as a mobile phone app for person-to-person and merchant payments. The ECCB stated at an OECD event in 2020 that it selected Hyperledger Fabric for its strong security architecture (a private, permissioned blockchain with strong identity management) and its open source code, which contribute to its security, flexibility, and scalability, among other desired attributes.
  • The National Bank of Cambodia created Bakong, a fiat-backed digital currency, using Hyperledger Iroha to promote use of its national currency, giving the large share of its population without bank accounts a mobile payment system and cutting costs for interbank transfers.
  • Additionally, a mix of retail and wholesale CBDC trials using Hyperledger Besu has helped several other countries, including Thailand and Spain, advance planning for new digital fiat currencies.

These efforts are made possible by the hundreds of enterprises that support the Hyperledger Foundation. To learn how your organization can get involved, click here.

The post Hyperledger Foundation 2021 End-of-Year Update appeared first on Linux Foundation.

State of FinOps Survey 2022: Built by and for the FinOps Community

Fri, 12/03/2021 - 03:00

The FinOps Foundation team is beyond excited to launch the 2022 State of FinOps Survey. Yes, there are plenty of self-published industry reports out there, but what makes this one different is that it’s built by and for the FinOps community.

Why do we create the State of FinOps each year?

FinOps, the operating model for cloud financial management, is a fundamental practice for organizations leveraging the cloud, aligning cloud costs with business value and outcomes. The FinOps Foundation community represents a broad spectrum of practitioners, including many leaders and forerunners in the space. Annual surveys gather a snapshot of current activities and perspectives across the community to deepen understanding and surface trends.

The results of each State of FinOps Survey become a report that delivers insights and benchmarks, which help inform the roadmap for how the Foundation can improve its educational materials to advance practitioners and their practices. The better we understand how our community and practitioners are growing, how they are maturing their practices, and which challenges they are struggling with, the better community projects can support everyone.

Evolving from the previous year

The first State of FinOps Survey and Report, released in 2021, established a report template and data visualization style and served as a first test of how our information and insights could help the community. It succeeded in drawing constructive feedback from analysts, the press, and the community.

In our first year:

  • We created the industry’s first community-focused and led survey and report on the FinOps discipline
  • Community members held us accountable for achieving key outcomes that we promised would be built from the report’s insights
  • We strengthened our FinOps Framework by adding user-generated projects and stories by practitioners of various skill levels and from all types of organizations across the world

For the 2022 report, we focused on incorporating even more practitioner and leadership feedback from the beginning. We also made a significant investment in the academic and data integrity of the report.

As FinOps practitioners and leaders worldwide look to this resource as a means of guiding and building their practices, we needed to ensure that the body of work contained a blend of academic merit and data-driven depth.

Doubling down on community and practitioner involvement

We created several working groups of staff and FinOps practitioners to help us build a better survey and report for 2022. These groups looked at the 2021 report and gave us constructive feedback to help us create a better asset and resource for the community.

“By refining the survey for 2022 based on community feedback, it can be used for multiple areas and projects by the community in the coming year – it will be exciting to understand all the different perspectives in the FinOps category.” Joe Daly, Director of Community, FinOps Foundation

Leveraging Linux Foundation’s research team

A majority of the FinOps Foundation staff have FinOps experience, but we were honest with ourselves about needing more data analysis help with this year’s survey and report. Fortunately, we were able to utilize the expertise of the Linux Foundation’s newly established Research Team.

The team was with us from the outset, working closely with FinOps experts to understand our community-centric approach.

“Designing the State of FinOps 2022 survey was a truly collaborative effort. It was clear from the beginning that establishing a Working Group to aid in the survey instrument’s design was necessary to generate the kind of data that would add value across the FinOps ecosystem.” Stephen Hendrick, VP Research

With LF Research’s help and support, we also translated the 2022 survey into French to engage FinOps practitioners in French-speaking regions, who represent a significant demographic of our community. The translation is a new element in this year’s research effort, making the survey more accessible and inclusive.

We are very thankful for their guidance in structuring our survey and look forward to their expertise once we start analyzing results and building the 2022 report.

Building a long-lasting resource for our community

We learned a lot of lessons from the 2021 survey and report. One of the biggest was internal: the survey collects such a variety of information and data that we faced a choice with this research tool. We could keep building one-off reports, or do the work to build something long-term for the community.

Our community leaders advised us to focus more on generating annual benchmarks and insights based on key practices. They also helped us iron out the method and approach to our questions, aligning them more closely with the FinOps Framework to get the best data possible from the survey.

Our goal is something more than another data report added to the Internet. We want to create a valuable tool that FinOps practitioners and partners can use to improve their practices, one informed and built by the community, for the community.

Ideal outcomes from the 2022 survey

With the survey in its first weeks of collecting data, we’re very interested in measuring and understanding the following:

  • Are practitioners maturing their FinOps practices? What FinOps “maturity level” do they self-identify as?
  • What phase of the FinOps lifecycle are practitioners operating in for specific capabilities, how did they get there, and what are they planning to do next?
  • What are the benchmarks practitioners use for FinOps capabilities?
  • How do practitioners measure their success when implementing their FinOps capabilities?


We’re looking forward to seeing how the results inform our hypotheses and questions.

Building upon this report with open source standards

It turns out that, when done right, open source standards can encourage contribution and community even around a topic like cloud financial management. We’re very proud to have found a way to work closely with our community while championing Linux Foundation open source principles.

Do you know someone who qualifies to take the State of FinOps Survey? If so, feel free to share it with them. The survey is open, and we look forward to learning more about the FinOps community and industry to help strengthen it.

The post State of FinOps Survey 2022: Built by and for the FinOps Community appeared first on Linux Foundation.
