Open-source News

Ubuntu Desktop Exploring Microsoft Azure AD Integration

Phoronix - Fri, 05/20/2022 - 18:22
Since Ubuntu 20.10 there has been Active Directory integration in the Ubiquity installer, and now the latest effort by Canonical to enhance the Ubuntu desktop for the enterprise centers on Microsoft Azure Active Directory (Azure AD) integration...

Linux Patches Updated For Better Power Management On AMX "Sapphire Rapids" Servers

Phoronix - Fri, 05/20/2022 - 17:54
While the kernel-side Intel AMX support landed in Linux 5.16 and KVM support for AMX in Linux 5.17, other Linux patches around Advanced Matrix Extensions (AMX) remain floating around. One important patch-set was updated this week for ensuring proper power management on AMX-enabled processors, coming with Xeon Scalable "Sapphire Rapids" this year...

XWayland Adds New Option To Expose Dummy Modes For Gamescope / Steam Deck

Phoronix - Fri, 05/20/2022 - 17:27
Merged yesterday to the mainline X.Org Server for XWayland is the "-force-xrandr-emulation" option added for Valve's Gamescope / Steam Deck usage...

AOMP 15.0-2 Released For Radeon OpenMP Compiler

Phoronix - Fri, 05/20/2022 - 17:00
AMD has released a new version of AOMP, its downstream of the LLVM/Clang compiler where it stages its latest patches focused on OpenMP GPU offload support for its Radeon graphics cards / Instinct accelerators...

How to rename a branch, delete a branch, and find the author of a branch in Git

opensource.com - Fri, 05/20/2022 - 15:00
By Agil Antony

One of Git's primary strengths is its ability to "fork" work into different branches.

If you're the only person using a repository, the benefits are modest, but once you start working with many other contributors, branching is essential. Git's branching mechanism allows multiple people to work on a project, and even on the same file, at the same time. Users can introduce different features, independent of one another, and then merge the changes back to a main branch later. A branch created specifically for one purpose, such as adding a new feature or fixing a known bug, is sometimes called a topic branch.
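For a concrete sense of how this works, here is a minimal topic-branch workflow (the branch name add-login is just a hypothetical example):

$ git checkout -b add-login           # create a topic branch and switch to it
$ git commit -a -m "Add login form"   # commit work on the topic branch
$ git checkout main                   # switch back to the main branch
$ git merge add-login                 # merge the finished work into main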

Once you start working with branches, it's helpful to know how to manage them. Here are the most common tasks developers do with Git branches in the real world.

Rename a branch using Git

Renaming a topic branch is useful if you have named a branch incorrectly or you want to use the same branch to switch between different bugs or tasks after merging the content into the main branch.

Rename a local branch

1. Rename the local branch:

$ git branch -m <old_branch_name> <new_branch_name>

Of course, this only renames your copy of the branch. If the branch exists on the remote Git server, continue to the next steps.

2. Push the new branch to create a new remote branch:

$ git push origin <new_branch_name>

3. Delete the old remote branch:

$ git push origin -d -f <old_branch_name>

Rename the current branch

When the branch you want to rename is your current branch, you don't need to specify the existing branch name.

1. Rename the current branch:

$ git branch -m <new_branch_name>

2. Push the new branch to create a new remote branch:

$ git push origin <new_branch_name>

3. Delete the old remote branch:

$ git push origin -d -f <old_branch_name>

Delete local and remote branches using Git

As part of good repository hygiene, it's often recommended that you delete a branch after ensuring you have merged the content into the main branch.
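Before deleting anything, it can help to see which local branches have already been merged. Assuming your central branch is named main, this is one quick check:

$ git branch --merged main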

Delete a local branch

Deleting a local branch only deletes the copy of that branch that exists on your system. If the branch has already been pushed to the remote repository, it remains available to everyone working with the repo.

1. Check out the central branch of your repository (such as main or master):

$ git checkout <central_branch_name>

2. List all the branches (local as well as remote):

$ git branch -a

3. Delete the local branch:

$ git branch -d <name_of_the_branch>
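Note that git branch -d refuses to delete a branch containing work that hasn't been merged yet. If you are certain you want to discard that work, the force variant deletes it anyway (use with care):

$ git branch -D <name_of_the_branch>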

To remove all your local topic branches and retain only the main branch:

$ git branch | grep -v main | xargs git branch -d

Delete a remote branch

Deleting a remote branch only deletes the copy of that branch that exists on the remote server. Should you decide that you didn't want to delete the branch after all, you can re-push it to the remote, such as GitHub, as long as you still have your local copy.
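For example, restoring a deleted remote branch from your local copy is just an ordinary push (assuming the remote is named origin, as in the examples above):

$ git push origin <name_of_the_branch>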

1. Check out the central branch of your repository (usually main or master):

$ git checkout <central_branch_name>

2. List all branches (local as well as remote):

$ git branch -a

3. Delete the remote branch:

$ git push origin -d <name_of_the_branch>

Find the author of a remote topic branch using Git

If you are the repository manager, you might need to do this so you can inform the author of an unused branch that it should be deleted.

1. Check out the central branch of your repository (such as main or master):

$ git checkout <central_branch_name>

2. Delete branch references to remote branches that do not exist:

$ git remote prune origin

3. List the author of all the remote topic branches in the repository, using the --format option along with special selectors (in this example, %(authorname) and %(refname) for author and branch name) to print just the information you want:

$ git for-each-ref --sort=authordate --format='%(authorname) %(refname)' refs/remotes

Example output:

tux  refs/remotes/origin/dev
agil refs/remotes/origin/main

You can add further formatting, including color coding and string manipulation, for easier readability:

$ git for-each-ref --sort=authordate \
--format='%(color:cyan)%(authordate:format:%m/%d/%Y %I:%M %p)%(align:25,left)%(color:yellow) %(authorname)%(end)%(color:reset)%(refname:strip=3)' \
refs/remotes

Example output:

01/16/2019 03:18 PM tux      dev
05/15/2022 10:35 PM agil     main

You can use grep to get the author of a specific remote topic branch:

$ git for-each-ref --sort=authordate \
--format='%(authorname) %(refname)' \
refs/remotes | grep <topic_branch_name>

Get good at branching

There are nuances to how Git branching works depending on the point at which you want to fork the code base, how the repository maintainer manages branches, squashing, rebasing, and so on, so these topics are well worth further reading.


A programmer's guide to GNU C Compiler

opensource.com - Fri, 05/20/2022 - 15:00
By Jayashree Huttanagoudar

C is a well-known programming language, popular with experienced and new programmers alike. Source code written in C uses standard English terms, so it's considered human-readable. However, computers only understand binary code. To convert code into machine language, you use a tool called a compiler.

A very common compiler is GCC (GNU C Compiler). The compilation process involves several intermediate steps and adjacent tools.

Install GCC

To confirm whether GCC is already installed on your system, use the gcc command:

$ gcc --version

If necessary, install GCC using your package manager. On Fedora-based systems, use dnf:

$ sudo dnf install gcc libgcc

On Debian-based systems, use apt:

$ sudo apt install build-essential

After installation, if you want to check where GCC is installed, then use:

$ whereis gcc

Simple C program using GCC

Here's a simple C program to demonstrate how to compile code using GCC. Open your favorite text editor and paste in this code:

// hellogcc.c
#include <stdio.h>

int main() {
    printf("Hello, GCC!\n");
    return 0;
}

Save the file as hellogcc.c and then compile it:

$ ls
hellogcc.c

$ gcc hellogcc.c

$ ls -1
a.out
hellogcc.c

As you can see, a.out is the default executable generated as a result of compilation. To see the output of your newly-compiled application, just run it as you would any local binary:

$ ./a.out
Hello, GCC!

Name the output file

The filename a.out isn't very descriptive, so if you want to give a specific name to your executable file, you can use the -o option:

$ gcc -o hellogcc hellogcc.c

$ ls
a.out  hellogcc  hellogcc.c

$ ./hellogcc
Hello, GCC!

This option is useful when developing a large application that needs to compile multiple C source files.
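For example, with two hypothetical source files, main.c and utils.c, you can compile and link them into one named executable in a single step:

$ gcc -o myapp main.c utils.c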

Intermediate steps in GCC compilation

There are actually four steps to compiling, even though GCC performs them automatically in simple use-cases.

  1. Pre-Processing: The GNU C Preprocessor (cpp) parses the headers (#include statements), expands macros (#define statements), and generates an intermediate file such as hellogcc.i with expanded source code.
  2. Compilation: During this stage, the compiler converts pre-processed source code into assembly code for a specific CPU architecture. The resulting assembly file is named with a .s extension, such as hellogcc.s in this example.
  3. Assembly: The assembler (as) converts the assembly code into machine code in an object file, such as hellogcc.o.
  4. Linking: The linker (ld) links the object code with the library code to produce an executable file, such as hellogcc.
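GCC also has flags to stop after each of the first three stages, which is handy when you only need the intermediate output. A quick sketch using this article's example file:

$ gcc -E hellogcc.c > hellogcc.i   # stop after pre-processing
$ gcc -S hellogcc.c                # stop after compilation, producing hellogcc.s
$ gcc -c hellogcc.c                # stop after assembly, producing hellogcc.o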

When running GCC, use the -v option to see each step in detail.

$ gcc -v -o hellogcc hellogcc.c

Manually compile code

It can be useful to experience each step of compilation because, under some circumstances, you don't need GCC to go through all the steps.

First, delete the files generated by GCC in the current folder, except the source file:

$ rm a.out hellogcc

$ ls
hellogcc.c

Pre-processor

Start the pre-processor, redirecting its output to hellogcc.i:

$ cpp hellogcc.c > hellogcc.i

$ ls
hellogcc.c  hellogcc.i

Take a look at the output file and notice how the pre-processor has included the headers and expanded the macros.
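To get a sense of how much the pre-processor pulled in, compare the line counts of the input and the output:

$ wc -l hellogcc.c hellogcc.i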

Compiler

Now you can compile the code into assembly. Use the -S option to have GCC stop after producing assembly code.

$ gcc -S hellogcc.i

$ ls
hellogcc.c  hellogcc.i  hellogcc.s

$ cat hellogcc.s

Take a look at the assembly code to see what's been generated.

Assembly

Use the assembly code you've just generated to create an object file:

$ as -o hellogcc.o hellogcc.s

$ ls
hellogcc.c  hellogcc.i  hellogcc.o  hellogcc.s

Linking

To produce an executable file, you must link the object file to the libraries it depends on. This isn't quite as easy as the previous steps, but it's educational:

$ ld -o hellogcc hellogcc.o
ld: warning: cannot find entry symbol _start; defaulting to 0000000000401000
ld: hellogcc.o: in function `main`:
hellogcc.c:(.text+0xa): undefined reference to `puts'

The linker reports an undefined reference to puts because the object file was never linked against the C library (libc.so). You must find suitable linker options to link the required libraries to resolve this. This is no small feat, and it depends on how your system is laid out.
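As an aside, letting the gcc driver perform the link step supplies all of these pieces automatically, which is the usual practice outside of exercises like this:

$ gcc -o hellogcc hellogcc.o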

When linking, you must link the object code to the core runtime (CRT) objects, a set of subroutines that help binary executables launch. The linker also needs to know where to find important system libraries, including libc and libgcc. Those libraries can be bracketed between the --start-group and --end-group options, which tell the linker to rescan the group until all symbols are resolved, while the CRT objects (crt1.o, crti.o, and crtn.o in the examples below) are passed by path.

This example uses paths as they appear on a RHEL 8 install, so you may need to adapt the paths depending on your system.

$ ld -dynamic-linker \
/lib64/ld-linux-x86-64.so.2 \
-o hellogcc \
/usr/lib64/crt1.o /usr/lib64/crti.o \
--start-group \
-L/usr/lib/gcc/x86_64-redhat-linux/8 \
-L/usr/lib64 -L/lib64 hellogcc.o \
-lgcc \
--as-needed -lgcc_s \
--no-as-needed -lc -lgcc \
--end-group \
/usr/lib64/crtn.o

The same linker procedure on Slackware uses a different set of paths, but you can see the similarity in the process:

$ ld -static -o hellogcc \
-L/usr/lib64/gcc/x86_64-slackware-linux/11.2.0/ \
/usr/lib64/crt1.o /usr/lib64/crti.o \
hellogcc.o /usr/lib64/crtn.o \
--start-group -lc -lgcc -lgcc_eh \
--end-group

Now run the resulting executable:

$ ./hellogcc
Hello, GCC!

Some helpful utilities

Below are a few utilities that help examine the file type, symbol table, and the libraries linked with the executable.

Use the file utility to determine the type of file:

$ file hellogcc.c
hellogcc.c: C source, ASCII text

$ file hellogcc.o
hellogcc.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped

$ file hellogcc
hellogcc: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=bb76b241d7d00871806e9fa5e814fee276d5bd1a, for GNU/Linux 3.2.0, not stripped

Then use the nm utility to list the symbol table of an object file:

$ nm hellogcc.o
0000000000000000 T main
                          U puts

Use the ldd utility to list dynamic link libraries:

$ ldd hellogcc
linux-vdso.so.1 (0x00007ffe3bdd7000)
libc.so.6 => /lib64/libc.so.6 (0x00007f223395e000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2233b7e000)

Wrap up

In this article, you learned the various intermediate steps in GCC compilation and the utilities for examining the file type, symbol table, and libraries linked with an executable. The next time you use GCC, you'll understand the steps it takes to produce a binary file for you, and when something goes wrong, you'll know how to step through the process to resolve problems.


Graylog: Industry Leading Log Management for Linux

Tecmint - Fri, 05/20/2022 - 13:48

The point of logging is to keep your servers happy, healthy, and secure. If you can’t find the data, you can’t use it effectively or efficiently. If you’re not logging what you need, you...

The post Graylog: Industry Leading Log Management for Linux first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

Linux To Introduce The Ability To Set The Hostname Before Userspace Starts

Phoronix - Fri, 05/20/2022 - 12:00
While the hostname on Linux systems is widely relied upon by different applications, setting the hostname is usually left to user-space, handled by the init system at boot. However, should any user-space process try to read the system hostname before it is set, the results can be unintended. So now, finally, in 2022, a "hostname=" kernel parameter is working its way upstream, should you want to ensure the hostname is set before user-space starts...

System76 Scheduler 1.2 Released - Now Has Defaults For SteamVR, Flatpak Process Support

Phoronix - Fri, 05/20/2022 - 06:34
System76-Scheduler, the Linux PC vendor's Rust-written daemon for enhancing Linux desktop responsiveness that ships as part of its Pop!_OS distribution, is out with a new feature release...

Success Story: Preparing for Kubernetes Certification Improves a Platform Development Engineer’s Skills

The Linux Foundation - Fri, 05/20/2022 - 04:02

This article originally appeared on the LF Training Blog. You can access all of the LF Training resources and courses, including Kubernetes certifications, here.

Faseela K. is a platform development engineer with a background in open source networking. As she saw the use of containers growing more than the VMs she was working with, she began studying Kubernetes and eventually decided to pursue a Certified Kubernetes Administrator (CKA). We spoke to her about her experience.

Linux Foundation: What was the experience like taking the CKA exam?

Faseela K: I was actually nervous, as this was the first online certification exam I was taking from home, so there was some uncertainty going in. Would the proctor turn up on time? Would the cloud platform where we take the exam get stuck? Would I be able to finish the exam on time? Those and several other such questions ran through my mind. But I put my concerns aside, had a very smooth exam experience, and was able to finish it without any difficulties.

LF: How did you prepare for the exam?

FK: I am a person who uses Kubernetes in my day to day work, so the topics in the syllabus were familiar to me. On top of that I did some practice tests and online courses. Preparing for the exam made so many of my day to day work related tasks much easier, and my level of expertise on K8s increased considerably.

LF: How did preparing for and taking CKA help you improve your skills?

FK: Though I work on K8s regularly, the range of concepts and capabilities I was using were minimal. Preparing for CKA helped me touch upon all areas of K8s, and the experience which I already had helped me get a complete end to end view of things. I can troubleshoot Kubernetes issues in a better way now, and go deep into each problem to find a solution.

LF: Tell us more about your current job role. What types of activities are you engaged in and how has the CKA helped with them?

FK: I currently work as a platform development engineer at Cisco, where we develop and maintain an enterprise Kubernetes platform. Troubleshooting, upgrading, networking, and system management of containerized platforms are part of our daily tasks, and CKA has helped me master all these areas with perfection. The training which I took to prepare for the CKA phenomenally transformed my perspective about Kubernetes administration, and this has helped me attain an end to end view of the product. Debugging any issues in the platform has become easier than ever, and the certification has given me even more confidence with fixing issues in a time sensitive manner.

LF: You mentioned to us previously you’d like to take the Certified Kubernetes Application Developer (CKAD) next; what appeals to you about that certification?

FK: I am planning to go deeper into containerized application development in my career, and hence the CKAD appealed to me. In fact, I completed the CKAD and became certified less than a month after achieving my CKA certification. The confidence I gained from the CKA helped me get through the second one faster.

LF: Tell us about your experience working on the OpenDaylight project. What prompted you to move from focusing on SDN to Kubernetes?

FK: I was previously a member of the Technical Steering Committee of the OpenDaylight project at The Linux Foundation, and made a lot of contributions to OpenDaylight. Working in open source has been the most amazing experience I have ever had in my life, and OpenDaylight gave me exposure to the various activities under LF Networking, while being a part of The Linux Foundation generally helped me engage with some of the top notch brains across organizations.

Coming together from across the globe during various conferences and DDFs, and working together across the company boundaries to solve common SDN problems has given me so much satisfaction. Over a period of time, containers were gaining traction over VMs, and I wanted to get more involved with containerization and platform development, where Kubernetes looked more promising.

LF: What are your future career goals?

FK: I intend to learn more about K8s internal implementation, and also to get involved with projects like istio, servicemesh and networkservicemesh in the future. My dream is to become a cloud native software developer, who promotes containerized application development in a cloud native way.

LF: What technology are you most interested in studying next?

FK: I am currently pursuing a course on the golang programming language. I also plan to take the Certified Kubernetes Security Specialist (CKS) exam if time permits.

The post Success Story: Preparing for Kubernetes Certification Improves a Platform Development Engineer’s Skills appeared first on Linux Foundation.

Open Source Software Security: Turning Sand into Concrete

The Linux Foundation - Fri, 05/20/2022 - 02:41

Last week I had the privilege of participating in the Open Source Software Security Summit II in Washington, DC. The Linux Foundation and OpenSSF gathered around 100 participants from enterprise, the U.S. government, and the open source community to agree on an action plan to help increase the security of open source software. 

If you were to look at the attendee list, you would likely be struck by the amount of collaboration among competitors on this issue. But, it isn’t a surprise to the open source community. Security is an excellent example of why organizations participate in open source software projects. 


A question I often receive when I tell people where I work is: Why would for-profit companies want to participate in open source projects? There are lots of reasons, of course, but it boils down to organizations coming together on a joint solution to a common problem so they can focus on innovating. For instance, film studios coming together around software for saving video files or color management, the finance industry improving traders’ desktops, or web companies supporting the languages and tools that make the web possible. And these are just a handful of examples.

Security is everyone’s concern and solutions benefit everyone. As one summit participant noted, “My direct competitors are in the room, but this is not an area where we compete. We all want to protect our customers, shareholders, and employees. . . 99% of the time we’re working on the same problems and trying to solve them in a smarter way.”


Everyone is better off by sharing vulnerabilities and solutions and working together toward a common goal of a more resilient ecosystem. No company is immune; everyone relies on multiple open source software packages to run their organization’s software. It is no surprise that competitors are working together on this – it is what the open source community does.

As we gathered in DC, my colleague Mark Miller talked to participants about their expectations and their perspectives on the meeting. When asked what he hoped to accomplish during the two day summit, Brian Fox of Sonatype said, “The world is asking for a response to make open source better. We are bringing together the government, vendors, competitors, [and] open source ecosystems to see what we can collectively do to raise the bar in open source security.” 


Another participant painted a picture which I find especially helpful, “I remember the old saying, we built the Internet on sand. I thought about that, underscoring the fact that sand is a part of concrete. This process means that we have an opportunity to shore up a lot of the foundation that we built the Internet on, the code that we’re developing.  It is an opportunity to improve upon what we currently have, which is a mixture of sand and concrete. How do we get it all to concrete?”

Enterprise companies and community representatives were at the summit, as well as key U.S. government decision makers. The high-level government officials were there the entire day, participating in the meeting and listening to the discussions. Their level of participation was striking to me. I have worked in and around government at the policy level for 25 years, and it is more common than not for government officials to be invited to speak, come and speak, and then leave right after they deliver their remarks. To see them there, engaged, one year after implementing the Executive Order on Improving the Nation’s Cybersecurity signals the importance they place on solving this problem and the respect they have for the group that gathered last week. Kudos to Anne Neuberger, her team, and the others who joined from around the U.S. government.

By the end of the first day, agreement was reached on a plan comprising 10 key initiatives:

  • Security Education: Deliver baseline secure software development education and certification to all.
  • Risk Assessment: Establish a public, vendor-neutral, objective-metrics-based risk assessment dashboard for the top 10,000 (or more) OSS components.
  • Digital Signatures: Accelerate the adoption of digital signatures on software releases.
  • Memory Safety: Eliminate root causes of many vulnerabilities through replacement of non-memory-safe languages.
  • Incident Response: Establish the OpenSSF Open Source Security Incident Response Team, security experts who can step in to assist open source projects during critical times when responding to a vulnerability.
  • Better Scanning: Accelerate discovery of new vulnerabilities by maintainers and experts through advanced security tools and expert guidance.
  • Code Audits: Conduct third-party code reviews (and any necessary remediation work) of up to 200 of the most-critical OSS components once per year.
  • Data Sharing: Coordinate industry-wide data sharing to improve the research that helps determine the most critical OSS components.
  • SBOMs Everywhere: Improve SBOM tooling and training to drive adoption.
  • Improved Supply Chains: Enhance the 10 most critical OSS build systems, package managers, and distribution systems with better supply chain security tools and best practices.

The full document, The Open Source Software Security Mobilization Plan, is available for you to review and download.

Of course, a plan without action isn’t worth much. Thankfully, organizations are investing resources. On the day the plan was delivered, $30 million had already been pledged to implement it. Organizations are also setting aside staff to support the project:

Google announced its “new ‘Open Source Maintenance Crew’, a dedicated staff of Google engineers who will work closely with upstream maintainers on improving the security of critical open source projects.” 

Amazon Web Services committed $10 million in funding in addition to engineering resources: “we will continue and increase our existing commitments of direct engineering contributions to critical open source projects.”

Intel is increasing its investment: “Intel has a long history of leadership and investment in open source software and secure computing. Over the last five years, Intel has invested over $250M in advancing open source software security. As we approach the next phase of Open Ecosystem initiatives, Intel is growing its pledge to support the Linux Foundation by double digit percentages.”

Microsoft is adding $5 million in additional funding because, “Open source software is core to nearly every company’s tech strategy. Collaboration and investment across the ecosystem strengthens and sustains security for everyone.” 

These investments are the start of an initiative to raise $150M toward implementation of the plan.

Last week’s meeting and the plan mark the beginning of a new and critical pooling of resources – knowledge, staff, and money – to further shore up the world’s digital infrastructure, all built upon a foundation of open source software. It is the next step (well, really several steps) in the journey.

If you want to join the efforts, start at the OpenSSF.

The post Open Source Software Security: Turning Sand into Concrete appeared first on Linux Foundation.
