Open-source News

A programmer's guide to GNU C Compiler

opensource.com - Fri, 05/20/2022 - 15:00
By Jayashree Huttanagoudar

C is a well-known programming language, popular with experienced and new programmers alike. Source code written in C uses standard English terms, so it's considered human-readable. However, computers only understand binary code. To convert code into machine language, you use a tool called a compiler.

A very common compiler is GCC (GNU C Compiler). The compilation process involves several intermediate steps and adjacent tools.

Install GCC

To confirm whether GCC is already installed on your system, use the gcc command:

$ gcc --version

If necessary, install GCC using your package manager. On Fedora-based systems, use dnf:

$ sudo dnf install gcc libgcc

On Debian-based systems, use apt:

$ sudo apt install build-essential

After installation, to check where GCC is installed, use:

$ whereis gcc

Simple C program using GCC

Here's a simple C program to demonstrate how to compile code using GCC. Open your favorite text editor and paste in this code:

// hellogcc.c
#include <stdio.h>

int main() {
    printf("Hello, GCC!\n");
    return 0;
}

Save the file as hellogcc.c and then compile it:

$ ls
hellogcc.c

$ gcc hellogcc.c

$ ls -1
a.out
hellogcc.c

As you can see, a.out is the default executable generated as a result of compilation. To see the output of your newly-compiled application, just run it as you would any local binary:

$ ./a.out
Hello, GCC!

Name the output file

The filename a.out isn't very descriptive, so if you want to give a specific name to your executable file, you can use the -o option:

$ gcc -o hellogcc hellogcc.c

$ ls
a.out  hellogcc  hellogcc.c

$ ./hellogcc
Hello, GCC!

This option is useful when developing a large application that needs to compile multiple C source files.
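
For instance, a hypothetical project split across two source files (the file names below are illustrative, not part of this article's example) can be compiled and linked into a single executable in one command:

$ gcc -o myapp main.c utils.c

GCC compiles each source file and then links the results, leaving only the myapp executable.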

Intermediate steps in GCC compilation

There are actually four steps to compiling, even though GCC performs them automatically in simple use-cases.

  1. Pre-Processing: The GNU C Preprocessor (cpp) parses the headers (#include statements), expands macros (#define statements), and generates an intermediate file such as hellogcc.i with expanded source code.
  2. Compilation: During this stage, the compiler converts pre-processed source code into assembly code for a specific CPU architecture. The resulting assembly file is named with a .s extension, such as hellogcc.s in this example.
  3. Assembly: The assembler (as) converts the assembly code into machine code in an object file, such as hellogcc.o.
  4. Linking: The linker (ld) links the object code with the library code to produce an executable file, such as hellogcc.
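
GCC itself can stop after any of the first three stages. As a complement to the manual commands shown later in this article, here is a minimal sketch of the corresponding GCC options (run from the directory containing hellogcc.c):

$ gcc -E hellogcc.c -o hellogcc.i   # stop after pre-processing
$ gcc -S hellogcc.i                 # stop after compilation, producing hellogcc.s
$ gcc -c hellogcc.s                 # stop after assembly, producing hellogcc.o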

When running GCC, use the -v option to see each step in detail.

$ gcc -v -o hellogcc hellogcc.c

(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)
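
A related option worth knowing: -save-temps tells GCC to keep the intermediate files it normally deletes, writing them into the current directory. This is a quick way to inspect every stage without running the tools by hand (if you try it, remove the extra files afterward so your directory matches the listings below):

$ gcc -save-temps -o hellogcc hellogcc.c
$ ls hellogcc.*
hellogcc.c  hellogcc.i  hellogcc.o  hellogcc.s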

Manually compile code

It can be useful to experience each step of compilation because, under some circumstances, you don't need GCC to go through all the steps.

First, delete the files generated by GCC in the current folder, except the source file.

$ rm a.out hellogcc

$ ls
hellogcc.c

Pre-processor

First, start the pre-processor, redirecting its output to hellogcc.i:

$ cpp hellogcc.c > hellogcc.i

$ ls
hellogcc.c  hellogcc.i

Take a look at the output file and notice how the pre-processor has included the headers and expanded the macros.
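
One quick way to appreciate how much the pre-processor pulls in is to compare line counts between the original source and the pre-processed output; on a typical glibc-based system the .i file runs to several hundred lines or more (the exact number varies by system and library version):

$ wc -l hellogcc.c hellogcc.i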

Compiler

Now you can compile the code into assembly. Use the -S option to tell GCC to stop after generating assembly code.

$ gcc -S hellogcc.i

$ ls
hellogcc.c  hellogcc.i  hellogcc.s

$ cat hellogcc.s

Take a look at the assembly code to see what's been generated.
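
On x86_64 with default options, for instance, you can usually spot your message stored as a string constant in the assembly output (the exact directives vary by GCC version and target architecture):

$ grep string hellogcc.s
        .string "Hello, GCC!"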

Assembly

Use the assembly code you've just generated to create an object file:

$ as -o hellogcc.o hellogcc.s

$ ls
hellogcc.c  hellogcc.i  hellogcc.o  hellogcc.s

Linking

To produce an executable file, you must link the object file to the libraries it depends on. This isn't quite as easy as the previous steps, but it's educational:

$ ld -o hellogcc hellogcc.o
ld: warning: cannot find entry symbol _start; defaulting to 0000000000401000
ld: hellogcc.o: in function `main':
hellogcc.c:(.text+0xa): undefined reference to `puts'

The undefined reference to puts occurs because the object file hasn't been linked against the C library (libc.so), which provides that function (GCC turns this simple printf call into a call to puts). You must find suitable linker options to pull in the required libraries. This is no small feat, and it depends on how your system is laid out.

When linking, you must link the object code against the C runtime (CRT) objects, a small set of startup and teardown routines that get a binary executable running and shut it down cleanly. The linker also needs to know where to find important system libraries, including libc and libgcc. Because these libraries can depend on one another, they are typically enclosed between the --start-group and --end-group options, which tell the linker to rescan the group until all references are resolved, while the startup and teardown code comes from object files such as crt1.o, crti.o, and crtn.o (or crtbegin.o and crtend.o).
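
As a shortcut, you can also let the gcc driver perform just the link step. It invokes ld with the correct CRT objects and library search paths for your system, which is handy when you simply want a working binary; the manual commands below show what that convenience hides:

$ gcc -o hellogcc hellogcc.o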

This example uses paths as they appear on a RHEL 8 install, so you may need to adapt the paths depending on your system.

$ ld -dynamic-linker \
/lib64/ld-linux-x86-64.so.2 \
-o hellogcc \
/usr/lib64/crt1.o /usr/lib64/crti.o \
--start-group \
-L/usr/lib/gcc/x86_64-redhat-linux/8 \
-L/usr/lib64 -L/lib64 hellogcc.o \
-lgcc \
--as-needed -lgcc_s \
--no-as-needed -lc -lgcc \
--end-group \
/usr/lib64/crtn.o

The same linker procedure on Slackware uses a different set of paths, but you can see the similarity in the process:

$ ld -static -o hellogcc \
-L/usr/lib64/gcc/x86_64-slackware-linux/11.2.0/ \
/usr/lib64/crt1.o /usr/lib64/crti.o \
hellogcc.o /usr/lib64/crtn.o \
--start-group -lc -lgcc -lgcc_eh \
--end-group

Now run the resulting executable:

$ ./hellogcc
Hello, GCC!

Some helpful utilities

Below are a few utilities that help examine the file type, symbol table, and the libraries linked with the executable.

Use the file utility to determine the type of file:

$ file hellogcc.c
hellogcc.c: C source, ASCII text

$ file hellogcc.o
hellogcc.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped

$ file hellogcc
hellogcc: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=bb76b241d7d00871806e9fa5e814fee276d5bd1a, for GNU/Linux 3.2.0, not stripped

Then use the nm utility to list the symbol table of an object file:

$ nm hellogcc.o
0000000000000000 T main
                          U puts

Use the ldd utility to list dynamic link libraries:

$ ldd hellogcc
linux-vdso.so.1 (0x00007ffe3bdd7000)
libc.so.6 => /lib64/libc.so.6 (0x00007f223395e000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2233b7e000)

Wrap up

In this article, you learned the various intermediate steps in GCC compilation and the utilities for examining the file type, symbol table, and libraries linked with an executable. The next time you use GCC, you'll understand the steps it takes to produce a binary file for you, and when something goes wrong, you'll know how to step through the process to resolve problems.

Graylog: Industry Leading Log Management for Linux

Tecmint - Fri, 05/20/2022 - 13:48
The point of logging is to keep your servers happy, healthy, and secure. If you can’t find the data, you can’t use it effectively or efficiently. If you’re not logging what you need, you

The post Graylog: Industry Leading Log Management for Linux first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

Linux To Introduce The Ability To Set The Hostname Before Userspace Starts

Phoronix - Fri, 05/20/2022 - 12:00
While the hostname on Linux systems is widely relied upon by different applications, setting the hostname is usually left to user-space by the init system at boot. However, should any user-space process try to read the system hostname before it is set, it could lead to unintended results. So now, finally, in 2022 there is a "hostname=" kernel parameter working its way upstream, should you want to ensure the hostname is set before user-space is started...
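
If the parameter is merged as proposed, setting it would presumably look like any other kernel command-line option appended to the boot entry in your bootloader configuration (the value shown here is only illustrative):

hostname=buildbox01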

System76 Scheduler 1.2 Released - Now Has Defaults For SteamVR, Flatpak Process Support

Phoronix - Fri, 05/20/2022 - 06:34
System76-Scheduler, the Linux PC vendor's effort to provide a Rust-written daemon that enhances Linux desktop responsiveness and ships as part of its Pop!_OS distribution, is out with a new feature release...

Success Story: Preparing for Kubernetes Certification Improves a Platform Development Engineer’s Skills

The Linux Foundation - Fri, 05/20/2022 - 04:02

This article originally appeared on the LF Training Blog. You can access all of the LF Training resources and courses, including Kubernetes certifications, here.

Faseela K. is a platform development engineer with a background in open source networking. As she saw the use of containers growing more than the VMs she was working with, she began studying Kubernetes and eventually decided to pursue a Certified Kubernetes Administrator (CKA). We spoke to her about her experience.

Linux Foundation: What was the experience like taking the CKA exam?

Faseela K: I was actually nervous, as this was the first online certification exam I was taking from home, so there was some uncertainty going in. Would the proctor turn up on time? Will the cloud platform where we are taking the exam get stuck? Will I be able to finish the exam on time? Those and several other such questions ran through my mind. But I turned down all my concerns, had a very smooth exam experience, and was able to finish it without any difficulties.

LF: How did you prepare for the exam?

FK: I am a person who uses Kubernetes in my day to day work, so the topics in the syllabus were familiar to me. On top of that I did some practice tests and online courses. Preparing for the exam made so many of my day to day work related tasks much easier, and my level of expertise on K8s increased considerably.

LF: How did preparing for and taking CKA help you improve your skills?

FK: Though I work on K8s regularly, the range of concepts and capabilities I was using were minimal. Preparing for CKA helped me touch upon all areas of K8s, and the experience which I already had helped me get a complete end to end view of things. I can troubleshoot Kubernetes issues in a better way now, and go deep into each problem to find a solution.

LF: Tell us more about your current job role. What types of activities are you engaged in and how has the CKA helped with them?

FK: I currently work as a platform development engineer at Cisco, where we develop and maintain an enterprise Kubernetes platform. Troubleshooting, upgrading, networking, and system management of containerized platforms are part of our daily tasks, and CKA has helped me master all these areas with perfection. The training which I took to prepare for the CKA phenomenally transformed my perspective about Kubernetes administration, and this has helped me attain an end to end view of the product. Debugging any issues in the platform has become easier than ever, and the certification has given me even more confidence with fixing issues in a time sensitive manner.

LF: You mentioned to us previously you’d like to take the Certified Kubernetes Application Developer (CKAD) next; what appeals to you about that certification?

FK: I am planning to go deeper into containerized application development in my career, and hence CKAD was appealing to me. In fact, I already completed CKAD and became CKAD certified within less than a month of achieving my CKA certification. The confidence I gained after CKA helped me try the second one also faster.

LF: Tell us about your experience working on the OpenDaylight project. What prompted you to move from focusing on SDN to Kubernetes?

FK: I was previously a member of the Technical Steering Committee of the OpenDaylight project at The Linux Foundation, and made a lot of contributions to OpenDaylight. Working in open source has been the most amazing experience I have ever had in my life, and OpenDaylight gave me exposure to the various activities under LF Networking, while being a part of The Linux Foundation generally helped me engage with some of the top notch brains across organizations.

Coming together from across the globe during various conferences and DDFs, and working together across the company boundaries to solve common SDN problems has given me so much satisfaction. Over a period of time, containers were gaining traction over VMs, and I wanted to get more involved with containerization and platform development, where Kubernetes looked more promising.

LF: What are your future career goals?

FK: I intend to learn more about K8s internal implementation, and also to get involved with projects like istio, servicemesh and networkservicemesh in the future. My dream is to become a cloud native software developer, who promotes containerized application development in a cloud native way.

LF: What technology are you most interested in studying next?

FK: I am currently pursuing a course on the golang programming language. I also plan to take the Certified Kubernetes Security Specialist (CKS) exam if time permits.

The post Success Story: Preparing for Kubernetes Certification Improves a Platform Development Engineer’s Skills appeared first on Linux Foundation.

Open Source Software Security: Turning Sand into Concrete

The Linux Foundation - Fri, 05/20/2022 - 02:41

Last week I had the privilege of participating in the Open Source Software Security Summit II in Washington, DC. The Linux Foundation and OpenSSF gathered around 100 participants from enterprise, the U.S. government, and the open source community to agree on an action plan to help increase the security of open source software. 

If you were to look at the attendee list, you would likely be struck by the amount of collaboration among competitors on this issue. But, it isn’t a surprise to the open source community. Security is an excellent example of why organizations participate in open source software projects. 

A question I often receive when I tell people where I work is, "Why would for-profit companies want to participate in open source projects?" There are lots of reasons, of course, but it boils down to organizations coming together on a joint solution to a common problem so they can focus on innovating. For instance, film studios coming together around software for saving video files or managing color, the finance industry improving traders' desktops, or web companies supporting the languages and tools that make the web possible. And these are just a handful of examples.

Security is everyone’s concern and solutions benefit everyone. As one summit participant noted, “My direct competitors are in the room, but this is not an area where we compete. We all want to protect our customers, shareholders, and employees. . . 99% of the time we’re working on the same problems and trying to solve them in a smarter way.”

Everyone is better off sharing vulnerabilities and solutions and working together toward the common goal of a more resilient ecosystem. No company is immune; everyone relies on multiple open source software packages to run their organization's software. It is no surprise that competitors are working together on this; it is what the open source community does.

As we gathered in DC, my colleague Mark Miller talked to participants about their expectations and their perspectives on the meeting. When asked what he hoped to accomplish during the two day summit, Brian Fox of Sonatype said, “The world is asking for a response to make open source better. We are bringing together the government, vendors, competitors, [and] open source ecosystems to see what we can collectively do to raise the bar in open source security.” 

Another participant painted a picture which I find especially helpful, “I remember the old saying, we built the Internet on sand. I thought about that, underscoring the fact that sand is a part of concrete. This process means that we have an opportunity to shore up a lot of the foundation that we built the Internet on, the code that we’re developing.  It is an opportunity to improve upon what we currently have, which is a mixture of sand and concrete. How do we get it all to concrete?”

Enterprise companies and community representatives were at the summit, as well as key U.S. government decision makers. The high-level government officials were there the entire day, participating in the meeting and listening to the discussions. Their level of participation was striking to me. I have worked in and around government at the policy level for 25 years, and it is more common than not for government officials to be invited to speak, deliver their remarks, and then leave right away. To see them there, engaged, one year after implementing the Executive Order on Improving the Nation's Cybersecurity signals the importance they place on solving this problem and the respect they have for the group that gathered last week. Kudos to Anne Neuberger, her team, and the others who joined from around the U.S. government.

By the end of the first day, agreement was reached on a plan, comprised of 10 key initiatives:

  • Security Education: Deliver baseline secure software development education and certification to all.
  • Risk Assessment: Establish a public, vendor-neutral, objective-metrics-based risk assessment dashboard for the top 10,000 (or more) OSS components.
  • Digital Signatures: Accelerate the adoption of digital signatures on software releases.
  • Memory Safety: Eliminate root causes of many vulnerabilities through replacement of non-memory-safe languages.
  • Incident Response: Establish the OpenSSF Open Source Security Incident Response Team, security experts who can step in to assist open source projects during critical times when responding to a vulnerability.
  • Better Scanning: Accelerate discovery of new vulnerabilities by maintainers and experts through advanced security tools and expert guidance.
  • Code Audits: Conduct third-party code reviews (and any necessary remediation work) of up to 200 of the most-critical OSS components once per year.
  • Data Sharing: Coordinate industry-wide data sharing to improve the research that helps determine the most critical OSS components.
  • SBOMs Everywhere: Improve SBOM tooling and training to drive adoption.
  • Improved Supply Chains: Enhance the 10 most critical OSS build systems, package managers, and distribution systems with better supply chain security tools and best practices.

The full document, The Open Source Software Security Mobilization Plan, is available for you to review and download.

Of course, a plan without action isn't worth much. Thankfully, organizations are investing resources. On the day the plan was delivered, $30 million had already been pledged toward its implementation. Organizations are also setting aside staff to support the project:

Google announced its “new ‘Open Source Maintenance Crew’, a dedicated staff of Google engineers who will work closely with upstream maintainers on improving the security of critical open source projects.” 

Amazon Web Services committed $10 million in funding in addition to engineering resources: "we will continue and increase our existing commitments of direct engineering contributions to critical open source projects."

Intel is increasing its investment: “Intel has a long history of leadership and investment in open source software and secure computing. Over the last five years, Intel has invested over $250M in advancing open source software security. As we approach the next phase of Open Ecosystem initiatives, Intel is growing its pledge to support the Linux Foundation by double digit percentages.”

Microsoft is adding $5 million in additional funding because, “Open source software is core to nearly every company’s tech strategy. Collaboration and investment across the ecosystem strengthens and sustains security for everyone.” 

These investments are the start of an initiative to raise $150M toward implementation of the project. 

Last week’s meeting and the plan mark the beginning of a new and critical pooling of resources – knowledge, staff, and money – to further shore up the world’s digital infrastructure, all built upon a foundation of open source software. It is the next step (well, really several steps) in the journey.

If you want to join the efforts, start at the OpenSSF.

The post Open Source Software Security: Turning Sand into Concrete appeared first on Linux Foundation.

Google Makes Public Their Open-Source PSP Security Protocol

Phoronix - Fri, 05/20/2022 - 01:40
Hearing "open-source", "PSP", and "security" all together got me excited with my initial reaction thinking it was about AMD's Platform Security Processor (PSP) albeit that's not the case here. Google's PSP announced today is the "PSP Security Protocol" and is designed for dealing with cryptographic hardware offloading at data center scale and used by Google already in production...
