
Happy birthday, Linux! Here are 6 Linux origin stories

Thu, 08/25/2022 - 20:01
AmyJune Hineline

On August 25, 1991, Linus Torvalds announced Linux to the world. All of us have a story to tell about Linux. I told my story a couple of months ago, but for those who weren't here: My first exposure to Linux was when my grassroots hospice organization moved from paper to digital charting. We didn't have the funding to get something proprietary, but the IT department had Linux set up on our old machine, and we used the GNOME desktop and OpenOffice to start our journey in creating digital assets.

I recently asked some Opensource.com authors this simple question:

What was your first Linux experience?

From VAX to Linux

For my junior year of high school, I was shipped off to a state-run "nerd farm" (that's the North Carolina School of Science and Mathematics). On our first day on campus, the juniors were each assigned a senior big brother or sister. My senior big sister ditched me because she had tickets to a big outdoor music festival with her boyfriend, but when they came back all sunburned, we hung out in my mostly empty dorm room eating takeout on the floor. That was when I first met Matt.

As the year wore on, Matt showed me how to help as a student sysadmin, changing backup reels for the VAX mainframe and doing basic tasks on the "big" workstation that doubled as a campus-wide UNIX server. He had a PC in his room running GNU and X Windows on a Minix kernel, but he'd found a cool new alternative that some Finnish student had started posting source code for on Usenet. I knew, right then and there, that was my future.

When I got home for the summer, the first thing I did was buy a shiny new 486 with some of my savings from odd jobs, fired up a SLIP connection through our local BBS, and downloaded and decoded all the bits and pieces I'd need to bootstrap and compile Linux 0.96.

Matt and I mostly lost touch after he graduated, but I'll always owe him for introducing me to the operating system kernel I'd use for the rest of my life. I think of him every time I see that tattered old copy of Running Linux adorning my office shelf.

The "Matt" in this story is Matthew D. Welsh. After we lost touch, he became the original maintainer of The Linux Documentation Project, and the author of the first edition of the O'Reilly Press book Running Linux.

Jeremy Stanley

Computer club

Friends at a computer club inspired me to try Linux.

I used Linux to help students learn more about other operating systems from 2012 to 2015, and I would say that Linux has taught me more about computers in general.

It has probably affected my "volunteer career" because to this day I write articles about being a neurodiverse person in the Linux world. I also attend and join different Linux events and groups, so I've had access to a community I probably wouldn't have known otherwise.

Rikard Grossman-Nielsen

Galaxy

My Linux story started a long time ago in a galaxy far, far away. In the early 90s, I spent a year in the US as a high school student. Over there, I had access to e-mail and the Internet. When I came back home to Hungary, I finished high school without any Internet access. There were no public Internet providers in Hungary at that time; only higher education and some research labs had Internet access. But in 1994, I started university.

The very first week of school, I was at the IT department asking for an email address. At that time, there was no Gmail, Hotmail, or anything similar. Not even teachers got an email address automatically at the university. It took some time and persistence, but I eventually received my first university email address. At the same time, I was invited to work in the faculty-student IT group. At first, I got access to a Novell and a FreeBSD server, but soon I was asked to give Linux a try.

It was probably late 1994 when I installed my first Linux at home. It was Slackware, from a huge pile of floppy disks. At first, I only did a minimal installation, but later I also installed X so I could have a GUI. In early 1995, I installed my first-ever Linux server at the university on a spare machine, which was also the first Linux server at the university. At that time, I used the Fvwm2 window manager both at home and at the university.

At first, I studied environmental protection at the university, but my focus quickly became IT and IT security. After a while, I was running all the Linux and Unix servers of the faculty. I also had a part-time job elsewhere, running web and e-mail servers. I started a PhD on an environmental topic, but I ended up in IT. I've worked with FreeBSD and Linux ever since, helping sudo and syslog-ng users.

Peter Czanik

Education

I got introduced to Linux in the late 1990s by my brother and another friend. My first distro was Red Hat 5, and I didn't like it at the time. I couldn't get a GUI running, and all I could see was the command line, and I thought, "This is like MS-DOS." I didn't much care for that.

Then a year or more passed, and I picked up a copy of Red Hat 6.1 (I still have that copy) and installed it on an HP Vectra with a Cyrix chip. It had plenty of hard disk space, which was fortunate because the Red Hat Linux software came on a CD. I got the GUI working and set it up in our technology office at the school district where I was employed. I started experimenting with Linux and used the browser and Star Office (an ancestor of the modern LibreOffice), which was part of the included software.

A couple of years later, our school district needed a content filter, so I created one on an extra computer we had in our office. I got Squid, SquidGuard, and later DansGuardian installed on Linux, and we had the first self-hosted open source content filter in a public school district in Western New York State. Using this distribution, and later Mandrake Linux (an ancestor of Mageia Linux), on old Pentium II and Pentium III computers, I set up devices that used Samba to provide backup and profile storage for teachers and other staff. Teaming up with members of area school districts, I set up spam filtering for a fraction of the cost that proprietary solutions were charging at the time.

[ Related read 12 essential Linux commands for beginners ]

Franklinville Central School District is situated in an area of high rural poverty. I could see that using Linux and open source software was a way to level the playing field for our students, and as I continued to repurpose and refurbish the "cast-off" computers in our storage closets, I built a prototype Linux terminal server running Fedora Core 3 and 4. The software was part of the K12LTSP project. Older computers could be repurposed and PXE booted from this terminal server. At one point, we had several computer labs running the LTSP software. Our staff email server ran on RHEL 2.1, and later RHEL 3.0.

That journey, which began 25 years ago, continues to this day as I continue to learn and explore Linux. As my brother once said, "Linux is a software Erector set."

Don Watkins

Out in the open

My first experience with Linux was brief, and it involved a lot of floppies. As I recall, it was entertaining until my dear wife discovered that her laptop no longer had Windows 98 installed (she was only moderately relieved when I swapped back in the original drive and the "problem" disappeared). That was around 1998, with a Red Hat release that came with a book and a poor unsuspecting ThinkPad.

But really, at work I always had a nice Sun workstation on my desktop, so why bother? In 2005, we decided to move to France for a while, and I had to get a (usefully) working Toshiba laptop, which meant Linux. After asking around, I decided to go with Ubuntu, so that was my first "real" experience. I think I installed the first release (codenamed Warty Warthog), but soon I was on the latest. There were a few tears along the way, caused mostly by Toshiba's choice of hardware, but once it was running, that darned laptop was every bit as fast, and way more functional, for me than the old Sun. Eventually, we returned home, and I had a nice new Dell PC desktop. I installed Feisty Fawn, and I've never looked back.

I've tried a few other distros, but familiarity has its advantages, particularly when configuring stuff at the lowest of levels. Really though, if forced to switch, I think I would be happy with any decent Linux distro.

At a few points in time, I have had to do "kernel stuff", like bisecting for bugs and fiddling around with device drivers. I really can't remember the last time something that complicated was necessary, though.

[ Also read How to tune the Linux kernel with the /proc filesystem ]

Right now, I have two desktops and one laptop, all running Ubuntu 22.04, and two aging Cubox i4-pro devices running Armbian, a great Debian-based distro created for people using single-board computers and similar devices. I'm also responsible for a very small herd of virtual private servers running several distros, from CentOS to various versions of Ubuntu. That's not to mention a lot of Android-based stuff lying around, and we should recognize that it's Linux, too.

What really strikes me, as I read this back over, is how weird it all must sound to someone who has never escaped the clutches of a proprietary operating system.

Chris Hermansen

Getting involved

The first computer I bought was an Apple; the last Apple I owned was a IIe. I got fed up with Apple's tight proprietary control over its software and hardware and switched to an Amiga, which had a nice GUI (incidentally, I have never owned another Apple product).

Amiga eventually crumbled, and so I switched to Windows—what an awful transition! About this time, somewhere in the mid-to-late 90s, I was finding out about Linux, reading Linux magazines, and learning how to set up Linux machines. I decided to set up a dual-boot machine with Windows, then bought Red Hat Linux, which at the time came on a number of floppy disks. The kernel would have been 2.0-something. I loaded it onto my hard drive, and presto! I was using Linux—at the command line. At that time, Linux didn't read all of your hardware and make automatic adjustments, and it didn't have all the drivers you needed, as it does today.

So next came the process of searching BBSes and elsewhere to find drivers for the particular hardware I had, such as the graphics chip. Practically, this meant booting into Windows, saving the drivers to a floppy disk, booting back into Linux, and loading the drivers onto the hard drive. You then had to hand-edit the configuration files so that Linux knew which drivers to use. This all took weeks to accomplish, but I can still recall the delight I felt when I typed startx, and up popped X Windows!

If you wanted to update your kernel without waiting for and buying the next release, you had to compile it yourself. I remember I had to shut down every running program so the compiler didn't crash.

It's been smooth sailing ever since, with the switch to Fedora (then called "Fedora Core"), and the ease of updating software and the kernel.

Later, I got involved with the Scribus project, and I started reading and contributing to the mailing list. Eventually, I began contributing to the documentation. Somewhere around 2009, Christoph Schaefer and I, communicating over the internet and sharing files, were able to write Scribus, The Official Manual in the space of about nine months.

Greg Pittman

Our contributors share their first Linux experience on the 31st anniversary of the Linux kernel.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Chris Hermansen (Correspondent), Vancouver, Canada

Seldom without a computer of some sort since graduating from the University of British Columbia in 1978, I have been a full-time Linux user since 2005, a full-time Solaris and SunOS user from 1986 through 2005, and UNIX System V user before that.

On the technical side of things, I have spent a great deal of my career as a consultant, doing data analysis and visualization; especially spatial data analysis. I have a substantial amount of related programming experience, using C, awk, Java, Python, PostgreSQL, PostGIS and lately Groovy. I'm looking at Julia with great interest. I have also built a few desktop and web-based applications, primarily in Java and lately in Grails with lots of JavaScript on the front end and PostgreSQL as my database of choice.

Aside from that, I spend a considerable amount of time writing proposals, technical reports and - of course - stuff on https://www.opensource.com.

Greg Pittman, Louisville, KY

Greg is a retired neurologist in Louisville, Kentucky, with a long-standing interest in computers and programming, beginning with Fortran IV in the 1960s. When Linux and open source software came along, it kindled a commitment to learning more, and eventually contributing. He is a member of the Scribus Team.

Don Watkins (Correspondent), Franklinville, New York

Educator, entrepreneur, open source advocate, lifelong learner, Python teacher. M.A. in Educational Psychology, MSED in Educational Leadership, Linux system administrator. Follow me at @Don_Watkins.

Peter Czanik, Budapest

Peter is an engineer working as open source evangelist at Balabit (a One Identity business), the company that developed syslog-ng. He assists distributions to maintain the syslog-ng package, follows bug trackers, helps users and talks regularly about sudo and syslog-ng at conferences (SCALE, All Things Open, FOSDEM, LOADays, and others). In his limited free time he is interested in non-x86 architectures, and works on one of his PPC or ARM machines.

Jeremy Stanley, Kill Devil Hills, NC, USA

A long-time computer hobbyist and technology generalist, Jeremy Stanley has worked as a Unix and GNU/Linux sysadmin for nearly three decades focusing on information security, Internet services, and data center automation. He’s a root administrator for the OpenDev Collaboratory, a maintainer of the Zuul project, and serves on the OpenStack vulnerability management team. Living on a small island in the Atlantic, in his spare time he writes free software, hacks on open hardware projects and embedded platforms, restores old video game systems, and enjoys articles on math theory and cosmology.

Rikard Grossman-Nielsen, Sweden

Hello, my name is Rikard and I'm an intermediate Linux user diagnosed with ADHD and Asperger's.

On a daily basis I use Linux for Java programming, productivity, and gaming. I'm also a trained teacher, male, 39 years of age, living in Sweden. I first started using Linux in the late 90s. One of the first distros I installed was Red Hat, due to its ease of use. Today I mostly use Ubuntu and Manjaro.

I'm interested in, among other things, how Linux and open source software can be made more accessible to people with conditions like ADHD, Asperger's, and dyslexia. I use accessibility software due to my diagnoses: mostly speech synthesis to find spelling errors, and calendar software with accommodations.

I can be reached at: rikardgn@gmail.com

1 Comment

Greg Pittman | August 25, 2022

It's nice to have the same birthday as Linux!

Using eBPF for network observability in the cloud

Thu, 08/25/2022 - 15:00
Pravein Govind…

Observability is the ability to know and interpret the current state of a deployment, and a way to know when something is amiss. With cloud deployments of applications as microservices on Kubernetes and OpenShift growing, observability is getting a lot of attention. Many applications come with strict guarantees, such as service level agreements (SLA) for downtime, latency, and throughput, so network-level observability is an essential feature. Network-level observability is provided by several orchestrators, either natively or by using plugins and operators.

Recently, eBPF (extended Berkeley Packet Filter) has emerged as a popular option for implementing observability in the end host's kernel, due to its performance and flexibility. This method enables custom programs to be hooked at certain points along the network data path (for instance, a socket, TC, or XDP). Several open source eBPF-based plugins and operators have been released, and each can be plugged into end-host nodes to provide network observability through your cloud orchestrator.

Existing Observability Tools

The core component of an observability module is how it non-invasively collects the necessary data. To that end, using instrumented code and measurements, we've studied how the design of the eBPF datapath affects the performance of an observability module and of the workloads it monitors. The artifacts of our measurements are open source and available in our research Git repo. We can also provide some useful insights you can use when designing a scalable, high-performance eBPF monitoring datapath.

Here are existing open source tools available to achieve observability in the context of both the network and the host:

Skydive

Skydive is a network topology and flow analyzer. It attaches probes to nodes to collect flow-level information. The probes are attached using PCAP, AF_Packet, Open vSwitch, and so on. Instead of capturing entire packets, Skydive uses eBPF to capture flow metrics. The eBPF implementation, attached to the socket hook-point, uses a hash map to store flow headers and metrics (packets, bytes, and direction).

libebpfflow

Libebpfflow is a network visibility library that uses eBPF to provide network visibility. It hooks into various points in the host stack, such as kernel probes (inet_csk_accept, tcp_retransmit_skb) and tracepoints (net:netif_receive_skb, net:net_dev_queue), to analyze TCP/UDP traffic states, RTT, and more. In addition, it provides process and container mapping for the traffic it analyzes. Its eBPF implementation uses a perf event buffer to notify userspace of TCP state change events. For UDP, it attaches to the tracepoint of the network device queue and uses a combination of an LRU hash map and a perf event buffer to store UDP flow metrics.

eBPF Exporter

Cloudflare's eBPF Exporter provides APIs for plugging in custom eBPF code to record custom metrics of interest. It requires the entire eBPF C code (along with the hook point) to be appended to a YAML file for deployment.

Pixie

Pixie uses bpftrace to trace syscalls. It uses TCP/UDP state messages to collect the necessary information, which is then sent to Pixie Edge Module (PEM). In the PEM, the data is parsed according to the detected protocol and stored for querying.

Inspektor

Inspektor is a collection of tools for Kubernetes cluster debugging. It aids the mapping of low-level kernel primitives with Kubernetes resources. It's added as a daemonset on each node of the cluster to collect traces using eBPF for events such as syscalls. These events are written to the perf ring buffer. Finally, the ring buffer is consumed retrospectively when a fault occurs (for example, upon a pod crash).

L3AF

L3AF provides a set of eBPF programs that can be packaged and chained together using tail calls. It provides a network observability tool that mirrors traffic, based on the flow-id, to a user-space agent. Additionally, it provides an IPFIX flow exporter by storing flow records in a hash map in the eBPF datapath.

Host-INT

Host-INT extends in-band network telemetry (INT) to the host network stack. Fundamentally, INT embeds the switching delay incurred by each packet into an INT header in the packet. Host-INT does the same for the host network stack between two hosts. Host-INT has two datapath components, a source and a sink, both based on eBPF. The source runs on a TC hook of the sender host's interface, and the sink runs on an XDP hook of the receiver host's interface. At the source, it uses hash maps to store flow statistics. Additionally, it adds an INT header with an ingress/egress port, timestamps, and so on. At the sink, it uses a perf array to send statistics to a sink userspace program on each packet arrival, then passes the packet on to the kernel.

Falco

Falco is a cloud-native runtime security project. It monitors system calls using eBPF probes and parses them at runtime. Falco has provisions to configure alerts on activities such as privileged access using privileged containers, reads and writes to kernel folders, user additions, password changes, and so on. Falco comprises a userspace program, used as a CLI tool to specify alerts and obtain the parsed syscall output, and a Falco driver built over the libscap and libsinsp libraries. For syscall probes, Falco uses eBPF ring buffers.

Cilium

Observability in Cilium is enabled using eBPF. Hubble is a platform with eBPF hooks running on each node of a cluster. It helps draw insights on services communicating with each other to build a service dependency graph. It also supports Layer 7 monitoring, analyzing, for example, HTTP calls and Kafka topics, as well as Layer 4 monitoring, such as the TCP retransmission rate.

Tetragon

Tetragon is an extensible framework for security and observability in Cilium. The underlying enabler for Tetragon is eBPF, with data stored in ring buffers. Beyond monitoring, eBPF is also leveraged to enforce policy spanning various kernel components, such as the virtual file system (VFS), namespaces, and system calls.

Aquasecurity Tracee

Tracee is an event tracing tool for debugging behavioral patterns, built over eBPF. Tracee has multiple hook points, at tc, kprobes, and elsewhere, to monitor and trace network traffic. At the tc hook, it uses a perf ring buffer to submit packet-level events to userspace.

Revisiting the design of the flow metric agent

While motive and implementation differ across tools, the central component common to all observability tools is the data structure used to collect the observability metrics. Although different tools adopt different data structures, no existing performance measurements examine the impact of the data structure used to collect and store observability metrics. To bridge this gap, we implemented template eBPF programs using different data structures to collect the same flow metrics from host traffic. We use the following data structures (called maps) available in eBPF to collect and store metrics:

  1. Ring Buffer
  2. Hash
  3. Per-CPU Hash
  4. Array
  5. Per-CPU Array
Ring Buffer

A ring buffer is a shared queue between the eBPF datapath and userspace, where the eBPF datapath is the producer and the userspace program is the consumer. It can be used to send per-packet "postcards" to userspace for aggregation of flow metrics. Although this approach is simple and provides accurate results, it fails to scale: Sending a postcard for every packet keeps the userspace program in a busy loop.
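To make the pattern concrete, here is a minimal libbpf-style sketch of this approach (not the code from our repo, and the postcard fields are illustrative) that emits one postcard per packet from a tc hook:

// Sketch: per-packet "postcards" through a BPF ring buffer.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct postcard {
    __u64 ts_ns;  /* packet timestamp */
    __u32 len;    /* packet length in bytes */
};

struct {
    __uint(type, BPF_MAP_TYPE_RINGBUF);
    __uint(max_entries, 1 << 20);  /* buffer size in bytes, a power of two */
} events SEC(".maps");

SEC("tc")
int observe(struct __sk_buff *skb)
{
    /* Reserve space in the ring buffer; this fails if the consumer lags. */
    struct postcard *pc = bpf_ringbuf_reserve(&events, sizeof(*pc), 0);
    if (!pc)
        return 0;  /* postcard dropped; the packet still passes */

    pc->ts_ns = bpf_ktime_get_ns();
    pc->len = skb->len;
    bpf_ringbuf_submit(pc, 0);  /* wakes up the userspace consumer */
    return 0;
}

char LICENSE[] SEC("license") = "GPL";

A userspace consumer polls this buffer (for instance, with libbpf's ring_buffer__poll) and aggregates postcards into flows; the per-packet wakeups are exactly what keep it in a busy loop.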

Hash and Per-CPU Hash map

A (per-CPU) hash map can be used in the eBPF datapath to aggregate per-flow metrics by hashing on the flow-id (for example, the 5-tuple of IPs, ports, and protocol) and evicting the aggregated information to userspace upon flow completion or inactivity. While this approach overcomes the drawback of a ring buffer by sending postcards only once per flow rather than per packet, it has some disadvantages.

First, there is a possibility of multiple flows hashing into the same entry, leading to inaccurate aggregation of flow metrics. Second, the hash map necessarily has limited memory for the in-kernel eBPF datapath, so it can be exhausted. Thus, the userspace program has to implement eviction logic to constantly evict flows upon a timeout.
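Here is the corresponding sketch for hash-based aggregation (same includes as the previous snippet; parsing and eviction are omitted, and the struct layout is illustrative):

// Sketch: aggregate per-flow metrics in a hash map keyed on the 5-tuple.
struct flow_id {
    __u32 src_ip;
    __u32 dst_ip;
    __u16 src_port;
    __u16 dst_port;
    __u8  proto;
};

struct flow_metrics {
    __u64 packets;
    __u64 bytes;
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);  /* or BPF_MAP_TYPE_PERCPU_HASH */
    __uint(max_entries, 65536);       /* bounded in-kernel memory */
    __type(key, struct flow_id);
    __type(value, struct flow_metrics);
} flows SEC(".maps");

SEC("tc")
int aggregate(struct __sk_buff *skb)
{
    struct flow_id key = {};
    /* ... parse the 5-tuple from skb into key (omitted) ... */

    struct flow_metrics *m = bpf_map_lookup_elem(&flows, &key);
    if (m) {
        /* Atomic adds: packets on other CPUs may hit the same entry. */
        __sync_fetch_and_add(&m->packets, 1);
        __sync_fetch_and_add(&m->bytes, skb->len);
    } else {
        struct flow_metrics fresh = { .packets = 1, .bytes = skb->len };
        bpf_map_update_elem(&flows, &key, &fresh, BPF_ANY);
    }
    return 0;
}

With the per-CPU variant, each CPU writes its own copy of the entry, so the atomic operations (and their contention) go away at the cost of a merge step in userspace.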

Array-based map

A (per-CPU) array-based map can also be used to store per-packet postcards temporarily before eviction to userspace, although this is not an obvious option. Arrays offer an advantage by storing per-packet information until the array is full and flushing to userspace only at that point. This improves on the busy-loop behavior of userspace compared to using a ring buffer per packet. Additionally, arrays do not have the hash-collision problem of hash maps. However, this approach is complicated to implement, because it requires multiple redundant arrays to store per-packet postcards while the main array is flushing its contents to userspace.
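One way to sketch this variant (our assumption for illustration, reusing the postcard struct from the ring buffer snippet) is a per-CPU array used as a circular postcard store, with a one-element map holding the write index:

// Sketch: a per-CPU array as a circular store of per-packet postcards.
#define SLOTS 4096

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, SLOTS);
    __type(key, __u32);
    __type(value, struct postcard);
} postcards SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u32);
} next_slot SEC(".maps");

SEC("tc")
int store(struct __sk_buff *skb)
{
    __u32 zero = 0;
    __u32 *idx = bpf_map_lookup_elem(&next_slot, &zero);
    if (!idx)
        return 0;

    __u32 slot = *idx % SLOTS;  /* wraps when full: oldest records are overwritten */
    struct postcard pc = {
        .ts_ns = bpf_ktime_get_ns(),
        .len = skb->len,
    };
    bpf_map_update_elem(&postcards, &slot, &pc, BPF_ANY);
    (*idx)++;  /* per-CPU counter, so no atomics are needed */
    return 0;
}

Since each CPU owns its own array and index, packets never contend with one another, which matches the single-flow results discussed below.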

Measurements

So far, we have studied the options that can be used to implement flow metric collection using several data structures. Now it's time to study the performance achieved using a reference implementation of flow metric postcards using each of the above data structures. To do that, we implemented representative eBPF programs which collect flow metrics. The code we used is available on our Git repo. Further, we conducted measurements by sending traffic using a custom-built UDP-based packet generator built on top of PcapPlusPlus.

This graphic describes the experiment setting:

(Image by Kannan/Naik/Lev-Ran, CC BY-SA 4.0)

The observe agent is the eBPF datapath performing flow metric collection, hooked at the tc hook-point of the sender. We use two bare-metal servers connected over a 40G link. Packet generation is done using 40 separate cores. To put these measurements in perspective, we compare against libpcap-based tcpdump, which can be used to collect similar flow information.

Single Flow

We initially run the test with single-flow UDP frames. A single-flow test shows how large a traffic burst within one flow the observe agent can tolerate. As shown in the figure below, native performance without any observe agent is about 4.7 Mpps (million packets per second), and with tcpdump running, the throughput falls to about 2 Mpps. With eBPF, we observed that performance varies from 1.6 Mpps to 4.7 Mpps based on the data structure used to store the flow metrics. With a shared data structure such as a hash map, we observed the most significant drop in performance for a single flow, because each packet writes to the same entry in the map regardless of the CPU it originated from.

The ring buffer performs slightly better than a single hash map for a single-flow burst. Using a per-CPU hash map, we observed a good increase in throughput, because packets arriving from multiple CPUs no longer contend for the same map entry. However, the performance is still half the native performance without any observe agent. (Note that this performance is without handling hash collisions and evictions.)

With (per-CPU) arrays, we see a significant increase in the throughput of a single flow. We can attribute this to the fact that there is essentially no contention between packets, since each packet takes up a different entry in the array incrementally. However, the major drawback of our implementation is that we do not handle flushing when the array is full; instead, it performs writes in a circular fashion. Hence, it stores only the last few packet records observed at any point in time. Nevertheless, it shows us the spectrum of performance gains we can achieve by choosing the data structure in the eBPF datapath appropriately.

(Image by Kannan/Naik/Lev-Ran, CC BY-SA 4.0)

Multi-Flow

We now test the performance of the eBPF observe agents with multiple flows. We generated 40 different UDP flows (one flow per core) by instrumenting the packet generator. Interestingly, with multiple flows, we observed a stark difference in the performance of the per-CPU hash and hash maps compared to single flows. This can be attributed to the reduction in contention for a single hash entry. However, we do not see any performance improvement with the ring buffer, since regardless of the number of flows, the contention channel (the ring buffer itself) is fixed. The array performs marginally better with multiple flows.

Lessons learned

From our studies, we've derived these conclusions:

  1. Ringbuffer-based per-packet postcards are not scalable, and they affect performance.
  2. Hash Maps limit the "burstiness" of a flow, in terms of packets processed per second. Per-CPU hashmaps perform marginally better.
  3. To handle short bursts of packets within a flow, using an array map to store per-packet postcards is a good option, given that an array can store tens or hundreds of packet records. This ensures that the observe agent can tolerate short bursts without degrading performance.

In our research, we analyzed monitoring of packet-level and flow-level information between multiple hosts in the cloud. We started with the premise that the core feature of observability is how the data is collected in a non-invasive manner. With this outlook, we surveyed existing tools, and tested different methodologies of collecting observability data in the form of flow metrics from packets observed in the eBPF datapath. We studied how the performance of flows were affected by the data structure used to collect flow metrics.

Ideally, to minimize the performance drop of host traffic due to the overhead of the observability agent, our analysis points to mixed usage of per-CPU array and per-CPU hash data structures: an array to handle short bursts in flows, and a per-CPU hash map for aggregation. We're currently working on the design of an observability agent (https://github.com/netobserv/netobserv-ebpf-agent), and plan to release a future article with the design details and a performance analysis compared to existing tools.

[ Download the eBook: Manage your Linux environment for success ]

eBPF extends the Linux kernel to help you monitor the cloud.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Priyanka is a Staff Research Scientist at IBM India Research Lab. She broadly works in networked systems. Her research focuses on telecom and edge networks. Prior to joining IBM, she completed her PhD at the Indian Institute of Technology Bombay.


7 sudo myths debunked

Wed, 08/24/2022 - 15:00
Peter Czanik

Whether attending conferences or reading blogs, I often hear several misconceptions about sudo. Most of these misconceptions focus on security, flexibility, and central management. In this article, I will debunk some of these myths.

Many misconceptions likely arise because users know only the basic functionality of sudo. The sudoers file, by default, has only two rules: The root user and members of the administrative wheel group can do practically anything using sudo. There are barely any limits, and optional features are not enabled at all. Even this setup is better than sharing the root password, as you can usually trace who did what on your systems using the logs. However, learning some of the lesser-known old and new features gives you much more control and visibility on your systems.

If you only know how to give access in sudo to a specific command for a specific user or group, I would recommend reading some of my earlier articles on sudo:

As these article titles suggest, many beneficial possibilities have been available for over a decade without users noticing or using them, and sudo is still continuously developed. My responses to these common misconceptions may teach you about some new features!

Sudo configuration is stored locally, making it vulnerable

Yes, by default, configuration is stored locally. If you give users root shell or editor access, they can modify the sudoers file. On a single host, there is nothing you can do about it. Once you have multiple hosts, however, there are many ways to solve this problem.

All major configuration management platforms, including Ansible, have support to maintain the sudoers file. Even if the actual configuration is a local file, it is maintained from a central location. Any local changes can be detected, reported, and changed back automatically to the centrally managed version.
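As a sketch of that approach (the file paths are placeholders), an Ansible task can deploy the central sudoers file and refuse to install a syntactically broken one by validating it with visudo first:

- name: Deploy the centrally managed sudoers file
  ansible.builtin.copy:
    src: files/sudoers
    dest: /etc/sudoers
    owner: root
    group: root
    mode: "0440"
    validate: /usr/sbin/visudo -cf %s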

Another possibility is using LDAP (lightweight directory access protocol) to store sudo's configuration. It has quite a few limitations—for example, you cannot use aliases—but using LDAP means that the configuration is stored in a central directory, any change is effective immediately, and the local user cannot modify the settings.

Using LDAP for central configuration is difficult

If you have just a couple of freshly installed hosts, getting started with LDAP to store the sudo configuration can be difficult. However, most organizations, even with just a few hosts, already have LDAP or Active Directory (AD) running and personnel who know how to configure and maintain these directory services. Adding sudo support to an already existing directory service is not prohibitively difficult. It is even possible to have both local sudoers and LDAP sudoers, and to specify the order of evaluation, for example, LDAP first, then local, or local first, then LDAP.
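For example, the order of evaluation is controlled by the sudoers entry in nsswitch.conf (a minimal sketch; see the sudoers.ldap documentation for your platform):

# /etc/nsswitch.conf: check LDAP first, then fall back to the local file
sudoers: ldap files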

Maintaining a sudoers file on multiple hosts is error prone and a compliance problem

Yes, this is right, as long as you edit individual sudoers files by hand. However, as suggested in my response to the previous myth, even with a very low host count, most organizations introduce some kind of directory services, such as LDAP or AD, and configuration management. You can use a directory service to store the sudo configuration centrally, or you can use Ansible and other configuration management applications to maintain the sudoers files on your hosts from a central configuration repository.

The sudo codebase is too large

Yes, it is large. Some even call it a Death Star and say that a large codebase also means it is insecure. There are smaller software projects; however, they implement only a very basic subset of sudo's functionality. Using them, you lose a lot of visibility into what is happening on your systems (just think about session recording). Commercial sudo replacements might implement many sudo features. However, sudo is open source and one of the most analyzed open source codebases. Commercial codebases are even larger—and not analyzed by third parties.

Shell access visibility is tricky

Using just the default settings, shell or editor access makes it hard to see what's happening inside a shell session. However, session recordings have been able to make visible what happened inside a shell session for well over a decade. Version 1.9.0 of sudo introduced a central collection of session recordings, so they could not be deleted or modified by the local user. Version 1.9.8 also includes subcommand logging. You can use the logs to check any commands executed in a sudo session and only watch recordings when necessary (for example, if a user starts Midnight Commander). Watching session recordings is tedious and can be very time consuming—some people even have three-day-long sudo sessions—so reviewing logs whenever possible is definitely preferable.
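As a small sketch of what turning this on looks like (the option names are from the sudoers manual; the server name and port are placeholders), two Defaults lines enable recording and ship it to a central sudo_logsrvd:

Defaults log_input, log_output
Defaults log_servers = logsrv.example.com:30344

You can then list and replay recorded sessions with the sudoreplay utility.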

You can't use two-factor authentication in sudo

That is right: There is no out-of-the-box two-factor authentication (2FA) in sudo. However, you can implement 2FA using Linux PAM. Or, if you prefer, you can do it inside sudo. Sudo has a modular architecture and thus can be extended. Version 1.9 of sudo introduced the approval plugin API (application programming interface), making it possible to have additional restrictions before executing a command. You can code your approval plugin in either C or Python and implement 2FA yourself.
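To illustrate the PAM route (assuming the Google Authenticator PAM module is installed; file locations vary by distribution), a single line added to sudo's PAM stack is enough to require a time-based one-time password:

# /etc/pam.d/sudo
auth required pam_google_authenticator.so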

Sudo logs do not improve security

If you collect log messages only locally and you do not check them at all, then log messages do not improve security. However, even syslogd, the original syslog implementation from more than three decades ago, supported central log collection. Removing sudo logs from a remote host or a cloud service is not as easy as modifying local logs.

There is also built-in support for central logging in sudo. Using sudo_logsrvd, you can collect not only session recordings but event logs as well. On the receiving end, sudo_logsrvd can forward events to syslog (the default) or maintain its own log files.

Any questions?

I hope my article helped to resolve some of the myths surrounding sudo. If you have any sudo questions, do not hesitate to reach out to the sudo users mailing list.

The most common misconceptions I've come across involve security, flexibility, and central management. Here, I debunk these sudo myths.

(Image by Internet Archive Book Images, modified by Opensource.com, CC BY-SA 4.0)

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Your guide to DistSQL's cluster governance capability

Wed, 08/24/2022 - 15:00
Raigor Jiang

Apache ShardingSphere 5.0.0-Beta, with DistSQL, made the project even more beloved by developers and ops teams for its advantages, such as changes taking effect dynamically, no restarts, and an elegant syntax close to standard SQL. With the upgrades to 5.0.0 and 5.1.0, the ShardingSphere community has once again added abundant syntax to DistSQL, bringing more practical features.

In this article, the community co-authors will share the latest functions of DistSQL from the perspective of cluster governance.

ShardingSphere clusters

In a typical cluster composed of ShardingSphere-Proxy, there are multiple compute nodes and storage nodes, as shown in the figure below.

(Image by Jiang Longtao and Lan Chengxiang, CC BY-SA 4.0)

To make it easier to understand, in ShardingSphere, we refer to proxy as a compute node and proxy-managed distributed database resources (such as ds_0 or ds_1) as resources or storage nodes.

Multiple proxy or compute nodes are connected to the same register center. They share configuration and rules, and they can sense each other's online status. These compute nodes also share the underlying storage nodes, so they can perform read and write operations to the storage nodes at the same time. The user application is connected to any compute node and can perform equivalent operations.

Through this cluster architecture, you can quickly scale proxy horizontally when compute resources are insufficient, reducing the risk of a single point of failure and improving system availability. The load-balancing mechanism can also be added between the application and compute node.

Compute node governance

Compute node governance is suitable for cluster mode. For more information about the ShardingSphere modes, please see Your detailed guide to Apache ShardingSphere's operating modes.

Cluster preparation

Take a standalone simulation of three proxy compute nodes as an example. To use cluster mode, use the configuration below:

mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: governance_ds
      server-lists: localhost:2181
      retryIntervalMilliseconds: 500
      timeToLiveSeconds: 60
      maxRetries: 3
      operationTimeoutMilliseconds: 500
  overwrite: false

Execute the bootup command separately:

sh %SHARDINGSPHERE_PROXY_HOME%/bin/start.sh 3307
sh %SHARDINGSPHERE_PROXY_HOME%/bin/start.sh 3308
sh %SHARDINGSPHERE_PROXY_HOME%/bin/start.sh 3309

After the three proxy instances are successfully started, the compute node cluster is ready.

SHOW INSTANCE LIST

Use the client to connect to any compute node, such as 3307:

mysql -h 127.0.0.1 -P 3307 -u root -p

View the list of instances using SHOW INSTANCE LIST:

mysql> SHOW INSTANCE LIST;
+----------------+-----------+------+---------+
| instance_id    | host      | port | STATUS  |
+----------------+-----------+------+---------+
| 10.7.5.35@3309 | 10.7.5.35 | 3309 | enabled |
| 10.7.5.35@3308 | 10.7.5.35 | 3308 | enabled |
| 10.7.5.35@3307 | 10.7.5.35 | 3307 | enabled |
+----------------+-----------+------+---------+

The above fields mean:

  • instance_id: The id of the instance, which is currently composed of host and port
  • host: Host address
  • port: Port number
  • status: The status of the instance, either enabled or disabled
DISABLE INSTANCE

Use a DISABLE INSTANCE statement to set the specified compute node to a disabled state. The statement does not terminate the process of the target instance but only virtually deactivates it.

DISABLE INSTANCE supports the following syntax forms:

DISABLE INSTANCE 10.7.5.35@3308;
#or
DISABLE INSTANCE IP=10.7.5.35, PORT=3308;

For example:

mysql> DISABLE INSTANCE 10.7.5.35@3308;
Query OK, 0 rows affected (0.02 sec)
mysql> SHOW INSTANCE LIST;
+----------------+-----------+------+----------+
| instance_id    | host      | port | STATUS   |
+----------------+-----------+------+----------+
| 10.7.5.35@3309 | 10.7.5.35 | 3309 | enabled  |
| 10.7.5.35@3308 | 10.7.5.35 | 3308 | disabled |
| 10.7.5.35@3307 | 10.7.5.35 | 3307 | enabled  |
+----------------+-----------+------+----------+

After executing the DISABLE INSTANCE statement and querying again, you can see that the instance status of port 3308 has been updated to disabled, indicating that the compute node has been disabled.

If there is a client connected to 10.7.5.35@3308, executing any SQL statement will prompt an exception:

1000 - Circuit break mode is ON.

You are not allowed to disable the compute node you are currently connected to. For example, if you are connected to 10.7.5.35@3309 and execute DISABLE INSTANCE 10.7.5.35@3309, you will receive an exception prompt.

ENABLE INSTANCE

Use an ENABLE INSTANCE statement to set the specified compute node to an enabled state. ENABLE INSTANCE supports the following syntax forms:

ENABLE INSTANCE 10.7.5.35@3308;
#or
ENABLE INSTANCE IP=10.7.5.35, PORT=3308;

For example:

mysql> SHOW INSTANCE LIST;
+----------------+-----------+------+----------+
| instance_id    | host      | port | STATUS   |
+----------------+-----------+------+----------+
| 10.7.5.35@3309 | 10.7.5.35 | 3309 | enabled  |
| 10.7.5.35@3308 | 10.7.5.35 | 3308 | disabled |
| 10.7.5.35@3307 | 10.7.5.35 | 3307 | enabled  |
+----------------+-----------+------+----------+
mysql> ENABLE INSTANCE 10.7.5.35@3308;
Query OK, 0 rows affected (0.01 sec)
mysql> SHOW INSTANCE LIST;
+----------------+-----------+------+----------+
| instance_id    | host      | port | STATUS   |
+----------------+-----------+------+----------+
| 10.7.5.35@3309 | 10.7.5.35 | 3309 | enabled  |
| 10.7.5.35@3308 | 10.7.5.35 | 3308 | enabled  |
| 10.7.5.35@3307 | 10.7.5.35 | 3307 | enabled  |
+----------------+-----------+------+----------+

After executing the ENABLE INSTANCE statement, you can query again and see that the instance status of port 3308 has been restored to enabled.

How to manage compute node parameters

In our article Integrating SCTL into DISTSQL's RAL: Making Apache ShardingSphere perfect for database management, we explained the evolution of ShardingSphere control language (SCTL) to resource and rule administration language (RAL) and the new SHOW VARIABLE and SET VARIABLE syntax.

However, in 5.0.0-Beta, the VARIABLE category of DistSQL RAL contained only the following three statements:

SET VARIABLE TRANSACTION_TYPE = xx; (LOCAL, XA, BASE)
SHOW VARIABLE TRANSACTION_TYPE;
SHOW VARIABLE CACHED_CONNECTIONS;

By listening to the community's feedback, we noticed that querying and modifying the props configuration of proxy (located in server.yaml) is also a frequent operation. Therefore, we have added support for props configuration in DistSQL RAL since the 5.0.0 GA version.

SHOW VARIABLE

First, we'll review how to configure props:

props:
  max-connections-size-per-query: 1
  kernel-executor-size: 16  # Infinite by default.
  proxy-frontend-flush-threshold: 128  # The default value is 128.
  proxy-opentracing-enabled: false
  proxy-hint-enabled: false
  sql-show: false
  check-table-metadata-enabled: false
  show-process-list-enabled: false
  # Proxy backend query fetch size. A larger value may increase the memory usage of ShardingSphere Proxy.
  # The default value is -1, which means set the minimum value for different JDBC drivers.
  proxy-backend-query-fetch-size: -1
  check-duplicate-table-enabled: false
  proxy-frontend-executor-size: 0  # Proxy frontend executor size. The default value is 0, which means let Netty decide.
  # Available options of proxy backend executor suitable: OLAP(default), OLTP. The OLTP option may reduce time cost of writing packets to client, but it may increase the latency of SQL execution
  # and block other clients if client connections are more than `proxy-frontend-executor-size`, especially executing slow SQL.
  proxy-backend-executor-suitable: OLAP
  proxy-frontend-max-connections: 0  # Less than or equal to 0 means no limitation.
  sql-federation-enabled: false
  # Available proxy backend driver type: JDBC (default), ExperimentalVertx
  proxy-backend-driver-type: JDBC

Now, you can perform interactive queries by using the following syntax:

SHOW VARIABLE PROXY_PROPERTY_NAME;

For example:

mysql> SHOW VARIABLE MAX_CONNECTIONS_SIZE_PER_QUERY;
+--------------------------------+
| max_connections_size_per_query |
+--------------------------------+
| 1                              |
+--------------------------------+
1 row in set (0.00 sec)
mysql> SHOW VARIABLE SQL_SHOW;
+----------+
| sql_show |
+----------+
| FALSE    |
+----------+
1 row in set (0.00 sec)
……

Note: For DistSQL syntax, parameter keys are separated by underscores.

SHOW ALL VARIABLES

Since there are plenty of parameters in proxy, you can also query all parameter values through SHOW ALL VARIABLES:

mysql> SHOW ALL VARIABLES;
+---------------------------------------+----------------+
| variable_name                         | variable_value |
+---------------------------------------+----------------+
| sql_show                              | FALSE          |
| sql_simple                            | FALSE          |
| kernel_executor_size                  | 0              |
| max_connections_size_per_query        | 1              |
| check_table_metadata_enabled          | FALSE          |
| proxy_frontend_database_protocol_type |                |
| proxy_frontend_flush_threshold        | 128            |
| proxy_opentracing_enabled             | FALSE          |
| proxy_hint_enabled                    | FALSE          |
| show_process_list_enabled             | FALSE          |
| lock_wait_timeout_milliseconds        | 50000          |
| proxy_backend_query_fetch_size        | -1             |
| check_duplicate_table_enabled         | FALSE          |
| proxy_frontend_executor_size          | 0              |
| proxy_backend_executor_suitable       | OLAP           |
| proxy_frontend_max_connections        | 0              |
| sql_federation_enabled                | FALSE          |
| proxy_backend_driver_type             | JDBC           |
| agent_plugins_enabled                 | FALSE          |
| cached_connections                    | 0              |
| transaction_type                      | LOCAL          |
+---------------------------------------+----------------+
21 rows in set (0.01 sec)

SET VARIABLE

Dynamic management of resources and rules is a special advantage of DistSQL. Now you can also dynamically update props parameters by using the SET VARIABLE statement. For example:

#Enable SQL log output
SET VARIABLE SQL_SHOW = true;
#Turn on hint function
SET VARIABLE PROXY_HINT_ENABLED = true;
#Open federal query
SET VARIABLE SQL_FEDERATION_ENABLED = true;
……

The SET VARIABLE statement can modify the following parameters, but the new value takes effect only after the proxy restart:

  • kernel_executor_size
  • proxy_frontend_executor_size
  • proxy_backend_driver_type

The following parameters are read-only and cannot be modified:

  • cached_connections

Other parameters will take effect immediately after modification.

How to manage storage nodes

In ShardingSphere, storage nodes are not directly bound to compute nodes. One storage node may play different roles in different schemas at the same time, in order to implement different business logic. Storage nodes are always associated with a schema.

For DistSQL, storage nodes are managed through RESOURCE-related statements, including:

  • ADD RESOURCE
  • ALTER RESOURCE
  • DROP RESOURCE
  • SHOW SCHEMA RESOURCES
Schema preparation

RESOURCE-related statements only work on schemas, so before operating, you need to create a schema and run the USE command to select it:

DROP DATABASE IF EXISTS sharding_db;
CREATE DATABASE sharding_db;
USE sharding_db;

ADD RESOURCE

ADD RESOURCE supports the following syntax forms:

  • Specify HOST, PORT, DB
ADD RESOURCE resource_0 (
HOST=127.0.0.1,
PORT=3306,
DB=db0,
USER=root,
PASSWORD=root
);
  • Specify URL
ADD RESOURCE resource_1 (
URL="jdbc:mysql://127.0.0.1:3306/db1?serverTimezone=UTC&useSSL=false",
USER=root,
PASSWORD=root
);

The above two syntax forms support the extension parameter PROPERTIES, which is used to specify the attribute configuration of the connection pool between the proxy and the storage node.

For example:

ADD RESOURCE resource_2 (
HOST=127.0.0.1,
PORT=3306,
DB=db2,
USER=root,
PASSWORD=root,
PROPERTIES("maximumPoolSize"=10)
),resource_3 (
URL="jdbc:mysql://127.0.0.1:3306/db3?serverTimezone=UTC&useSSL=false",
USER=root,
PASSWORD=root,
PROPERTIES("maximumPoolSize"=10,"idleTimeout"="30000")
);

Specifying Java Database Connectivity (JDBC) connection parameters, such as useSSL, is supported only with URL form.

ALTER RESOURCE

Use ALTER RESOURCE to modify the connection information of storage nodes, such as changing the size of a connection pool or modifying JDBC connection parameters.

Syntactically, ALTER RESOURCE is identical to ADD RESOURCE.

ALTER RESOURCE resource_2 (
HOST=127.0.0.1,
PORT=3306,
DB=db2,
USER=root,
PROPERTIES("maximumPoolSize"=50)
),resource_3 (
URL="jdbc:mysql://127.0.0.1:3306/db3?serverTimezone=GMT&useSSL=false",
USER=root,
PASSWORD=root,
PROPERTIES("maximumPoolSize"=50,"idleTimeout"="30000")
);

Since modifying the storage node may cause metadata changes or application data exceptions, ALTER RESOURCE cannot be used to modify the target database of the connection. Only the following values can be modified:

  • User name
  • User password
  • PROPERTIES connection pool parameters
  • JDBC parameters
DROP RESOURCE

Use DROP RESOURCE to delete storage nodes from a schema without deleting any data in the storage node. The statement example is as follows:

DROP RESOURCE resource_0, resource_1;

To ensure data correctness, the storage node referenced by the rule cannot be deleted.

t_order is a sharding table, and its actual tables are distributed across resource_0 and resource_1. When resource_0 and resource_1 are referenced by t_order's sharding rules, they cannot be deleted.
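For example, with the t_order rule above in place, an attempt to drop a referenced resource is rejected (the exact error message varies by version):

# Fails while t_order's sharding rule still references resource_0
DROP RESOURCE resource_0;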

SHOW SCHEMA RESOURCES

SHOW SCHEMA RESOURCES is used to query storage nodes in schemas and supports the following syntax forms:

#Query the storage node in the current schema
SHOW SCHEMA RESOURCES;
#Query the storage node in the specified schema
SHOW SCHEMA RESOURCES FROM sharding_db;

For example, add four storage nodes through the ADD RESOURCE command, and then execute a query:

(Image by Jiang Longtao and Lan Chengxiang, CC BY-SA 4.0)

There are many columns in the query result, but here we only show part of them.

Conclusion

In this article, we have introduced you to the ways you can dynamically manage storage nodes through DistSQL.

Unlike modifying YAML files, executing DistSQL statements happens in real time, and there is no need to restart the proxy or compute node, making online operations safer. Changes executed through DistSQL can be synchronized to other compute nodes in the cluster in real time through the register center. The client connected to any compute node can also query changes of storage nodes in real time.

If you have any questions or suggestions about Apache ShardingSphere, please open an issue on the GitHub issue list. If you are interested in contributing to the project, you're very welcome to join the Apache ShardingSphere community.

Apache ShardingSphere Project Links:

This article originally appeared on FAUN and is republished with permission.

A feature update to Apache ShardingSphere enhances the dynamic management of storage nodes.

(Image by Jason Baker, CC BY-SA 4.0)

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How I migrated to NetworkManager keyfiles for configuration

Tue, 08/23/2022 - 15:00
David Both

NetworkManager was introduced in 2004 to make network configuration more flexible and dynamic. The old SystemV startup shell scripts, of which the interface configuration files were a part, were incapable of handling WiFi, wired, VPNs, broadband modems, and more—or at least incapable of doing it quickly or efficiently.

In a series of articles, I've written about why I'm a fan of NetworkManager and how I've used it. In part 1, I looked at what NetworkManager does and some of the tools it provides for viewing network connections and devices. In that article, I mentioned that NetworkManager does not need interface configuration files for most hosts. However, it can create its own ini-style configuration files, and it recognizes the older network interface configuration files. The NetworkManager configuration files are officially called keyfiles. In part 2, I looked at the deprecated interface configuration files and how to configure them, should you still be using them.

Support for the deprecated ifcfg files is no longer provided by default for new installations beginning with Fedora 36. NetworkManager will continue to use them on systems that have been upgraded from earlier versions of Fedora to release 36—at least for a while longer. Still, it is not a good idea at this late stage to depend on deprecated ifcfg configuration files. So for part 3 of this series, I will demonstrate migrating existing interface configuration files to NetworkManager keyfiles using the command-line tool provided. I will also look at using both command-line and GUI tools to create new keyfiles from scratch and compare them for ease of use.

The migration is considerably more straightforward than it sounds. I used the nmcli connection migrate command on the two systems I needed to migrate: one with a single network interface card (NIC) and one, my router/firewall, with three NICs. After some extensive testing on a VM, the command worked perfectly the first time on both production hosts. That's it: no other commands, options, or arguments required. And it is fast, taking much less than one second on both hosts.

Why should I migrate my files?

Most of the restrictions of the old shell scripts lay in the structure—or lack thereof—of the ifcfg files. NetworkManager introduced the new network connection keyfiles to overcome those issues. But until Fedora 36, it still recognized the old ifcfg configuration files. Now, NetworkManager no longer creates or supports ifcfg files for new installations.

I experimented with NetworkManager on a new Fedora 36 installation and could not convince it to use newly created ifcfg files. It continued to treat the interfaces as dynamic host configuration protocol (DHCP) connections and obtained its configuration values from the DHCP server. The ifcfg files are no longer supported on new installations because the NetworkManager-initscripts-ifcfg-rh package, which contains the tools needed to use the ifcfg files, is no longer installed. Hosts upgraded from older releases of Fedora will still have the NetworkManager-initscripts-ifcfg-rh package installed, so it will, for the time being, be upgraded along with the rest of the installation to Fedora 36. This may not be true in the future.

If you are using DHCP configuration for your network hosts, you do not need to migrate any ifcfg files. In fact, you can simply delete them, if they still exist, and NetworkManager will manage the network connections. Personally, I prefer to move deprecated files like these to an archive subdirectory in /root so that I can find them later, just in case.
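A minimal sketch of that archival step (the directory name is my own convention, not anything NetworkManager requires):

[root@myserver ~]# mkdir -p /root/ifcfg-archive
[root@myserver ~]# mv /etc/sysconfig/network-scripts/ifcfg-* /root/ifcfg-archive/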

All hosts with static connections should be migrated. This usually includes servers, firewalls, and other hosts that may need to perform their network functions without the DHCP server being active. I have two hosts like this: my main server and my firewall/router.

My experiments

When NetworkManager officially deprecated the interface configuration files located in /etc/sysconfig/network-scripts, it did not immediately stop using them, but the update procedure did drop in a readme file, /etc/sysconfig/network-scripts/readme-ifcfg-rh.txt. This short file states explicitly that the ifcfg-style files are deprecated. It also provides a simple command that performs the migration for us.
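You can display it with cat:

[root@myserver ~]# cat /etc/sysconfig/network-scripts/readme-ifcfg-rh.txt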

I suggest you read that file on your host and then experiment in a non-production environment. I used a VM for my experiments and learned a lot. Before I started making changes, I displayed the connection data shown below to get the current state of the network connection.

[root@myserver ~]# nmcli
enp0s3: connected to Wired connection 1
        "Intel 82540EM"
        ethernet (e1000), 08:00:27:07:CD:FE, hw, mtu 1500
        ip4 default
        inet4 192.168.0.136/24
        route4 192.168.0.0/24 metric 100
        route4 default via 192.168.0.254 metric 100

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 192.168.0.52 8.8.8.8 8.8.4.4
        domains: example.org
        interface: enp0s3

I created a simple ifcfg file that defines a static configuration on one of my VMs, then tested it to verify that the static config worked properly. Here is the ifcfg-enp0s3 file I created for this testing:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
# HWADDR=08:00:27:07:CD:FE
IPADDR=192.168.0.95
PREFIX=24
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=enp0s3
ONBOOT=yes
DNS1=192.168.0.52
DNS2=8.8.8.8
AUTOCONNECT_PRIORITY=-999
DEVICE=enp0s3

I commented out the hardware address in the ifcfg-enp0s3 file because it does not seem necessary. I tried it both ways, and it works just as well either way—once I finally got it working at all. NetworkManager completely ignored the contents of this file until I installed the NetworkManager-initscripts-ifcfg-rh package. After that, NetworkManager was able to set the network configuration from this ifcfg-enp0s3 file.

Then it was time to try the migration tool. I ran the command shown below to migrate the ifcfg file to a keyfile.

[root@myserver system-connections]# nmcli connection migrate
Connection 'Wired connection 1' (c7b11d30-522e-306f-8622-527119911afc) successfully migrated.
[root@myserver system-connections]#

This command took less than a second. It creates the new keyfile and then deletes the ifcfg file, so I suggest making a copy of the original ifcfg file before running this migration tool. In my case, it created the /etc/NetworkManager/system-connections/enp0s3.nmconnection file. Without a connection specified, this command migrates all ifcfg files located in /etc/sysconfig/network-scripts. If a host has multiple NICs and corresponding ifcfg files, only some of which you want to migrate, you can specify a list of connections to migrate, as shown below.
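A selective migration just names the connections; a sketch using the connection name from my VM:

[root@myserver ~]# nmcli connection migrate "Wired connection 1"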

The keyfiles can be modified using your favorite editor. I tried this by changing the IPADDR entry and restarting NetworkManager just to make sure it worked. The nmcli connection reload command did not work for me. Making changes directly to the keyfiles using an editor is not recommended, but it does work. To be honest, many experienced sysadmins (like me) really prefer editing ASCII text configuration files directly, so—recommended or not—that is how I do things most of the time. I just like to know what is actually in those files so I can recognize when something is wrong with them. It helps with solving configuration problems.
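When I do edit a keyfile by hand, restarting NetworkManager is what reliably applies the change; the whole round trip is just this (file name from the migration above):

[root@myserver ~]# vi /etc/NetworkManager/system-connections/enp0s3.nmconnection
[root@myserver ~]# systemctl restart NetworkManager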

Doing it for real

After a day of experimenting so that I fully understood how this all works and how to recover in case it fails, I was ready to do it for real. I chose my main server for this initial attempt because it only has a single NIC, which will make it faster to get back online if there is a problem.

First, I copied the file /etc/sysconfig/network-scripts/ifcfg-enp0s31f6 shown below to /root as a backup. The nmcli connection migrate command can make the conversion back from keyfile to ifcfg file, but why bother when I can have an exact backup ready to drop back in?

HWADDR=e0:d5:5e:a2:de:a4
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPADDR=192.168.0.52
PREFIX=24
GATEWAY=192.168.0.254
DOMAIN=example.org
IPV6INIT=no
DNS1=192.168.0.52
DNS2=8.8.8.8
DNS3=8.8.4.4
IPV4_FAILURE_FATAL=no
IPV6INIT=no
PEERROUTES=no
NAME="enp0s31f6"
ONBOOT=yes
AUTOCONNECT_PRIORITY=-999
DEVICE="enp0s31f6"
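The backup itself was just a copy command:

[root@myserver ~]# cp /etc/sysconfig/network-scripts/ifcfg-enp0s31f6 /root/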

After running the nmcli connection migrate command, I verified that it emitted the status line indicating that the conversion took place, which it did. I next verified that the ifcfg file was gone and the /etc/NetworkManager/system-connections/enp0s31f6.nmconnection keyfile was in place:

[connection]
id=enp0s31f6
uuid=abf4c85b-57cc-4484-4fa9-b4a71689c359
type=ethernet
autoconnect-priority=-999
interface-name=enp0s31f6

[ethernet]
mac-address=E0:D5:5E:A2:DE:A4

[ipv4]
address1=192.168.0.52/24,192.168.0.254
dns=192.168.0.52;8.8.8.8;8.8.4.4;
dns-search=example.org;
ignore-auto-routes=true
method=manual

[ipv6]
addr-gen-mode=stable-privacy
method=ignore
never-default=true

[proxy]

This file will not be used until NetworkManager is restarted or the host is rebooted. I first restarted NetworkManager and then checked the result, as shown below. The network configuration looks correct:

[root@myserver ~]# nmcli
enp0s31f6: connected to enp0s31f6
        "Intel I219-V"
        ethernet (e1000e), E0:D5:5E:A2:DE:A4, hw, mtu 1500
        ip4 default
        inet4 192.168.0.52/24
        route4 default via 192.168.0.254 metric 100
        route4 192.168.0.0/24 metric 100

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 192.168.0.52 8.8.8.8 8.8.4.4
        domains: example.org
        interface: enp0s31f6

After a complete reboot, I verified the network configuration again, and it looked identical to the output above. With that working, I removed the NetworkManager-initscripts-ifcfg-rh package and rebooted again, just because it can't hurt to verify everything.

Once I knew that the migration tool works on one of my production systems, and an important one at that, I was ready to do this on my firewall/router, the one with three NICs. I ran the same nmcli connection migrate command on that host and verified the results. After ensuring all was working correctly, I used DNF to remove the NetworkManager-initscripts-ifcfg-rh package from both production hosts. And I tested with a couple more reboots of each host just to ensure nothing got borked during the removal of the initscripts package.
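For reference, the removal is a single DNF command on each host:

[root@myserver ~]# dnf remove NetworkManager-initscripts-ifcfg-rh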

More Linux resources Linux commands cheat sheet Advanced Linux commands cheat sheet Free online course: RHEL technical overview Linux networking cheat sheet SELinux cheat sheet Linux common commands cheat sheet What are Linux containers? Our latest Linux articles

What if I don't have ifcfg files?

New installations of Fedora don't create any type of network interface configuration files. The default is for NetworkManager to handle network interfaces as DHCP connections. So you don't need to do anything for hosts that use DHCP to obtain their network configuration information.

However, you may need to create a static configuration for some new hosts even when you don't have a deprecated ifcfg file to migrate.

Reverting to DHCP

Reverting to DHCP is easy. Just remove the keyfile for the desired connection from /etc/NetworkManager/system-connections/ and restart NetworkManager. "Remove" can mean moving the file somewhere else or just deleting it.

In preparation for my next series of experiments in creating new keyfiles, I moved the enp0s31f6.nmconnection keyfile to /root and restarted NetworkManager.
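On my server, those two steps looked like this:

[root@myserver ~]# mv /etc/NetworkManager/system-connections/enp0s31f6.nmconnection /root/
[root@myserver ~]# systemctl restart NetworkManager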

Creating new keyfiles

Although the ip command can still be used to modify network interface settings in a live environment, those changes are not persistent across a reboot. Changes made using NetworkManager tools such as nmcli or nmtui, the GUI NetworkManager connection editor (nm-connection-editor), or your favorite text editor are persistent. The connection editor is available for Fedora on the system tray for each of the desktops I tried—Xfce, Cinnamon, LXDE, KDE Plasma—and probably the rest of the desktops I haven't yet tried.

Text editor

Assuming you are familiar with the keyfile structure, syntax, and variables, creating or modifying keyfiles from scratch is possible with just an ASCII text editor. As much as I appreciate and use that capability, using one of the three tools provided is usually much simpler.

Using nmtui

The nmtui tool (NetworkManager Text User Interface) is my second choice of this trio. I find the interface cumbersome, unattractive, and unintuitive. This tool is not installed by default, and I probably would not have installed it if I were not writing this article.

However, it does work, and it created a keyfile for me that was essentially identical to the one created by the GUI Connection Manager I discuss below. The only differences I found (using the diff command, of course) were the timestamp field in the file and one different selection I intentionally made when configuring the connection. The interface does provide some clues about the data you need to supply to create a working keyfile.

Start this tool by entering the command nmtui on the command line. In general, the arrow keys allow movement between the fields on the displayed pages, and the Enter key selects an item to modify or add. The Page Up/Page Down keys scroll the page. Select Edit a connection and press Enter to create a new keyfile.

Image by:

(David Both, CC BY-SA 4.0)

After wending my way through the interface, I arrived at the Edit Connection page. It was not clear to me from this interface that the CIDR prefix should be appended to the IP address, but I did that anyway, and it worked. Fill in the appropriate data on this page to configure the interface. Notice that I have disabled IPV6.

Image by:

(David Both, CC BY-SA 4.0)

Next, scroll down to the bottom of the page using the keyboard and press OK to save the keyfile. The keyfile is saved immediately, but NetworkManager must be restarted to activate this file, whether new or changed. Although this is not my favorite interface for creating and managing NetworkManager keyfiles, I plan to use it when the GUI Connection Editor is unavailable, such as when working on a remote host.

Using nmcli

I have used the nmcli tool (the NetworkManager command-line interface) to configure an interface in the past, and this tool also works very well. I just like it the least because it requires the most typing, along with reading the man page and online references. Executing the command immediately creates the interface configuration file in the /etc/NetworkManager/system-connections/ directory.

The command shown below adds the needed keyfile, just like the other tools.

[root@myserver system-connections]# nmcli connection add con-name enp0s3-Wired ifname enp0s3 type ethernet ipv4.method manual ipv4.addresses 192.168.0.136/24 ipv4.gateway 192.168.0.254 ipv4.dns 192.168.0.52,8.8.8.8,8.8.4.4 ipv4.dns-search example.org ipv6.method disabled
Connection 'enp0s3-Wired' (67d3a3c1-3d08-474b-ae91-a1005f323459) successfully added.
[root@myserver system-connections]# cat enp0s3-Wired.nmconnection
[connection]
id=enp0s3-Wired
uuid=67d3a3c1-3d08-474b-ae91-a1005f323459
type=ethernet
interface-name=enp0s3

[ethernet]

[ipv4]
address1=192.168.0.136/24,192.168.0.254
dns=192.168.0.52;8.8.8.8;8.8.4.4;
dns-search=example.org;
method=manual

[ipv6]
addr-gen-mode=stable-privacy
method=disabled

[proxy]
[root@myserver system-connections]#

One of the assistance tools available while using nmcli connection add is Bash tab completion, which shows the available options and properties:

[root@myserver system-connections]# nmcli connection add
autoconnect                        ifname                             ipv6.dhcp-send-hostname
con-name                           ipv4.addresses                     ipv6.dhcp-timeout
connection.auth-retries            ipv4.dad-timeout                   ipv6.dns
connection.autoconnect             ipv4.dhcp-client-id                ipv6.dns-options
connection.autoconnect-priority    ipv4.dhcp-fqdn                     ipv6.dns-priority
connection.autoconnect-retries     ipv4.dhcp-hostname                 ipv6.dns-search
connection.autoconnect-slaves      ipv4.dhcp-hostname-flags           ipv6.gateway
connection.dns-over-tls            ipv4.dhcp-iaid                     ipv6.ignore-auto-dns
connection.gateway-ping-timeout    ipv4.dhcp-reject-servers           ipv6.ignore-auto-routes
connection.id                      ipv4.dhcp-send-hostname            ipv6.ip6-privacy
connection.interface-name          ipv4.dhcp-timeout                  ipv6.may-fail
connection.lldp                    ipv4.dhcp-vendor-class-identifier  ipv6.method
connection.llmnr                   ipv4.dns                           ipv6.never-default
connection.master                  ipv4.dns-options                   ipv6.ra-timeout
connection.mdns                    ipv4.dns-priority                  ipv6.required-timeout
connection.metered                 ipv4.dns-search                    ipv6.route-metric
connection.mud-url                 ipv4.gateway                       ipv6.routes
connection.multi-connect           ipv4.ignore-auto-dns               ipv6.route-table
connection.permissions             ipv4.ignore-auto-routes            ipv6.routing-rules
connection.read-only               ipv4.may-fail                      ipv6.token
connection.secondaries             ipv4.method                        master
connection.slave-type              ipv4.never-default                 match.driver
connection.stable-id               ipv4.required-timeout              match.interface-name
connection.timestamp               ipv4.route-metric                  match.kernel-command-line
connection.type                    ipv4.routes                        match.path
connection.uuid                    ipv4.route-table                   proxy.browser-only
connection.wait-device-timeout     ipv4.routing-rules                 proxy.method
connection.zone                    ipv6.addresses                     proxy.pac-script
help                               ipv6.addr-gen-mode                 proxy.pac-url
hostname.from-dhcp                 ipv6.dhcp-duid                     slave-type
hostname.from-dns-lookup           ipv6.dhcp-hostname                 tc.qdiscs
hostname.only-from-default         ipv6.dhcp-hostname-flags           tc.tfilters
hostname.priority                  ipv6.dhcp-iaid                     type
[root@myserver system-connections]# nmcli connection add

I typically prefer the command line for most tasks. However, the complexity of getting this command's syntax and options correct means that I must always use the man page and research the command before I issue it. That takes time. Even then, nmcli complained about things I missed or got incorrect. And even when it did not throw an error, it created keyfiles that worked poorly, if at all. For example, one connection worked when I would SSH out from the test VM, but I could not SSH into the test VM. I am still not sure what the problem was, but that keyfile had the wrong CIDR prefix for the IP address. I eventually got the command correct by referring to the examples on the nmcli-examples(7) manual page.

When this is the only available method, I can do it, but it is my least preferred tool.

Using the GUI NetworkManager connection editor

I have used one of my laptops for parts of this section to show both wired and wireless connections. Although I typically prefer command-line tools, I like this GUI NetworkManager connection editor best of the three available tools. It is easy to use, intuitive, provides fast access to any configuration item that might be needed, and is directly available in the desktop system tray of all the desktops I have tried.

Just right-click on the network icon, the one that looks like a pair of computers, in the system tray. Then choose Edit Connections.

Image by:

(David Both, CC BY-SA 4.0)

This opens the connection editing window, as pictured below. Double-click the desired connection from the connection list, usually Wired Connection 1 or a WiFi SSID. The illustration below shows both wired and wireless connections open for editing on one of my laptops. I have never needed to edit a wireless connection because the ones I connect to always use DHCP for configuration. It is possible to require static addressing for wireless connections, but I have never encountered that.

Image by:

(David Both, CC BY-SA 4.0)

The Ethernet tab of the Editing Wired Connection 1 dialog window shows the device name enp111s0 for this laptop. In most cases, nothing on this page needs to be changed.

Back on my VM, I changed the Method field from Automatic (DHCP) to Manual. I added the IP Address, the CIDR prefix, and the default route (gateway) I want for this host. I also added three DNS servers and the search domain. These are the minimum configuration variables needed for a network connection. They are also the same ones defined in the interface configuration files and the previous keyfiles. The device name for this NIC is enp0s3. Here is the configuration for the wired connection using the GUI NetworkManager connection editor tool.

Image by:

(David Both, CC BY-SA 4.0)

Another option available for the Method field is Disabled. I set the IPV6 Method to Disabled since I don't use IPV6.

After setting these values, clicking the Save button creates the new keyfile immediately. Making changes to existing keyfiles is just as easy. However, NetworkManager must be restarted for these configuration changes to take effect.

In terms of the amount of time and work involved in creating new NetworkManager keyfiles, the GUI Connection Editor is far better than the other options. It provides an easy-to-use interface with enough information about the data required to be helpful.

Conclusions

Fedora 36 changes the equation for using the old-style, deprecated interface configuration files. For new installations of Fedora 36, those files will not work unless the NetworkManager-initscripts-ifcfg-rh package is explicitly installed. This is a warning sign that support for those deprecated ifcfg scripts may be removed entirely in the future.

Fortunately, the migration from any existing ifcfg scripts is trivially easy, and creating new ones is not much more difficult using one of the three tools available. I prefer the GUI NetworkManager connection editor tool because it is clear and easy. I can use the nmtui tool, which does the same thing as the GUI version but has a somewhat clunkier user interface. I try not to use the nmcli tool if I can help it. It does work but is cumbersome and takes a lot of reading and experimentation to get the correct command syntax and all of the right arguments to create a fully usable keyfile.

So go ahead and migrate now. I did, and it was easy.

Interface configuration files may not be supported in Fedora much longer, but migrating to NetworkManager is easier than you might think.

Image by:

Opensource.com

Linux Sysadmin What to read next Get started with NetworkManager on Linux A sysadmin's guide to network interface configuration files This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Mentoring as a power multiplier in open source

Mon, 08/22/2022 - 15:00
Mentoring as a power multiplier in open source Josh Solomon Mon, 08/22/2022 - 03:00 Register or Login to like Register or Login to like

Many developers struggle with work-life balance. They are overloaded with regular tasks and frequently called upon to solve urgent customer issues. Yet many developers, like me, have ideas they want to promote if only they had time to do so. Sometimes these pet ideas do not make it through the product management feature-selection process. Other times, a developer does not have time to complete the solution end-to-end on their own, yet they know if it could somehow be implemented, the project would benefit.

This is where mentoring comes in. Mentorship has many benefits: It helps the mentee's personal development, and it can improve the mentor's self-confidence and leadership skills. I discussed these benefits in an interview about skillful mentoring in the Red Hat Research Quarterly that includes many tips for a rewarding mentor relationship.

However, in the scenario I just described, mentorship has another very practical benefit: It helps with enrolling developers in the projects the mentor-developer wants to promote. In this article, I will explain how you can use mentoring as a power multiplier in the open source software (OSS) world and create a multi-win situation by mentoring people who further contribute to the open source community.

This article is based on my experience mentoring students as part of the collaboration between my organization and Reichman University, but it can apply to any mentorship situation.

Open source software is a gateway

OSS is a great entry ticket for many people into any of several exciting software community projects. It can help undergraduate and graduate students gain visible experience so they improve their first job search. Or, it can help more experienced programmers find interesting projects (plenty of those are available) or particularly challenging projects (plenty of those as well). In many of these cases, there can be a good match between people who are looking for programming opportunities and the projects looking for more developers.

The first step is leveraging the opportunity.

More open source career advice Open source cheat sheets Linux starter kit for developers 7 questions sysadmins should ask a potential employer before taking a job Resources for IT architects Cheat sheet: IT job interviews

Building a multi-win combination

Developers have projects they want to push; outside programmers want to contribute to open source projects. But how can they find each other?

There are multiple mechanisms that can be used to match projects with contributors. First, I want to share my mentorship story; then, I will discuss other options.

My story started when I was invited to offer research projects to Reichman University students. In collaboration with industry partners, the university was running a course and lab in which students were offered various projects sponsored by many companies. The students could then implement them with support and mentoring from those companies.

This was the first year the course was running, and our team had little time to prepare. To act quickly, I had to pull some of my backlog projects that I wanted to implement but didn't have the capacity to do on my own. One of the projects we offered was adding a compression layer to the communications path between components in the Ceph software-defined storage product to reduce the cost of deployments on public clouds. This was a complex project because it involved changes in the data path of a large and sophisticated storage system.

As it happened, this was a very good decision. For their presentations at the end of the course, most students were doing research-oriented, nonproduction projects, such as a proof of concept or a simple research project for choosing the best AI algorithm for a specific problem. Our team, on the other hand, offered a more challenging, full-blown feature that would reside in a huge production project. If we had had more time to plan, I am not sure this would have been our decision, but fortunately, necessity led to a great solution. Now we plan to continue mentoring with production projects on purpose.

It is crucial to match the mentee to the task. At Reichman University, there are many grad students returning to the university after spending some years as developers in the software industry. This meant that we could present relatively complex projects. Fortunately for all involved, Maya Gilad chose to implement the compression project and came with this background along with the desire to contribute to open source projects. She was a perfect fit and could start contributing with a relatively small ramp-up.

Following the required course deliverables, Maya—with my help as a business mentor and Or Friedman's help as a technical mentor—prepared a requirements document and design document and implemented the Ceph compressed communications feature. While working on this feature, we even discovered an additional use case that was not part of the original requirements. By the end of the semester, we had a pull request (PR) ready for Ceph.

The end of this story demonstrates several wins for all involved:

  • During student class presentations, Maya's project was the only one that was actually translated into code in a large production system.
  • From Maya's academic perspective, this was a great success, and she got an A+ grade for this course. Additionally, she was exposed to the Ceph community.
  • The Ceph project got a new feature that is important for deployments on public clouds.
  • Maya is still working and contributing to the open source community.

The PR's road, however, has only just begun. This is usually a stage where mentees need a lot of help, since the review process is likely new to them.

As I write this article, we are running the third year of this course. In this round, two students selected another project that I didn't have time to implement myself, this time in Kubernetes. I enrolled a technical mentor who will help me, and our excitement and expectations are very high.

More mentorship opportunities

I am lucky to be involved in an industry-academy collaboration project, but what can you do if you are not involved in such a project?

There are other alternatives for collaborating with students around the world on open source projects. The students are paid for their work, and this is a great opportunity to help students taking their first steps in the industry.

The two most popular platforms for making these connections are:

If you are interested in finding interns that will help you, these are very good places to start.

Tip: There is a lot of competition within these platforms, as with the Reichman University projects. Invest some time thinking about how to present your project so that suitable candidates will select it. This is a good marketing exercise, and if you do a good job, it will pay off well.

What are you waiting for?

If this blog piques your interest, here is a summary checklist for a successful mentorship project:

  • Select a well-framed project, and make sure you know and communicate the success criteria for such a project. Make sure it fits the work you plan for the mentees. Both you and the mentees want the project to succeed. Overly complex projects will increase the chances of failure.
    • Make sure the project is well defined.
    • Verify that it can be completed in the desired timeframe.
    • Define timely milestones, and make sure there is a minimal scope, possibly with some additional nice-to-have improvements.
    • If the project is complex, consider breaking it into multiple PRs.
       
  • Make sure you have all the mentor resources you need. You may need some help in mentoring. There is no need to limit yourself to things you can do on your own!
    • Don't forget that you need to merge the PRs—make sure you have enough community support for this step.
       
  • Think of the best way to sell the project to potential mentees—usually, there is strong competition for mentees.
     
  • Find a platform (or platforms) for matching projects and mentees/interns. Study its rules and comply with them.
     
  • Involve the upstream community early. Try to get their feedback on the design as soon as possible; this will pay off when the PR is submitted.

Remember: A successful project with a positive experience for the mentees can lead to additional engagements in OSS communities. A bad mentee experience may do the opposite, so make one of the primary goals of your project a positive mentorship experience that encourages mentees to stay involved.

Need more hands to get open source projects done? Consider the power of mentorship.

Image by:

Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0

Community management Careers What to read next Our journey to open source during Google Summer of Code 7 tips for virtual mentorship in open source This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

My first impression of GNOME Console on Linux

Mon, 08/22/2022 - 15:00
My first impression of GNOME Console on Linux Alan Formy-Duval Mon, 08/22/2022 - 03:00 Register or Login to like Register or Login to like

New on the GNOME desktop is its terminal emulator application, simply named Console. It seems aimed at providing a no-nonsense, stable command-line environment for Linux users.

Introducing GNOME Console for Linux

Image by:

​(Alan Formy-Duval, CC BY-SA 4.0)

The GNOME Console isn't as feature-rich as a lot of other terminals, including the long-standing GNOME Terminal, but I like it and have been using it regularly for the past few months. I enjoy its simplicity. I waste time in other terminal emulators configuring fonts, colors, and profiles.

Console does have some options and nice integration with the GNOME desktop. I'll start with the small menu accessed by clicking the hamburger menu at the top-right. It allows for configuring the color theme and font zooming.

Image by:

​(Alan Formy-Duval, CC BY-SA 4.0)

This menu also provides a view of available Keyboard Shortcuts and an option to launch a new window. Finally, you can access the typical About window as shown in the first screenshot above.

In addition to new windows, the small [+] button next to the hamburger opens a new tab within the currently active window.

On the top-left is a search button with the familiar magnifying glass icon. It allows for search and highlighting of text. There is also a small menu when you right-click within the Console window. It provides three more options: Paste, Select All, and the one I think is the neatest, Show in Files, which I discuss further in the next section.

More Linux resources Linux commands cheat sheet Advanced Linux commands cheat sheet Free online course: RHEL technical overview Linux networking cheat sheet SELinux cheat sheet Linux common commands cheat sheet What are Linux containers? Our latest Linux articles

GNOME Console context awareness

The GNOME Console has a few ways of providing context awareness. The first is the Show in Files option I mentioned above. This feature opens the GNOME graphical file manager at your terminal's present working directory. It reminds me of the opposite feature in the GNOME file manager, called Open in Terminal. Now, with the GNOME Console installed, a second option called Open in Console is available. I think this is a nice integration detail.

The second is that the toolbar indicates your present working directory under the title. You can see this in several of my screenshots. Note that the GNOME Terminal also has this feature.

Another way the GNOME Console follows the context is to change its toolbar color according to privilege level. Whenever the user has elevated root privileges, this bar turns red. This can quickly be demonstrated with the sudo command.

This screenshot shows a normal toolbar while logged in as my non-privileged self.

Image by:

(Alan Formy-Duval, CC BY-SA 4.0)

After I run sudo bash, the top bar turns red. This can help the user remember to be careful, given that using the root account comes with great responsibility.

Image by:

(Alan Formy-Duval, CC BY-SA 4.0)


Install the GNOME Console on Linux

The GNOME Console wasn't installed by default on my Fedora Linux desktop system, possibly because I had upgraded several times from older versions. It appears that the GNOME Console became the default terminal emulator in GNOME 42. If you don't see it, just install it manually, either with dnf or through the software center.

$ sudo dnf install gnome-console

Clean graphics


Image by:

(Alan Formy-Duval, CC BY-SA 4.0)


The GNOME Console is a nice, clean terminal emulator application. I have been using it for a while now and feel like it “just works”. It is possible that the GNOME Project plans to make it a full replacement for the GNOME Terminal in a future release. It's still too soon to say whether it will gain additional features, but it presents the possibility of a modern experience with good functionality and tight integration with the GNOME Desktop Environment. I suggest you give it a try!

You may have noticed in the screenshots above that after I ran sudo bash, the root username prompt changed color to red. This is not a feature of the GNOME Console; it is a change I previously made to the root user's environment for the same purpose as the red toolbar. There are many ways to do this, but in case you're interested in how I did it, the prompt color is controlled by the following line in the root user's .bashrc file:

PS1='\[\033[01;31m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
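A new root shell picks up the change automatically; to apply it to a shell that is already open, re-read the file:

source /root/.bashrc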

Console is GNOME desktop's new terminal emulator. Try it out for a fresh experience that has tight integration with the GNOME Desktop Environment.

Image by:

Gunnar Wortmann via Pixabay. Modified by Opensource.com. CC BY-SA 4.0.

Linux What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

What's your favorite screenshot tool on Linux?

Sat, 08/20/2022 - 15:00
What's your favorite screenshot tool on Linux? AmyJune Hineline Sat, 08/20/2022 - 03:00 Register or Login to like Register or Login to like

As the saying goes, a picture is worth a thousand words, and while that's not always the case with terminal commands and code, it still holds true for the graphical desktop. Screenshots capture precisely what's on your screen. I love taking them to keep a record of who attends meetings, so I don't have to write it down in the moment. Or to capture a bug when doing UI testing. We all take them for different reasons, though, and there are more ways to take a screenshot than you might at first think.

I started thinking about screenshots after Jim Hall wrote an article listing GNOME screenshots, GIMP, and Firefox as the ways he often takes screenshots. And yet that's just the beginning, as I quickly found out when I asked Opensource.com authors how they each take screenshots.

Making a spectacle

Image by:

(Seth Kenlon, CC BY-SA 4.0)

I use Spectacle. It works perfectly for my simple needs.

David Both

More Linux resources Linux commands cheat sheet Advanced Linux commands cheat sheet Free online course: RHEL technical overview Linux networking cheat sheet SELinux cheat sheet Linux common commands cheat sheet What are Linux containers? Our latest Linux articles

I use KDE. It ships with Spectacle, which seems to be responsible for taking a screenshot when I push the PrtScr (Print Screen) key.

A nice feature is that the default action is to take a screenshot immediately when you press PrtScr, but then it brings up the Spectacle interface so you can take more sophisticated screenshots (a rectangular area, the window under your cursor, and so on).

Greg Pittman

Framing the shot

For a long time I had wanted to capture only a small amount of the screen in a screenshot, not the whole thing, but struggled to know how.

Since then, I've installed KolourPaint. I open the full screenshot inside the program and cut out the part I want to keep. I hope this helps others suffering the same screenshot dilemma!

Laurence Urhegyi

I use Shift+PrtSc to capture a small amount of the screen in a screenshot.

Agil Antony

Emacs

A while back I created an Elisp function to take a screenshot from Emacs.

Sachin Patil

Flameshot

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Flameshot, the one and only! Nothing is missing in this wonderful tool: doodling, arrows, adding text, a pixelate tool for blurring out sensitive information, an autoincrementing counter bubble, save, copy, the ability to open the screenshot in a selected application, and the list goes on and on. Once I installed it, I've never looked for anything else!

A friendly hint: when installing from Flatpak, you might want to use Flatseal to grant access to your home folder, otherwise the Save dialog will feel somewhat empty.

Tomasz Waraksa

ImageMagick

#!/bin/bash
current=$(date +%H-%M-%S-%d-%m-%Y).png
if [[ -z "${1}" ]]; then
   import -window root "${HOME}/${current}" # All screen
else
   import "${HOME}/${current}" # Custom screenshot
fi

notify-send "Screenshot ${current} taken successfully!"

—Suporte Terminal Root
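If you save the script above as, say, screenshot.sh (a name chosen here for illustration) and make it executable, usage looks like this:

$ chmod +x screenshot.sh
$ ./screenshot.sh        # capture the whole screen
$ ./screenshot.sh area   # any argument switches to click-or-drag selection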

GNOME

Image by:

(Seth Kenlon, CC BY-SA 4.0)

As a mostly GNOME Desktop user, I was happy taking screenshots with the regular PrtSc, Ctrl+PrtSc, or Shift+PrtSc keys. My favorite is Shift because it allows me to select an area of the screen.

Recently, GNOME introduced an improved screenshot tool that takes over when you simply hit PrtSc. I haven't used it that much yet, so I'm looking forward to trying it out thoroughly for some future articles.

Alan Formy-Duval

As a satisfied GNOME user, I've been using the built-in screenshot tool. With the older version, I would screenshot a window with Shift+PrtSc. Now I just use PrtSc and select the region with the tool. I like the new one better, but if I had to go back to the old, that'd be OK too.

Chris Hermansen

XFCE Screenshooter

Image by:

(Seth Kenlon, CC BY-SA 4.0)

I've been using XFCE lately, and xfce4-screenshooter has been doing an excellent job. Like the rest of XFCE, it's minimal but highly functional, with options to capture the entire screen, the active window, or just a region. You can even activate or deactivate whether the mouse cursor is included in the shot.

Klaatu

Grim and Slurp

I have a fun little alias that I use for screenshots:

alias sshot='grim -g "$(slurp)" screenshot-$(date +%s).png 2> /dev/null'

It lets me draw a rectangle on my screen, and it captures just that area. The command uses grim and slurp, both available in the Fedora repos.

But this only works on Wayland. On X11, you can replace them with maim and scrot.

Mohammed Saud
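For X11, a roughly equivalent alias built on maim's region-selection flag might look like this (a sketch; adjust the file name pattern to taste):

alias sshot='maim -s "screenshot-$(date +%s).png"'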

Your screenshot tool

What's your screenshot tool of choice? Tell us in the comments!

There are many open source screenshot tools to choose from, but which one works for you?

Linux Opensource.com community What to read next Getting started with ImageMagick This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

My journey with Kubernetes

Fri, 08/19/2022 - 15:00
My journey with Kubernetes Mike Dame Fri, 08/19/2022 - 03:00 Register or Login to like Register or Login to like

Recently, I published my first book, The Kubernetes Operator Framework Book from Packt Publishing. Writing a book has always been a personal goal of mine, and so it seems fitting that I was able to check that off by writing about one of my favorite topics: Kubernetes.

My journey with Kubernetes began in 2016, as a software engineer for Red Hat OpenShift. There, I had the opportunity to work with (and learn from) some of the smartest folks in the open source community. I learned first-hand some of the best practices for Kubernetes development as they were applied to broad enterprise use cases. And as I watched the development of OpenShift 4 take shape, I got to witness the functionality of Kubernetes Operators cranked to the max as the platform was built almost entirely around the Operator pattern. There, Operators were not just minor automation or deployment controllers; they were literally powering an entire Kubernetes distribution. I just happened to be lucky enough to have front-row seats to a transformative display of Operators in action.

Unfortunately, I still meet people in the community who are confused about Operators, how they work, and the benefits they can bring to cloud developers and customers. It seems that Operators are a topic about which many are curious, but few have the resources to truly invest in exploring.

That's why I wanted to write this book: to provide a high-level introductory overview of Operators and the breadth of possibilities that their use offers, so that more people can learn and benefit from running them in their clusters. I felt that my experience gave me a novel perspective on Operator development and use cases such that I could explain them through a unique narrative.

That narrative builds a storyline for The Kubernetes Operator Framework Book that gives readers a holistic, big-picture guide through the development lifecycle of an Operator. The book begins by introducing the fundamental topics of Operators broken into three pillars: the Operator SDK, OLM, and OperatorHub. These pillars respectively represent the three main phases of an Operator's lifecycle: coding, deployment, and distribution.

More on Kubernetes What is Kubernetes? Free online course: Containers, Kubernetes and Red Hat OpenShift technical over… eBook: Storage Patterns for Kubernetes Test drive OpenShift hands-on An introduction to enterprise Kubernetes How to explain Kubernetes in plain terms eBook: Running Kubernetes on your Raspberry Pi homelab Kubernetes cheat sheet eBook: A guide to Kubernetes for SREs and sysadmins Latest Kubernetes articles

Following the introduction, the book goes on to explore some of the technical capabilities of Operators and identifies a sample use case for a basic Operator, which serves as the single example threaded throughout the rest of the book. That example strings together the different pillars of the Operator Framework into a unified tutorial for developing, running, and publishing an Operator (written in Go). Along the way, this includes topics like designing CRDs, using the Operator SDK tools, and implementing additional functionality like metrics reporting with Prometheus to add observability insights to your Operator. Finally, Operator developers' roles and responsibilities for ongoing maintenance are explored, such as when and how to release new versions and keep your dependencies in sync with the broader Kubernetes ecosystem of projects. All of these topics are then summarized with a few case studies of third-party Operators, which are clinically dissected to demonstrate the concepts learned through the book's tutorial in a real-world application.

The goal of the book is not to provide all the answers for building an Operator, but instead to provoke ideas about how Operators can best serve you and your users. By framing common software development concepts (such as understanding the specific needs of your users and tackling challenges such as deprecation) through the lens of Operator development, The Kubernetes Operator Framework Book reads differently than many textbooks which focus on deep technical details and advanced topics. It is a conversational introduction for the reader who is familiar with Kubernetes, has heard of Operators, and is curious to learn what kind of impact Operator development can have for their organization.

Researching and writing this book was an incredibly rewarding experience that would not have been possible without the countless mentors in the Kubernetes community who took the time to teach me about this wonderful technology. The Kubernetes Operator Framework Book is my attempt at paying that forward, and hopefully passing on some of what I have learned to all of the other eager learners who make this community so great. I hope you enjoy reading it as much as I enjoyed writing it.

I wrote The Kubernetes Operator Framework Book to pass on some of what I have learned to all of the other eager learners who make this open source community so great.

Kubernetes What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

5 note-taking apps for Linux

Fri, 08/19/2022 - 15:00
5 note-taking apps for Linux Don Watkins Fri, 08/19/2022 - 03:00 Register or Login to like Register or Login to like

Notes are part of any writer's life. Most of my articles begin in a note-taking application, and for me that's usually Joplin. There are a large number of note-taking apps for Linux, and you may use something other than my favorite. A recent blog article reminded me of a half dozen of them, so I assembled a list of my favorites.

Joplin

Image by:

(Opensource.com, CC BY-SA 4.0)

Joplin is available on Linux, Windows, macOS, Android, and iOS. I like Joplin because it automatically saves whatever you add to it. Notes can be uploaded to Nextcloud, ownCloud, Joplin Cloud, even closed source services like OneDrive and Dropbox, or any WebDAV application. Joplin supports encryption.

It’s easy to export notes in a variety of formats, too. It comes with eight different themes that allow you to tailor its look.

Joplin has an MIT license. Initially released in 2017, Joplin is under continuous development by a large community of contributors.

More Linux resources Linux commands cheat sheet Advanced Linux commands cheat sheet Free online course: RHEL technical overview Linux networking cheat sheet SELinux cheat sheet Linux common commands cheat sheet What are Linux containers? Our latest Linux articles

Xournal

Image by:

(Opensource.com, CC BY-SA 4.0)

Xournal is available on Linux, Windows, macOS, and Android. Its aim is to let you create notes containing nearly any media type you can imagine. It supports pressure-sensitive styluses and drawing tablets, so you can create sketchnotes. You can type into it, draw simple vectors, import graphics, record audio, and more. You can also use Xournal to annotate PDFs, which is how I have used it. It is released under a GPLv2 license, and you can export notes in a variety of formats.

Trilium

Image by:

(Opensource.com, CC BY-SA 4.0)

Trilium is a hierarchical note-taking application with a focus on building knowledge bases. It features rich WYSIWYG editing with tables, images, and markdown, and it supports editing notes as source code with syntax highlighting. It's released under the GNU Affero General Public License.

Trilium is available as a desktop application for Linux and Windows, as well as a web application that you can host on your own Linux server.

Gnote

Image by:

(Opensource.com, CC BY-SA 4.0)

Gnote is an open source note taking application written for Linux. It was cloned by Hubert Figuière from a project called Tomboy. Like Tomboy, Gnote uses a wiki-like linking system to allow you to link notes together.

Gnote's source code is available on GitLab. The software is licensed under the GPLv3.

CherryTree

Image by:

(Opensource.com, CC BY-SA 4.0)

CherryTree supports hierarchical note-taking. In CherryTree, everything is a node. Nodes can contain plain text, rich text, or syntax-highlighted code for a variety of programming languages. Each node can have child nodes, each with a different format.

CherryTree features rich text and syntax highlighting and can store data in a single XML or SQLite file. CherryTree can import from a variety of formats, including Markdown, HTML, plain text, Gnote, Tomboy, and others. It can export files to PDF, HTML, plain text, and its own CherryTree format.

CherryTree is licensed under the GPLv3, and can be installed on Linux, Windows, and macOS.

Use these open source tools for jotting down notes.

Image by:

Startup Stock Photos. Creative Commons CC0 license.

Linux What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Open source runs on non-code contributions

Thu, 08/18/2022 - 15:00
Open source runs on non-code contributions John E. Picozzi Thu, 08/18/2022 - 03:00 1 reader likes this 1 reader likes this

At this year's DrupalCon North America, EPAM Solution Architect John Picozzi presented a talk about the importance of non-code contribution. He talked about how everyone can get involved and why he believes this is an important topic. This article is a text adaptation of John's talk; find a link below to a video recording of the complete presentation at DrupalCon.

What is non-code contribution? I asked Google this question and got the following answer: "Any contribution that helps an open source project that does not involve writing code." Thanks, Google, but I already figured that out. If you asked me to dig deeper, I'd say it's about providing your time, skills, and resources to benefit a project.

Who is an open source contributor?

Early on, "contribution" implied writing code. Originally, Drupal's model was "Built by developers, for developers." Over the years, however, the Drupal community has shifted away from that mindset. Our community has learned to value non-code contributions just as much as code: Any contribution is contribution.

Open source is built in meetups, camps, and cons; it's built-in and by the community. In fact, most of the contributions at those events have very little to do with coding. To have those events, you need attendees, speakers, trainers, and organizers. Don't get me wrong: Of course, open source communities still need people who write code, but that's not the only thing they need. If you participate in the community and share ideas, ask questions, or provide help—congratulations, you're already contributing!

Is contributor a self-designation ("I'm a contributor") or a community designation ("We say you're a contributor")? It's safe to say that everyone is a contributor: conference attendees, designers who create UI and module logos, marketing folks who help market modules or events, and many more. Don't wait for someone else to give you that designation. You can get involved and feel confident telling others you're a contributor.

There are many ways to motivate someone (or yourself) to contribute. Money is not always the top motivator. However, sometimes contribution can be paid work. Many people contribute simply because they want to give back to the community.

Everyone would probably give a different answer from their peers when asked why they contribute, but here are some of the most common responses:

  • It makes you feel good
  • Building and improving skills
  • Career development
  • Making connections/networking

The list is endless and as varied as the contributors themselves. Each contributor has their own reasons, and there are no right or wrong answers.

Image by:

(John Picozzi, CC BY-SA 4.0)

More great content Free online course: RHEL technical overview Learn advanced Linux commands Download cheat sheets Find an open source alternative Explore open source resources

Why non-code contribution is important to open source

Non-code contribution is as valuable to the health of a project as writing code. It helps to get more people with a wide variety of skills involved in the community. Everyone has something to offer and a unique skill set to share.

There are non-code requirements for all projects, and not everyone is a developer or coder. Moreover, different points of view need to be represented. For example, a marketing person will likely have different experiences and perspectives than a developer. Every effort moves open source forward in some way—that's why non-code contribution is essential.

Common challenges

This definition of contribution may make it sound very simple: Just share your knowledge, express your thoughts, and help the community. However, contributors face several challenges. One of the most common is imposter syndrome. Less experienced contributors may worry that their contribution isn't valuable or helpful. You can combat that feeling by focusing on your specific skills and passions. For example, if you have event organizing experience, you can lean into that and focus on organizing and helping with those activities.

To combat these negative thoughts, make contributing a positive experience. Work/life/contribution balance is important. Contribution should be enjoyable, not just another job. If you can, implement contribution into your work. Many employers encourage and benefit from your contribution, and it's possible to build a career based on contribution.

Don't burn out and contribute nonstop during nights and weekends. Just add 30 minutes to the start or end of your day, or incorporate contribution into your regular workday if possible.

How to make your first non-code contribution

At this point in the article, I hope you're thinking, "OK, I'm ready. What do I do?" How do you get involved? Just do it! You only need to get started: For example, to start contributing in the Drupal community, ask in the issue queue or Drupal chat, or reach out to camp organizers for recommendations. A whole community is waiting to support you.

Image by:

(John Picozzi, CC BY-SA 4.0)

Remember to follow your skills and interests. You have them, so use them to inspire your contributions. Your interests may differ from your skills: You could decide to contribute to something you have little experience with but always wanted to know more about. Simply talk to people, share knowledge, ask questions, go to a camp or a meetup, and contribute.

I want to close with a quote by Margaret Mead (an American anthropologist) that perfectly describes open source contribution to me: "Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has." Dr. Mead doesn't say "a small group of code writers or developers." She says a thoughtful, committed group of citizens—citizens with great passion and many different skills. That's what powers open source, and that's what powers Drupal.

Watch the talk below or on YouTube.

Sometimes the hardest part of becoming an open source contributor is realizing how much you have to offer.

Image by:

Opensource.com

Community management What to read next 8 non-code ways to contribute to open source Why every job in the tech industry is technical This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Measure latency for embedded systems with this open source tool

Wed, 08/17/2022 - 15:00
Measure latency for embedded systems with this open source tool Nicolas Rabault Wed, 08/17/2022 - 03:00

When it comes to time synchronization for embedded systems in a distributed architecture, there are "soft" use cases (typical, everyday devices) and complex, or "hard," use cases (car brake systems, aerospace). It's easy to see that a hard use case has unique requirements and a low tolerance for latency. To put it another way, hard use cases have hard, real-time constraints.

Latency is the bitter enemy of real-time computation. In the context of a critical real-time application, latency is the time between data generation and data reception, and any interconnected network of several systems inevitably has some.

[ Related read Edge computing: Latency matters ]

What's the difference between soft and hard real-time use cases?

A soft use case is easy to manage. Here are some examples of soft use cases:

  • A washing machine with a detergent dispenser that needs to be refilled after every wash.
  • A printer that needs to be refilled with paper.
  • A car that needs to be refilled with gasoline or recharged with electricity.

Hard cases, however, are complicated to synchronize and complex to connect. The following are examples of hard use cases:

  • Car braking systems where no other system (such as steering) can interfere or cause latency.
  • Nuclear power plants where a sensor must send back a status report in real-time to enable decision-making without disruption from another component of the system.
  • A rocket, which must be able to correct its trajectory in real-time to avoid being compromised by external elements such as weather conditions.
How the embedded world deals with real-time

Consistent latency is never guaranteed when working with distributed (multi-MCU) critical and real-time environments. If a system experiences many collisions, a message can be delayed multiple times, increasing latency dramatically and unpredictably. In such a situation, younger (newer) data could arrive before older data, compromising the system's integrity.

[Image: Nicolas Rabault, CC BY-SA 4.0]

Typically, there is no real-time clock (RTC) in the embedded world because that requires a power source, which isn't always possible, and even if there were one, the system would lose time measurement in the event of a power disruption. The same problem is true when you try to update time from the Internet.

In embedded systems, nothing handles time tracking when a system is off, and nothing synchronizes time when the system powers on. In most cases, the timeline starts when the system starts.

Luos allows timeline management

While it is physically impossible to remove latency completely, monitoring it is important to guarantee data validity. By measuring latencies in real time, the open source Luos engine, released under the Apache 2.0 license, uses the latency value to determine the real, local date of an event. There's no global timeline, just a delta between data generation and consumption, and Luos precisely measures that delta on any network.

Luos is an open source software and a modular methodology to simplify the creation and sharing of embedded features. Luos encapsulates hardware and software functions as microservices so that each electronic device has a set of functions that communicate and recognize each other but remain independent.

Without Luos, the developer must synchronize timelines manually. It's up to the developer to control latency and to update the dates on each system so that each one shares the same time reference. That's hard to do and uses a lot of resources.

With distributed latency-based time synchronization, a developer no longer needs to work with a global timeline, which is often difficult, given that each node has a different timeline. Luos consolidates these timelines to avoid having one "correct" point in time and instead allows each system to have control. This design is completely multi-master. The reference timeline is the local timeline of the node the developer is looking through. Luos is able to remap an event date across the node's local timeline by measuring latency.

Real-time developers are used to working with a global timeline and might question whether the method used by Luos is accurate, given such critical use cases. The answer is yes because the nodes communicate and all have the same level of information without having a centralized master. It's as if synchronization were happening each time data is modified in the global system.

[ Get the eBook, Open source data pipelines for intelligent applications ]

How Luos works

Technically, Luos computes latency as a sum of delays across nodes: the source latency, the network latency, and the target latency. Luos sums these delays and uses a timestamp to remap the event onto the local timeline.

It is possible to measure an event in the past or the future. Thus, Luos can use the measurement for data collection and for precisely programmed commands.
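
To make the idea concrete, here is a simplified numeric illustration (the numbers are invented for this example, and this is not actual Luos code). Suppose an event occurs on node A, and the measured delays on the way to node B are 2 ms at the source, 5 ms on the network, and 1 ms at the target:

sum of delays = 2 ms + 5 ms + 1 ms = 8 ms
node B receives the data when its local clock reads 1450 ms
event date on node B's timeline = 1450 ms - 8 ms = 1442 ms

Node B can now date the event on its own timeline without any global clock, which is exactly the remapping described above.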

Do you want to get started with Luos and embedded systems? Go deeper with Luos on Luos.io.

By evaluating latency, Luos manages timelines without synchronization on multiple nodes.

[Image: Opensource.com]

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

4 common issues with implementing Agile and how to address them

Wed, 08/17/2022 - 15:00
4 common issues with implementing Agile and how to address them Kelsea Zhang Wed, 08/17/2022 - 03:00

While working on the open source ZenTao project, I get constant feedback that getting Agile up and running is a big task in many organizations. As with any new process, you will run into issues, and many of them will feel unique to your organization. While context is important, there's a certain amount of abstraction possible after you've coached enough teams. This article covers the four most common issues I've encountered. While your Agile coach should analyze any actual problems in the context of your organization, knowing these general issues can help better prepare you and your teams for the transitional process.

Note that I only discuss issues that have been found and not how to find issues, which is another topic entirely!

Lack of Agile awareness

I consider this the most significant issue. You can detect this issue in conversations between business departments, managers, and team members. They emphasize delivery as a single event that happens at a specific time. They talk about making "more plans," and you hear phrases like "deliver more work results" and "it's Agile, so why don't you work overtime?"

There's no single solution for this. You can only remedy these misunderstandings with results. Don't get bogged down in trying to correct perception; instead, focus on luring people into an Agile way of thinking with the benefit of Agile productivity.

Similarly, to lower any perceived barriers, you can reduce the use of specialized Agile terminology as much as possible when communicating with people who don't understand Agile yet.

Lack of support from business departments

This issue can determine whether Agile implementation can succeed. Business departments may fail to attend meetings, fail to clarify stories, and provide no feedback. At the same time, however, they may ask R&D teams to deliver work results according to "quality and quantity."

There are a few possible reasons for this issue:

  • The business department is aggressive. Once they're unsatisfied with the R&D department, they complain arbitrarily.
  • The business department focuses on its own work, and working with the R&D department falls outside its responsibilities and performance assessment.
  • The business department is disappointed with the R&D department and believes support to be pointless.
  • The business department has no time to provide support.

Here are some suggestions for addressing the problem:

  • Do your best to choose a business team with a high support level.
  • Be friendly! You can get a lot of recognition and support by increasing friendly and respectful communication.
  • Bind the interests of the business team together with the R&D department.
  • Rebuild trust with the business department through transparency.
  • Business departments understand contracts. Negotiate with your business department to identify what's expected of them and what's required from them in terms of communication and support.
Lack of team participation

This problem is usually the easiest to detect. Team participation is key; you can generally identify right away when you don't have it. You see it when managers fail to lead a team, and team members don't feel empowered or inspired to improve the team's processes.

There are a few possible reasons for the lack of team participation:

  • The company's performance assessment restricts teams from self-organizing. For example, an evaluation might focus on personal performance and actual lines of code.
  • Team efforts include complicated processes with a lot of duplicate work. For example, members spend time repeatedly writing working logs and daily reports.
  • Low tolerance for mistakes. When the cost of innovation is a high risk to an individual's job, it doesn't happen.
  • Frequent changes in team members.
  • The team's manager may lack management skills.

Ideally, changes would be made to the organizational policy to help team members engage and participate. In the absence of changes in regulations, conduct interviews with team members to address the problem directly.

Many people believe that team participation can be gained by building trust. That's true, but trust without organizational policy is meaningless because only the larger organization can ensure trust between team members. In other words, systems and regulations are crucial pillars of trust.

Poor-quality user stories

This is the biggest problem in development. Poor-quality user stories are manifested by development errors, lots of rework, redundant confirmation, duplicate modifications, and other wastes of development resources. Worse, they're one of the greatest causes of overtime.

Possible reasons for poor-quality stories:

  • The client didn't express their requirements clearly or propose solutions directly.
  • The project wasn't clearly defined, leading to capacity and sometimes attitude issues.
  • The delivered product doesn't solve the client's problems (and even results in complaints in review meetings).
  • The stories are unstable and change frequently.

Here's how I address issues of poor-quality stories:

  • Use visual aids, such as prototype diagrams, sketchnotes, storyboards, and so on.
  • Reinforce lessons from business analysis for the product team. Focus on story confirmation procedures and review to ensure every story is correct.
  • Establish writing standards for stories and requirements. (Don't assume that Agile doesn't require standards!)
  • Be brave enough to say no to unreasonable stories.
Wrap up

Establishing an Agile way of thinking in an existing company is a big task with plenty of potential pitfalls. However, some problems are more prevalent than others and tend to span organizations. I've identified the four most common issues I've encountered. Whether it's lack of awareness, support, participation, or poor user stories, there are certain strategies that make handling these problems more manageable. How can you implement these approaches to help smooth the way for great Agile success?


[Image: opensource.com]

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

My practical advice for new programmers

Tue, 08/16/2022 - 15:00
My practical advice for new programmers Sachin Samal Tue, 08/16/2022 - 03:00

Have you ever been stuck or gone blank trying to solve a problem related to something that you just learned from YouTube or Google tutorials? You seem to understand every line of the code, but without the tutorial, you find yourself in a difficult position. If you have looked at problem-solving in HackerRank or LeetCode, you can relate to how an aspiring programmer feels seeing those challenges for the first time. Everything seems out of the box! Being unable to apply what you learned from a tutorial might make you doubt your knowledge and abilities as you begin to understand the basics of the programming language you're learning.

Putting programming tutorials into practice

Should you start back at the beginning? If you do that, you may soon find that you've covered those topics more than enough times. Starting from scratch is not necessarily a waste, but how can you be more efficient?

Memorization is simply not the solution in programming. Having said that, you cannot neglect the importance of getting used to syntaxes. There is a significant difference between memorizing and making a habit. The latter is difficult to break. Make a habit of playing around with the programming language's regular syntaxes, functions, methods, patterns, paradigms, and constructs to ace it. Acing a programming language involves a lot of creativity and practice. It is essential to practice syntaxes until they flow as smoothly in your brain as the blood runs through your veins.

How problem-solving works

How you approach solutions depends on many factors. These factors could be anything from technical constraints to user needs. The world has innumerable problems and there are many ways of solving each. Deciding on the best way involves extensive problem-solving skills.

Here is a simple example. You need to achieve a result of 6 by adding two numbers. You can accomplish this several ways:

3+3=6 or 4+2=6 or 5+1=6

Similarly, say you need to achieve a result of 6 by using two numbers and either subtraction, division, or multiplication. You have many options, including:

8-2=6 or 12/2=6 or 3*2=6

Each solution may have a different constraint. You must consider all of these when developing effective real-world solutions. Is the solution feasible? Accessible? Interoperable? Scalable? Minimizing the constraint and developing an optimal solution depends on the business need and type of problem.

Practice matters

The goal of programming is much more than problem-solving. Understanding how the code functions from an engineering perspective is always an advantage. That's where code reviews come into play at an enterprise level. The bare minimum requirement in programming is to have basic coding knowledge, including the language's syntaxes, functions, and methods. At the end of the day, coding is what you do, so practicing always helps improve your skills. Fluency in writing and developing complex solutions comes with consistent practice and learning.

Learning to code

My goal for writing and sharing this article is to encourage new programmers to seek the great problem solver in themselves. Please don't stop believing in yourself.

There are many habits to nurture for successful coding. Here are my ways of staying effective while learning to code:

  1. A cheat sheet of syntaxes, methods, and functions can come in handy.
  2. Break problems into smaller parts to make them easier to follow.
  3. Try to understand the core concept of how code functions.
  4. Try to improvise with your solutions but always stick to the basics in the beginning.
  5. Create as many applications and components as possible while practicing.
  6. Never copy/paste code from open platforms like Stack Overflow/Exchange, especially without understanding the context.
  7. After following the tutorial, try to build everything from scratch. If you manage to accomplish half of it by yourself, that's still an achievement.

Good luck to all of us.

Being an efficient and curious problem-solver will help you succeed as a programmer.

[Image: Opensource.com]

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

A look inside an EPUB file

Tue, 08/16/2022 - 15:00
A look inside an EPUB file Jim Hall Tue, 08/16/2022 - 03:00

eBooks provide a great way to read books, magazines, and other content on the go. Readers can enjoy eBooks to pass the time during long flights and train rides. The most popular eBook file format is the EPUB file, short for "electronic publication." EPUB files are supported across a variety of eReaders and are effectively the standard for eBook publication today.

The EPUB file format is an open standard based on XHTML for content and XML for metadata, contained in a zip file archive. And because everything is based on open standards, we can use common tools to create or examine EPUB files. Let's explore an EPUB file to learn more about it. A guide to tips and tricks for C programming, published earlier this year on Opensource.com, is available in PDF or EPUB format.

Because EPUB files are XHTML content and XML metadata in a zip file, you can start with the unzip command to examine the EPUB from the command line:

$ unzip -l osdc_Jim-Hall_C-Programming-Tips.epub
Archive: osdc_Jim-Hall_C-Programming-Tips.epub
Length Date Time Name
--------- ---------- ----- ----
20 06-23-2022 00:20 mimetype
8259 06-23-2022 00:20 OEBPS/styles/stylesheet.css
1659 06-23-2022 00:20 OEBPS/toc.xhtml
4460 06-23-2022 00:20 OEBPS/content.opf
44157 06-23-2022 00:20 OEBPS/sections/section0018.xhtml
1242 06-23-2022 00:20 OEBPS/sections/section0002.xhtml
22429 06-23-2022 00:20 OEBPS/sections/section0008.xhtml
[...]
9628 06-23-2022 00:20 OEBPS/sections/section0016.xhtml
748 06-23-2022 00:20 OEBPS/sections/section0001.xhtml
3370 06-23-2022 00:20 OEBPS/toc.ncx
8308 06-23-2022 00:21 OEBPS/images/image0011.png
6598 06-23-2022 00:21 OEBPS/images/image0009.png
[...]
14492 06-23-2022 00:21 OEBPS/images/image0005.png
239 06-23-2022 00:20 META-INF/container.xml
--------- -------
959201 41 files

This EPUB contains a lot of files, but much of this is content. To understand how an EPUB file is put together, follow the process flow of an eBook reader (a command-line shortcut for inspecting these files individually follows the list):

  1. eBook readers need to verify that the EPUB file is really an EPUB file. They verify the file by examining the mimetype file at the root of the EPUB archive. This file contains just one line that describes the MIME type of the EPUB file:

    application/epub+zip
  2. To locate the content, eBook readers start with the META-INF/container.xml file. This is a brief XML document that indicates where to find the content. For this EPUB file, the container.xml file looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
      <rootfiles>
        <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
      </rootfiles>
    </container>

    To make the container.xml file easier to read, I split the single line into multiple lines and added some spacing to indent each line. XML files don't really care about extra white space like new lines and spaces, so this extra spacing doesn't affect the XML file.

  3. The container.xml file says the root of the EPUB starts with the content.opf file in the OEBPS directory. The OPF extension is because EPUB is based on the Open Packaging Format, but the content.opf file is really just another XML file.

  4. The content.opf file contains a complete manifest of the EPUB contents, plus an ordered table of contents, with references to find each chapter or section. The content.opf file for this EPUB is quite long, so I'll show just a bit of it here as an example.

    The XML data is contained within a <package> block, which itself has a <metadata> block, the <manifest> data, and a <spine> block that contains the eBook's table of contents:

    <?xml version="1.0" encoding="UTF-8"?>
    <package unique-identifier="unique-identifier" version="3.0" xmlns="http://www.idpf.org/2007/opf" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:opf="http://www.idpf.org/2007/opf">
      <metadata>
        <dc:identifier id="unique-identifier">osdc002</dc:identifier>
        <dc:title>Tips and Tricks for C Programming</dc:title>
        <dc:creator>Jim Hall</dc:creator>
        <dc:language>English</dc:language>
        <meta property="dcterms:modified">2022-06-23T12:09:13Z</meta>
        <meta content="LibreOffice/7.3.0.3$Linux_X86_64 LibreOffice_project/0f246aa12d0eee4a0f7adcefbf7c878fc2238db3 (libepubgen/0.1.1)" name="generator"/>
      </metadata>
      <manifest>
        ...
        <item href="sections/section0001.xhtml" id="section0001" media-type="application/xhtml+xml"/>
        <item href="images/image0003.png" id="image0003" media-type="image/png"/>
        <item href="styles/stylesheet.css" id="stylesheet.css" media-type="text/css"/>
        <item href="toc.ncx" id="toc.ncx" media-type="application/x-dtbncx+xml"/>
        ...
      </manifest>
      <spine toc="toc.ncx">
        <itemref idref="section0001"/>
        <itemref idref="section0002"/>
        <itemref idref="section0003"/>
        ...
      </spine>
    </package>

    You can match up the <spine> data to the <manifest> data to see where to find each section. That's how EPUB readers do it. For example, the first item in the table of contents references section0001, which is defined in the manifest as located in the sections/section0001.xhtml file. The file doesn't need to be named the same as the idref entry, but that's how LibreOffice Writer's automated process created the file. (You can see in the metadata that this EPUB was created with LibreOffice version 7.3.0.3 on Linux, which can export content as EPUB files.)
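
If you want to peek at any one of these files without unpacking the whole archive, the -p option of unzip prints a single member to standard output. For example, using the same EPUB file as above:

$ unzip -p osdc_Jim-Hall_C-Programming-Tips.epub mimetype
application/epub+zip

The same trick works for META-INF/container.xml, OEBPS/content.opf, or any of the XHTML sections.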

The EPUB format

EPUB files are a great way to publish content using an open format. The EPUB file format is XML metadata with XHTML content, inside a zip container. While most technical writers use tools to create EPUB files, because EPUB is based on open standards, you can create your own EPUB files in some other way.
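
For example, one minimal approach is to assemble the archive yourself with the standard zip command. The file names here are assumptions; the important detail is that the EPUB container spec expects mimetype to be the first entry in the archive, stored without compression, which is what the -X and -0 options arrange:

$ zip -X0 my-book.epub mimetype
$ zip -rX9 my-book.epub META-INF OEBPS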


[Image: Lewis Cowles, CC BY-SA 4.0]

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Try Asciidoc instead of Markdown

Mon, 08/15/2022 - 15:00
Try Asciidoc instead of Markdown Seth Kenlon Mon, 08/15/2022 - 03:00

I'm a happy user of the XML-based Docbook markup language. To me, it's a precise, explicit, and detailed system that allows me to have contextual and domain-specific metadata in what I write. Best of all, though, it can be transformed (that's what XML users call it when XML is converted into another format) into nearly any format, including HTML, EPUB, FO for PDF, plain text, and more. With great power comes a lot of typing, though, and sometimes Docbook feels like it's surplus to requirements. Luckily, there's Asciidoc, a system of writing plain text with the same markup-less feel of Markdown, but that transforms to Docbook to take advantage of its precision and flexibility.

Asciidoc rules

Like Markdown, one of the goals of Asciidoc is that you don't really have to learn it. Instead, it aims to be intuitive and natural. You may well have written snippets of valid Asciidoc without realizing it if you've ever added a little style to a plain text document for readability. For instance, if you habitually separate paragraphs with a blank line, then you've written the equivalent of the HTML <p> tag or the Docbook <para> tag. It seems obvious, and yet in academia separating paragraphs with blank lines isn't generally done, so even this simple convention is technically markup.

Here's the most common syntax.

Text styles

Text styles include the basics such as bold, italics, and code font. Most of the notation is relatively intuitive, with the possible exception of italics.

*Bold*

_Italics_

*_Bold and italic_*

`Monospace or code`

Code

Code is marked with backticks or by explicit declaration of a code block.

`Monospace or code`

[source,python]
----
print('a whole code block')
----

Headlines

Headings are marked with leading equal signs (=):

= Heading 1 (<h1>)

== Heading 2 (<h2>)

=== Heading 3 (<h3>)

==== Heading 4 (<h4>)

===== Heading 5 (<h5>)

====== Heading 6 (<h6>)

Links

Hyperlinks favor the link first, followed by the word or phrase used to "disguise" the link as text.

This is a http://example.com[hyperlink] that leads to the example.com site.

I don't find this as elegant as Markdown's link notation, but then it's a lot more flexible. For instance, you can add attributes in Asciidoc links:

This is a https://example.com[link,role=external,window=_blank] with the target="_blank" attribute set.

Lots more

Asciidoc also features internal links so you can link from one section to another, a standard for document headers, automatic table of content generation, the ability to include other documents within another, and much much more.
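
For instance, here's a short sketch of some of those features (the file and section names are invented): the :toc: header attribute turns on table of contents generation, include:: pulls in another document, and <<...>> creates an internal cross-reference:

= My Manual
:toc:

include::chapter-one.adoc[]

See <<conclusion>> for a summary.

[#conclusion]
== Conclusion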

But best of all, Asciidoc is actually standardized. Not everyone knows it, but the term "Markdown" doesn't refer to one markup-light language. Different organizations and groups regularly customize and alter Markdown for their own use, so when you use Markdown you really ought to verify which markdown you're meant to use. Many of the conventions you might have learned from one website using Markdown don't carry over to another site using Markdown. There's essentially no standard for Markdown, and that's resulted in such confusion that the Commonmark.org project has been formed in an attempt to assemble a standardized definition.

Asciidoc was designed from the start with a standard definition, so the tool or website that claims to parse Asciidoc actually does parse all valid Asciidoc, because there's only one valid Asciidoc.

Asciidoc to anything

The point of writing in a markup-light language like Asciidoc is to ensure predictability and consistency when text is parsed. You want to be able to write a script, or run an application someone else has written, that transforms your plain text into whatever format works best for the reader. Sometimes that's HTML (incidentally Markdown's native output format, and its fallback language when something is missing from its own syntax). Other times it's an EPUB, or a PDF for printing, Docbook, a LibreOffice document, or any number of possible output formats.

There are several tools to help you transform Asciidoc to another format. A popular command is Asciidoctor, which you can install using your package manager. For instance, on Fedora, CentOS, or RHEL:

$ sudo dnf install asciidoctor

On Debian-based systems:

$ sudo apt install asciidoctor

Alternately, you can install it on any OS with Ruby:

$ gem install asciidoctor

Here's a simple example of an Asciidoc document, which you can create using any text editor or even a word processor (like LibreOffice) as long as you save the file as plain text. Most applications expect a plain text document to use the extension .txt, and while it's conventional to use the extension .adoc for Asciidoc, it's not necessary. Asciidoctor doesn't require any special extension.

= This is my example document

It's not written in _Markdown_, nor _reStructured Text_.
This is *Asciidoc*.

It can be transformed into nearly any format using the tool `Asciidoctor` and other similar parsers.
Try it for yourself!

To transform an Asciidoc document to HTML, run asciidoctor:

$ asciidoctor example.adoc

The file example.adoc is transformed into HTML5 by default, but you can use different backends to gain access to more formats.

From Asciidoc to XML

My favourite is the Docbook backend, because it transforms my Asciidoc to Docbook XML, allowing me to use my existing Docbook toolchain (custom Makefiles, Apache FOP, xsltproc, xmlto, and so on) to complete my work:

$ asciidoctor --backend docbook5 example.adoc

This outputs Docbook XML. The final two built-in backends are xhtml5 and manpage.
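
The other backends work the same way. For example, to generate XHTML5 from the same document:

$ asciidoctor --backend xhtml5 example.adoc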

From Asciidoc to EPUB

If you want to turn your writing into an ebook, you can install the EPUB3 backend:

$ gem install asciidoctor-epub3

Transform your Asciidoc into EPUB:

$ asciidoctor-epub3 example.adoc

From Asciidoc to PDF

You can transform Asciidoc directly to PDF, too:

$ gem install asciidoctor-pdf
$ asciidoctor-pdf example.adoc

[Image: Seth Kenlon, CC BY-SA 4.0]

Who should use Asciidoc

Asciidoc is excellent for technical writers and writers who have precise requirements for how they want text to be organized and parsed. It's a clear and strictly defined markup format that eliminates the confusion of competing Markdown formats, and it transforms to all the major formats. Asciidoc is admittedly more verbose and possibly less intuitive than Markdown, but it's still just plain text so you can author on anything, and Asciidoctor makes processing easy. Next time you write a document for any purpose, consider trying Asciidoc.

Next time you write, use Asciidoc and Asciidoctor as alternatives to Markdown.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How ODT files are structured

Mon, 08/15/2022 - 15:00
How ODT files are structured Jim Hall Mon, 08/15/2022 - 03:00

Word processing files used to be closed, proprietary formats. In some older word processors, the document file was essentially a memory dump from the word processor. While this made for faster loading of the document into the word processor, it also made the document file format an opaque mess.

Around 2005, the Organization for the Advancement of Structured Information Standards (OASIS) group defined an open format for office documents of all types, the Open Document Format for Office Applications (ODF). You may also see ODF referred to as simply "OpenDocument Format" because it is an open standard based on OpenOffice.org's XML file specification. ODF includes several file types, including ODT for OpenDocument Text documents. There's a lot to explore in an ODT file, and it starts with a zip file.

Zip structure

Like all ODF files, ODT is actually an XML document and other files wrapped in a zip file container. Using zip means files take less room on disk, but it also means you can use standard zip tools to examine an ODF file.

I have an article about IT leadership called "Nibbled to death by ducks" that I saved as an ODT file. Since this is an ODF file, which is a zip file container, you can use unzip from the command line to examine it:

$ unzip -l 'Nibbled to death by ducks.odt'
Archive: Nibbled to death by ducks.odt
Length Date Time Name
39 07-15-2022 22:18 mimetype
12713 07-15-2022 22:18 Thumbnails/thumbnail.png
915001 07-15-2022 22:18 Pictures/10000201000004500000026DBF6636B0B9352031.png
10879 07-15-2022 22:18 content.xml
20048 07-15-2022 22:18 styles.xml
9576 07-15-2022 22:18 settings.xml
757 07-15-2022 22:18 meta.xml
260 07-15-2022 22:18 manifest.rdf
0 07-15-2022 22:18 Configurations2/accelerator/
0 07-15-2022 22:18 Configurations2/toolpanel/
0 07-15-2022 22:18 Configurations2/statusbar/
0 07-15-2022 22:18 Configurations2/progressbar/
0 07-15-2022 22:18 Configurations2/toolbar/
0 07-15-2022 22:18 Configurations2/popupmenu/
0 07-15-2022 22:18 Configurations2/floater/
0 07-15-2022 22:18 Configurations2/menubar/
1192 07-15-2022 22:18 META-INF/manifest.xml
970465 17 files

I want to highlight a few elements of the zip file structure:

  1. The mimetype file contains a single line that defines the ODF document. Programs that process ODT files, such as a word processor, can use this file to verify the MIME type of the document. For an ODT file, this should always be:
application/vnd.oasis.opendocument.text
  2. The META-INF directory has a single manifest.xml file in it. This file contains all the information about where to find other components of the ODT file. Any program that reads ODT files starts with this file to locate everything else. For example, the manifest.xml file for my ODT document contains this line that defines where to find the main content:
<manifest:file-entry manifest:full-path="content.xml" manifest:media-type="text/xml"/>
  3. The content.xml file contains the actual content of the document.

  4. My document includes a single screenshot, which is contained in the Pictures directory.

Extracting files from an ODT file

Because the ODT document is just a zip file with a specific structure to it, you can extract files from it. You can start by unzipping the entire ODT file, such as with this unzip command:

$ unzip -q 'Nibbled to death by ducks.odt' -d Nibbled

A colleague recently asked for a copy of the image that I included in my article. I was able to locate the exact location of any embedded image by looking in the META-INF/manifest.xml file. The grep command can display any lines that describe an image:

$ cd Nibbled
$ grep image META-INF/manifest.xml
<manifest:file-entry manifest:full-path="Thumbnails/thumbnail.png" manifest:media-type="image/png"/>
<manifest:file-entry manifest:full-path="Pictures/10000201000004500000026DBF6636B0B9352031.png" manifest:media-type=" image/png”/>

The image I'm looking for is saved in the Pictures folder. You can verify that by listing the contents of the directory:

$ ls -F
Configurations2/ manifest.rdf meta.xml Pictures/ styles.xml
content.xml META-INF/ mimetype settings.xml Thumbnails/

And here it is:

[Image: Jim Hall, CC BY-SA 4.0]

OpenDocument Format

OpenDocument Format (ODF) files are an open file format that can describe word processing files (ODT), spreadsheet files (ODS), presentations (ODP), and other file types. Because ODF files are based on open standards, you can use other tools to examine them and even extract data from them. You just need to know where to start. All ODF files start with the META-INF/manifest.xml file, which is the "root" or "bootstrap" file for the rest of the ODF file format. Once you know where to look, you can find the rest of the content.
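
As a quick sketch of that idea, you can combine unzip -p (print a member to standard output) with xmllint from libxml2 to pretty-print the document content without unpacking the archive at all:

$ unzip -p 'Nibbled to death by ducks.odt' content.xml | xmllint --format - | head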


[Image: Jonas Leupe on Unsplash]

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Level up your HTML document with CSS

Sat, 08/13/2022 - 15:00
Level up your HTML document with CSS Jim Hall Sat, 08/13/2022 - 03:00

When you write documentation, whether that's for an open source project or a technical writing project, you should have two goals: The document should be written well, and the document should be easy to read. The first is addressed by clear writing skills and technical editing. The second can be addressed with a few simple changes to an HTML document.

HyperText Markup Language, or HTML, is the backbone of the internet. Since the dawn of the "World Wide Web" in 1994, every web browser has used HTML to display documents and websites. And for almost as long, HTML has supported the stylesheet, a special addition to an HTML document that defines how the text should appear on the screen.

You can write project documentation in plain HTML, and that gets the job done. However, plain HTML styling may feel a little spartan. Instead, try adding a few simple styles to an HTML document to add a little pizzazz to documentation, and make your documents clearer and easier to read.

Defining an HTML document

Let's start with a plain HTML document and explore how to add styles to it. An empty HTML document contains the <!DOCTYPE html> definition at the top, followed by an <html> block to define the document itself. Within the <html> element, you also need to include a document header (<head>) that contains metadata about the document, such as its title. The contents of the document body go inside a <body> block within the parent <html> block.

You can define a blank page with this HTML code:


<!DOCTYPE html>
<html>
  <head>
    <title>This is a new document</title>
  </head>
  <body>

  </body>
</html>

In another article about Writing project documentation in HTML, I updated a Readme file from an open source board game from plain text to an HTML document, using a few basic HTML tags like <h1> and <h2> for headings and subheadings, <p> for paragraphs, and <b> and <i> for bold and italic text. Let's pick up where we left off with that article:

<!DOCTYPE html>
<html>
  <head>
    <title>Simple Senet</title>
  </head>
  <body>
    <h1>Simple Senet</h1>
    <h2>How to Play</h2>
   
    <p>The game will automatically "throw" the throwing sticks
    for you, and display the results in the lower-right corner
    of the screen.</p>
   
    <p>If the "throw" is zero, then you lose your turn.</p>
   
    <p>When it's your turn, the game will automatically select
    your first piece on the board. You may or may not be
    able to make a move with this piece, so select your piece
    to move, and hit <i>Space</i> (or <i>Enter</i>) to move
    it. You can select using several different methods:</p>
   
    <ul>
      <li><i>Up</i>/<i>down</i>/<i>left</i>/<i>right</i> to
      navigate to a specific square.</li>
   
      <li>Plus (<b>+</b>) or minus (<b>-</b>) to navigate "left"
      and "right" on the board. Note that this will automatically
      follow the "backwards S" shape of the board.</li>
   
      <li><em>Tab</em> to select your next piece on the
      board.</li>
    </ul>
   
    <p>To quit the game at any time, press <b>Q</b> (uppercase
    Q) or hit <i>Esc</i>, and the game will prompt if you want
    to forfeit the game.</p>
   
    <p>You win if you move all of your pieces off the board
    before your opponent. It takes a combination of luck and
    strategy!</p>
  </body>
</html>

This HTML document demonstrates a few common block and inline elements used by technical writers who write with HTML. Block elements define a rectangle around text. Paragraphs and headings are examples of block elements, because they extend from the left to the right edges of the document. For example, <p> encloses an invisible rectangle around a paragraph. In contrast, inline elements follow the text where they are used. If you use <b> on some text within a paragraph, only the text surrounded by <b> and </b> becomes bold.

You can apply direct styling to this document to change the font, colors, and other text styles, but a more efficient way to modify the document's appearance is to apply a stylesheet to the document itself. You can do that in the <head> element, with other metadata. You can reference a file for the style sheet, but for this example, use a <style> block to define a style sheet within the document. Here's the <head> with an empty stylesheet:


<!DOCTYPE html>
<html>
  <head>
    <title>Simple Senet</title>
    <style>

    </style>
  </head>
  <body>
    ...
  </body>
</html>

Defining styles

Since you're just starting to learn about stylesheets, let's demonstrate a basic style: background color. I like to start with the background color because it helps to demonstrate block and inline elements. Let's apply a somewhat gaudy stylesheet that sets a light blue background color for all <p> paragraphs, and a light green background for the <ul> unordered list. Use a yellow background for any bold text, and a pink background for any italics text.

You define these using styles in the <style> block of our HTML document. The stylesheet uses a different markup than an HTML document. The style syntax looks like element { style; style; style; ... } and uses curly braces to group together several text styles into a single definition.

        <style>
    p { background-color: lightblue; }
    ul { background-color: lightgreen; }

    b { background-color: yellow; }
    i { background-color: pink; }
        </style>

    Note that each style ends with a semicolon.

If you view this HTML document in a web browser, you can see how the <p> and <ul> block elements are filled in as rectangles, and the <b> and <i> inline elements highlight only the bold and italics text. This use of contrasting colors may not be pretty to look at, but I think you can see the block and inline elements:

[Image: Jim Hall, CC BY-SA 4.0]

Applying styles

You can use styles to make this Readme document easier to read. Since you're just starting to learn about styles, stick to a few simple style elements; a combined sketch follows this list:

      • background-color to set the background color
      • color to set the text color
      • font-family to use a different text font
      • margin-top to add space above an element
      • margin-bottom to add space below an element
      • text-align to change how the text is displayed, such as to the left, to the right, or centered
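
Here's a minimal sketch combining all six properties in one stylesheet. The values are placeholder assumptions for illustration, not the ones this article settles on below:

body { font-family: Arial, sans-serif; color: black; background-color: white; }
h1 { text-align: center; }
li { margin-top: 1em; margin-bottom: 1em; }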

      Let's start over with your stylesheet and apply these new styles to your document. To begin, use a more pleasing font for your document. If your HTML document does not specify a font, the web browser picks one for you. Depending on how the browser is set up, this could be a serif font, like the font used in my screenshot, or a sans-serif font. Serif fonts have a small stroke added to each letter, which makes these fonts much easier to read in print. Sans-serif fonts lack this extra stroke, which makes text appear sharper on a computer display. Common serif fonts include Garamond or Times New Roman. Popular sans-serif fonts include Roboto and Arial.

      For example, to set the document body font to Roboto, use this style:

      body { font-family: Roboto; }

      By setting a font, you assume the person viewing your document also has that font installed. Some fonts have become so common they are considered de facto "Web safe" fonts. These include sans-serif fonts like Arial and serif fonts such as Times New Roman. Roboto is a newer font and may not be available everywhere. So instead of listing just one font, web designers usually put one or more "backup" fonts. You can add these alternative fonts by separating them with a comma. For example, if the user doesn't have the Roboto font on their system, you can instead use Arial for the text body by using this style definition:

      body { font-family: Roboto, Arial; }

All web browsers define a default serif and sans-serif font that you can reference with those names. Users can change which fonts they prefer for serif and sans-serif, so these defaults aren't likely to be the same for everyone, but including serif or sans-serif in a font list is usually a good idea. By adding that fallback, at least the user gets some approximation of how you want the HTML document to appear:

      body { font-family: Roboto, Arial, sans-serif; }

If your font name is more than one word, you have to put quotes around it. The stylesheet accepts either single quotes or double quotes here. Define a few serif fonts for the heading and subheading, including Times New Roman:

      h1 { font-family: "Times New Roman", Garamond, serif; }
      h2 { font-family: "Times New Roman", Garamond, serif; }

      Note that the h1 heading and h2 subheading use exactly the same font definition. If you want to avoid the extra typing, you can use a stylesheet shortcut to use the same style definition for both h1 and h2:

      h1, h2 { font-family: "Times New Roman", Garamond, serif; }

      When writing documentation, many technical writers prefer to center the main title on the page. You can use text-align on a block element, such as the h1 heading, to center just the title:

      h1 { text-align: center; }

      To help bold and italics text to stand out, put them in a slightly different color. For certain documents, I might use dark blue for bold text, and dark green for italics text. These are pretty close to black, but with just enough subtle difference that the color grabs the reader's attention.

      b { color: darkblue; }
      i { color: darkgreen; }

      Finally, I prefer to add extra spacing around my list elements, to make these easier to read. If each list item was only a few words, the extra space might not matter. But the middle item  in my example text is quite long and wraps to a second line. The extra space helps the reader see each item in this list more clearly. You can use the margin style to add space above and below a block element:

      li { margin-top: 10px; margin-bottom: 10px; }

      This style defines a distance, which I've indicated here as 10px (ten pixels) above and below each list element. You can use several different measures for distance. Ten pixels is literally the space of ten pixels on your screen, whether that's a desktop monitor, a laptop display, or a phone or tablet screen.

Assuming you really just want to add an extra blank line between the list elements, you can also use em for the measurement. An em is an old typesetting term that is exactly the width of a capital M for left and right spacing, or the height of a capital M for vertical spacing. So you can instead write the margin style using 1em:

      li { margin-top: 1em; margin-bottom: 1em; }

      The complete list of styles in your HTML document looks like this:


      <!DOCTYPE html>
      <html>
        <head>
          <title>Simple Senet</title>
          <style>
      body { font-family: Roboto, Arial, sans-serif; }
      h1, h2 { font-family: "Times New Roman", Garamond, serif; }
      h1 { text-align: center; }
      b { color: darkblue; }
      i { color: darkgreen; }
      li { margin-top: 1em; margin-bottom: 1em; }
          </style>
        </head>
        <body>
          <h1>Simple Senet</h1>
          <h2>How to Play</h2>
         
          <p>The game will automatically "throw" the throwing sticks
          for you, and display the results in the lower-right corner
          of the screen.</p>
         
          <p>If the "throw" is zero, then you lose your turn.</p>
         
          <p>When it's your turn, the game will automatically select
          your first piece on the board. You may or may not be
          able to make a move with this piece, so select your piece
          to move, and hit <i>Space</i> (or <i>Enter</i>) to move
          it. You can select using several different methods:</p>
         
          <ul>
            <li><i>Up</i>/<i>down</i>/<i>left</i>/<i>right</i> to
            navigate to a specific square.</li>
         
            <li>Plus (<b>+</b>) or minus (<b>-</b>) to navigate "left"
            and "right" on the board. Note that this will automatically
            follow the "backwards S" shape of the board.</li>
         
            <li><em>Tab</em> to select your next piece on the
            board.</li>
          </ul>
         
          <p>To quit the game at any time, press <b>Q</b> (uppercase
          Q) or hit <i>Esc</i>, and the game will prompt if you want
          to forfeit the game.</p>
         
          <p>You win if you move all of your pieces off the board
          before your opponent. It takes a combination of luck and
          strategy!</p>
        </body>
      </html>

      When viewed on a web browser, you see your Readme document in a sans-serif font, with serif fonts for the heading and subheading. The page title is centered. The bold and italics text use a slightly different color that calls the reader's attention without being distracting. Finally, your list items have extra space around them, making each item easier to read.

[Image: Jim Hall, CC BY-SA 4.0]

      This is a simple introduction to using styles in technical writing. Having mastered the basics, you might be interested in Mozilla's HTML Guide. This includes some great beginner's tutorials so you can learn how to create your own web pages.

      For more information on how CSS styling works, I recommend Mozilla's CSS Guide.

      Use CSS to bring style to your HTML project documentation.

[Image: Opensource.com]

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Writing project documentation in HTML

Fri, 08/12/2022 - 15:00
Writing project documentation in HTML Jim Hall Fri, 08/12/2022 - 03:00

Documentation is an important part of any technical project. Good documentation tells the end user how to run the program, how to use it, or how to compile it. For many projects, plain text documentation is the standard. After all, every system can display plain text files.

However, plain text is limiting. Plain text files lack formatting elements like italics text, bold text, and titles. To add these elements, we can leverage HTML. HyperText Markup Language (HTML) is the markup language used in all web browsers. And with a little extra effort, you can use HTML to write project documentation that can be read by everyone.

HTML uses a series of tags enclosed in angle brackets to control how different parts of a document should be displayed. These tags define elements in an HTML document, such as document headings, paragraphs, italics text, bold text, and other kinds of text. Almost every tag comes in a pair: an opening tag, like <p> to start a paragraph, and a closing tag to end the element, such as </p> to end a paragraph. When using these tags, remember this rule: if you open a tag, you need to close it. Not closing a tag properly can result in the web browser displaying the document incorrectly.

Some tags define a block within the HTML document, while others are inline. For more information about block and inline elements, read my other article about a gentle introduction to HTML.
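
As a quick illustration (a one-line sketch invented for this point, not taken from the Readme below), a paragraph is a block element, while the bold text inside it is inline:

<p>Only <b>these words</b> are bold; the whole paragraph is the block.</p>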

Start an empty document

Begin by creating a boilerplate empty HTML document. Every HTML document should provide a document type declaration. Use the single <!DOCTYPE html> tag on the first line of the HTML file to define an HTML document. The HTML standard also requires that pages wrap the document text in two block elements: <html> to define the HTML document, and <body> to define the body text. HTML doesn't require indenting each new code block, but I add it anyway so you can see that <body> is actually "inside" the <html> block:


<!DOCTYPE html>
<html>
  <body>
 
  </body>
</html>

HTML documents also need a <head> block before the <body> that provides extra information called metadata about the page. The only required metadata is the title of the document, defined by the <title> element. An empty document might look like this:


<!DOCTYPE html>
<html>
  <head>
    <title>Title of the document</title>
  </head>
  <body>
 
  </body>
</html>

Add the text

Let's exercise some HTML knowledge by adapting an existing plain text "Readme" file to HTML. For this example, I'm using part of the documentation about how to play an open source board game, called Simple Senet:

HOW TO PLAY SIMPLE SENET

The game will automatically "throw" the throwing sticks for you, and
display the results in the lower-right corner of the screen.

If the "throw" is zero, then you lose your turn.

When it's your turn, the game will automatically select your first
piece on the board. You may or may not be able to make a move with
this piece, so select your piece to move, and hit Space (or Enter) to
move it. You can select using several different methods:

-  Up/down/left/right to navigate to a specific square.

-  Plus (+) or minus (-) to navigate "left" and "right" on the
   board. Note that this will automatically follow the "backwards S"
   shape of the board.

-  Tab to select your next piece on the board.

To quit the game at any time, press Q (uppercase Q) or hit Esc, and
the game will prompt if you want to forfeit the game.

You win if you move all of your pieces off the board before your
opponent. It takes a combination of luck and strategy!

Start by adding this Readme text into your empty HTML file. The main content of an HTML page is the <body>, so that's where you put the text:


<!DOCTYPE html>
<html>
  <head>
    <title>Title of the document</title>
  </head>
  <body>
    HOW TO PLAY SIMPLE SENET
   
    The game will automatically "throw" the throwing sticks for you, and
    display the results in the lower-right corner of the screen.
   
    If the "throw" is zero, then you lose your turn.
   
    When it's your turn, the game will automatically select your first
    piece on the board. You may or may not be able to make a move with
    this piece, so select your piece to move, and hit Space (or Enter) to
    move it. You can select using several different methods:
   
    - Up/down/left/right to navigate to a specific square.
   
    - Plus (+) or minus (-) to navigate "left" and "right" on the
      board. Note that this will automatically follow the "backwards S"
      shape of the board.
   
    - Tab to select your next piece on the board.
   
    To quit the game at any time, press Q (uppercase Q) or hit Esc, and
    the game will prompt if you want to forfeit the game.
   
    You win if you move all of your pieces off the board before your
    opponent. It takes a combination of luck and strategy!
  </body>
</html>

Without further changes, this HTML document looks completely wrong when you view it in a web browser. That's because HTML, like most markup systems, collects words from the input file and fills paragraphs in the output. Because you have not yet added other markup, a web browser displays the text in a single paragraph:

[Image: Jim Hall, CC BY-SA 4.0]

Body paragraphs

Your first step in updating this Readme file to HTML is to mark every paragraph so the web browser can display it properly. The tag to define a paragraph is <p>. While not everything in this file is actually a paragraph, start by wrapping everything in <p> and </p> tags:


<!DOCTYPE html>
<html>
  <head>
    <title>Title of the document</title>
  </head>
  <body>
    <p>HOW TO PLAY SIMPLE SENET</p>
   
    <p>The game will automatically "throw" the throwing sticks for you, and
    display the results in the lower-right corner of the screen.</p>
   
    <p>If the "throw" is zero, then you lose your turn.</p>
   
    <p>When it's your turn, the game will automatically select your first
    piece on the board. You may or may not be able to make a move with
    this piece, so select your piece to move, and hit Space (or Enter) to
    move it. You can select using several different methods:</p>
   
    <p>- Up/down/left/right to navigate to a specific square.</p>
   
    <p>- Plus (+) or minus (-) to navigate "left" and "right" on the
         board. Note that this will automatically follow the "backwards S"
         shape of the board.</p>
   
    <p>- Tab to select your next piece on the board.</p>
   
    <p>To quit the game at any time, press Q (uppercase Q) or hit Esc, and
    the game will prompt if you want to forfeit the game.</p>
   
    <p>You win if you move all of your pieces off the board before your
    opponent. It takes a combination of luck and strategy!</p>
  </body>
</html>

This makes the Readme look more like a document you want to read. When you view the new document in a web browser, every paragraph starts on a new line, with some extra space above and below. The paragraph is the most common example of a block element.

(Image: Jim Hall, CC BY-SA 4.0)
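To see the difference, compare a block element with an inline one (a quick illustration, not part of the Readme; the <b> tag is covered below):

<p>A paragraph is a block element, so it always starts on a new line.</p>
<p>An inline element like <b>bold text</b> flows along inside the block around it.</p>

Block elements stack vertically with space between them; inline elements only style text within a block.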

Headings and subheadings

The first line in your content is your document's title, so you should make this into a heading. HTML provides six levels of headings, from <h1> to <h6>. In most documents, you might use <h1> to define the title of the document, and <h2> for major subsections. Make this change in your sample Readme document. Use the name of the program ("Simple Senet") as the main section title, and "How to Play" as a subsection in the document.

Note that in this example, I've also updated the <title> in the document metadata to use the same title as the heading. This doesn't actually change how browsers display the document, but it is good practice:


<html>
  <head>
    <title>Simple Senet</title>
  </head>
  <body>
    <h1>Simple Senet</h1>
    <h2>How to Play</h2>
    ...
  </body>
</html>

By adding these section headings, you've made the document easier to read:

(Image: Jim Hall, CC BY-SA 4.0)
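The sample document only needs two levels, but all six are available. A quick illustration of the full range (not part of the sample Readme):

<h1>Document title</h1>
<h2>Major section</h2>
<h3>Subsection</h3>
<h4>Sub-subsection</h4>
<h5>Minor heading</h5>
<h6>Smallest heading</h6>

Each level renders progressively smaller, which gives readers a visual outline of the document's structure.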

Ordered and unordered lists

Your document includes a list of different ways to navigate the board game. Because this document started out as a plain text file, each item in the list starts with a hyphen. But you can use HTML to define these three paragraphs as list items.

HTML supports two kinds of lists: ordered and unordered. An ordered list <ol> is a numbered series, which you might use to define a sequence of steps. An unordered list <ul> defines a list of items that may or may not be related, but are generally not done in order. Both lists use list items <li> for entries within the list.

Update the Readme document to use an ordered list instead of paragraphs:

<ol>
  <li>Up/down/left/right to navigate to a specific square.</li>

  <li>Plus (+) or minus (-) to navigate "left" and "right" on the
      board. Note that this will automatically follow the "backwards S"
      shape of the board.</li>

  <li>Tab to select your next piece on the board.</li>
</ol>

This presents the three options in a numbered list:

(Image: Jim Hall, CC BY-SA 4.0)

However, these three items aren't really a sequence of steps, but different options to move the selection in the Simple Senet game. So instead of an ordered list, we want to use an unordered list. This requires updating the <ol> to <ul> (and the closing </ol> to </ul>) in the document:

<ul>
  <li>Up/down/left/right to navigate to a specific square.</li>

  <li>Plus (+) or minus (-) to navigate "left" and "right" on the
      board. Note that this will automatically follow the "backwards S"
      shape of the board.</li>

  <li>Tab to select your next piece on the board.</li>
</ul>

The unordered list uses bullets for each list item, because the entries are not part of a sequence:

(Image: Jim Hall, CC BY-SA 4.0)

Bold and italics

You can highlight certain information in the document by applying bold and italic styles. These are very common text styles in technical writing. You might use bold to highlight important information, or italics to emphasize key phrases and new terms.

The bold tag was originally defined as <b>, but newer versions of the HTML standard prefer the <strong> tag to indicate strong importance, such as key steps in a set of instructions. Both tags are valid, but they are semantically slightly different: <b> now means "bring attention to."

Similarly, the original HTML standard used <i> for italic text. Later versions of HTML instead prefer <em> to add emphasis to parts of the text; <i> now identifies idiomatic text or technical terms.
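As an illustration of the semantic difference (these sentences are not part of the Readme), all four tags are valid HTML, and browsers typically render <strong> like <b> and <em> like <i>:

<p><strong>Warning:</strong> unplugging the device now may corrupt your save file.</p>
<p>Press <b>Q</b> to quit.</p>
<p>You <em>must</em> select a piece before you can move it.</p>
<p>The phrase <i>festina lente</i> is a Latin idiom.</p>

The first two look identical in bold and the last two look identical in italics; the tags differ only in the meaning they convey to software such as screen readers.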

For this example, use bold to identify the single-letter keypresses, and italics to indicate special keys on a keyboard like Enter and Space. For simplicity, use <b> and <i> tags here (but you could use <strong> and <em> tags instead to get the same effect):


<html>
  <head>
    <title>Simple Senet</title>
  </head>
  <body>
    <h1>Simple Senet</h1>
    <h2>How to Play</h2>

    <p>The game will automatically "throw" the throwing sticks for you, and
    display the results in the lower-right corner of the screen.</p>

    <p>If the "throw" is zero, then you lose your turn.</p>

    <p>When it's your turn, the game will automatically select your first
    piece on the board. You may or may not be able to make a move with
    this piece, so select your piece to move, and hit <i>Space</i> (or <i>Enter</i>) to
    move it. You can select using several different methods:</p>

    <ul>
      <li><i>Up</i>/<i>down</i>/<i>left</i>/<i>right</i> to navigate to a specific square.</li>

      <li>Plus (<b>+</b>) or minus (<b>-</b>) to navigate "left" and "right" on the
          board. Note that this will automatically follow the "backwards S"
          shape of the board.</li>

      <li><i>Tab</i> to select your next piece on the board.</li>
    </ul>

    <p>To quit the game at any time, press <b>Q</b> (uppercase Q) or hit <i>Esc</i>, and
    the game will prompt if you want to forfeit the game.</p>

    <p>You win if you move all of your pieces off the board before your
    opponent. It takes a combination of luck and strategy!</p>
  </body>
</html>

These extra styles help special items stand out in the text:

(Image: Jim Hall, CC BY-SA 4.0)

The point of writing documentation is for users to understand how to use the software, so every open source project should make the effort to write documentation in a way that is easy to read. With a few basic HTML tags, you can write documentation that presents the information more clearly to your users.

For more information on using HTML to write documentation, check out the complete HyperText Markup Language reference at MDN, the Mozilla Developer Network, hosted by the Mozilla web project.

HyperText has more features than plain text to level up your documentation.

(Image: Internet Archive Book Images, modified by Opensource.com, CC BY-SA 4.0)

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How I get students excited about math with Python and Raspberry Pi

Fri, 08/12/2022 - 15:00
Don Watkins

I am teaching Python using Raspberry Pi 400 computers in a local library for the second year in a row. A couple of this year's students have not experienced success with mathematics in their school. One asked me if she needed algebra to attend our class. I told her I had failed algebra, geometry, and trigonometry in school. She was relieved. Another student rushed in the door a bit late because she was taking geometry in summer school after failing to pass the course during the school year. I shared my own story of learned helplessness and my distress at the thought of math tests. My own bad experiences impacted my high school and early college years.

I like Python, and in particular, the turtle module, because of an experience in graduate school in the early 1990s. The exercise used Apple's Logo to teach students geometry, leading to an epiphany that completely changed my attitude toward mathematics.

This week's class has four eighth-grade students. Two have math backgrounds, and two have math phobias. On the first day of class in the Olean Public Library, we started with a brief explanation of the Raspberry Pi 400 and how to connect each of those computers to old VGA monitors that came from storage. I gave the students a brief overview and tour of the ports, peripheral mouse, and microHDMI cable we would use. We proceeded, step by step, to assemble the parts of the Raspberry Pi 400 units and connect them to the monitors. We powered up the units, and I assisted the students as they properly configured their computers for the United States and the Eastern Time Zone. We connected to the library's wireless network and were ready to begin.


I gave the students a brief overview of all the software on their computers. Then I introduced them to the Mu-Editor that comes pre-installed on their computers. We reviewed the Read-Evaluate-Print-Loop (REPL). I explained that while we could execute code in the REPL, they would find it easier to write the code in the Mu-Editor and then save their code with a .py extension to ensure that the system could execute it properly. I explained how our code needed comments and how to add and save them properly.

# first program
print("Hello World")

Then I introduced them to the turtle module. We talked about the elements of a square: a square is made up of four equal sides and contains four 90-degree angles. We wrote the following code together, saved our work, and executed it.

# First Turtle Square
import turtle
turtle.forward(200)
turtle.right(90)
turtle.forward(200)
turtle.right(90)
turtle.forward(200)
turtle.right(90)
turtle.forward(200)
turtle.right(90)

I explained how to change the code and add features like a different pen color and a different color background.

# First Turtle Square
import turtle
turtle.pencolor("blue")
turtle.bgcolor("yellow")
turtle.forward(200)
turtle.right(90)
turtle.forward(200)
turtle.right(90)
turtle.forward(200)
turtle.right(90)
turtle.forward(200)
turtle.right(90)

I introduced them to turtle.shape() to change the cursor from the default arrow to something that looks more like a turtle. I encouraged them to save each time and to iterate. They had fun sharing their results.
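The change itself is a single call; here's a minimal sketch (a reconstruction, not the exact code from class):

# change the drawing cursor from the default arrow to a turtle
import turtle
turtle.shape("turtle")
turtle.forward(100)   # the turtle-shaped cursor crawls forward
turtle.done()         # keep the window open until it's closed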

In our second session, I demonstrated how to use a for loop to draw a square and how to clean up the code by importing the turtle module under a single-letter alias. Then I ran the code.

# For Loop
import turtle as t
for x in range(4):
    t.forward(200)
    t.right(91)

One of the students who had experienced mathematics problems in the past said, "That square looks crooked."

I said, "You're right. What's wrong with it?"

She let me know that my t.right should be 90 and not 91. I corrected the error and reran the code. It looked perfect, and she was proud to have experienced some success with mathematics.
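With her fix, the loop closes the square exactly; a reconstruction of the corrected code:

# For Loop, corrected: four 90-degree turns close the square
import turtle as t
for x in range(4):
    t.forward(200)
    t.right(90)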

We changed our code, and I introduced them to new possibilities within the turtle module, including speed, pen color, and background color. They enjoyed it when I demonstrated how we could easily create a square spiral using the following code:

# square spiral
import turtle as t
t.speed(0)
t.bgcolor("blue")
t.pencolor("yellow")
for x in range(200):
    t.forward(x)
    t.right(91)

We changed our code again to make circle spirals. The students were leaning into the learning, and our ninety-minute class came to an end. One of the students is in summer school re-taking the geometry course she failed during the school year, and each day she runs a block and a half to make it to our class, where she excels at constructing geometric shapes. She has a great eye for detail and regularly helps the other students identify errors in their code. Her watchful eye inspired me to discuss with the group the value of open source software and the power of many eyes on the code.

(Image: Don Watkins, CC BY-SA 4.0)

# circle spiral
import turtle as t
t.speed(0)
t.bgcolor("blue")
t.pencolor("yellow")
for x in range(100):
    t.circle(x*2)
    t.right(91)
t.setpos(60,75)
for x in range(100):
    t.circle(x)
    t.right(91)

(Image: Don Watkins, CC BY-SA 4.0)

Using Python with open source hardware and software to facilitate mathematics instruction amazes me. With a little ingenuity, it's possible to reimagine mathematics education. Each student who participated in our class will receive the Raspberry Pi 400 they worked on to take home and use. They'll have to find a display to connect to, but for a bit over one hundred dollars per unit, we are investing in their future. You can have the same effect in your community if you are willing to donate your time. Public libraries are great spaces for extracurricular activities, and some of the resources I have used as the basis for my classes come from library books. One of those books is Teach Your Kids to Code; another is Python for Kids. A Simple Turtle Tutorial by Al Sweigart is available online. We used Raspberry Pi 400 kits with VGA monitors and microHDMI-to-VGA adapters. You could easily adapt this instruction for refurbished Linux, Windows, or macOS laptops.

Reimagine math with the help of these open source technologies.

(Image: Opensource.com)

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
