Open-source News

How I gave my old laptop new life with the Linux Xfce desktop

opensource.com - Wed, 06/08/2022 - 15:00
Jim Hall - Wed, 06/08/2022 - 03:00

A few weeks ago, I needed to give a conference presentation that included a brief demonstration of a small app I'd written for Linux. I needed a Linux laptop to bring to the conference, so I dug out an old laptop and installed Linux on it. I used the Fedora 36 Xfce spin, which worked great.

The laptop I used was purchased in 2012. The 1.70 GHz CPU, 4 GB memory, and 128 GB drive may seem small compared to my current desktop machine, but Linux and the Xfce desktop gave this old machine new life.

Xfce desktop for Linux

The Xfce desktop is a lightweight desktop that provides a sleek, modern look. The interface is familiar, with a taskbar or “panel” across the top to launch applications, change between virtual desktops, or access notifications in the system tray. The quick access dock at the bottom of the screen lets you launch frequently used applications like the terminal, file manager, and web browser.

(Image: Jim Hall, CC BY-SA 4.0)

To start a new application, click the Applications button in the upper-left corner. This opens a menu of application launchers, with the most frequently used applications like the terminal and file manager at the top. Other applications are organized into groups, so you can navigate to the one you want.

(Image: Jim Hall, CC BY-SA 4.0)

Managing files

Xfce's file manager is called Thunar, and it does a great job of organizing my files. I like that Thunar can also make connections to remote systems. At home, I use a Raspberry Pi using SSH as a personal file server. Thunar lets me open an SSH file transfer window so I can copy files between my laptop and the Raspberry Pi.
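
For example, Thunar accepts an SSH address directly in its location bar (Ctrl+L typically opens it). The user and hostname below are placeholders for your own Raspberry Pi:

sftp://pi@raspberrypi.local/home/pi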

(Image: Jim Hall, CC BY-SA 4.0)

Another way to access files and folders is via the quick access dock at the bottom of the screen. Click the folder icon to bring up a menu of common actions such as opening a folder in a terminal window, creating a new folder, or navigating into a specific folder.

(Image: Jim Hall, CC BY-SA 4.0)

Other applications

I loved exploring the other applications provided in Xfce. The Mousepad text editor looks like a simple text editor, but it contains useful features for editing more than just plain text. Mousepad recognizes many file types that programmers and other power users may appreciate. Check out this partial list of programming languages available in the Document menu.

(Image: Jim Hall, CC BY-SA 4.0)

If you prefer a different look and feel, you can adjust the interface options such as font, color scheme, and line numbers using the View menu.

(Image: Jim Hall, CC BY-SA 4.0)

The disk utility lets you manage storage devices. While I didn't need to modify my system disk, the disk tool is a great way to initialize or reformat a USB flash drive. I found the interface very easy to use.

(Image: Jim Hall, CC BY-SA 4.0)


I was also impressed with the Geany integrated development environment. I was a bit surprised that a full IDE ran so well on an older system. Geany advertises itself as a “powerful, stable and lightweight programmer's text editor that provides tons of useful features without bogging down your workflow.” And that's exactly what Geany provided.

I started a simple “hello world” program to test out Geany, and was pleased to see that the IDE popped up syntax help as I typed each function name. The pop-up message is unobtrusive and provides just enough syntax information where I need it. While the printf function is easy for me to remember, I always forget the order of options to other functions like fputs and realloc. This is where I need the pop-up syntax help.
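
The article doesn't include the program itself, but a minimal C sketch along these lines is enough to see the pop-up syntax help appear as you type each call:

#include <stdio.h>

int main(void)
{
    /* Geany shows the parameter hints as each function name is typed */
    printf("hello world\n");
    fputs("hello again\n", stdout);
    return 0;
}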

(Image: Jim Hall, CC BY-SA 4.0)

Explore the menus in Xfce to find other applications to make your work easier. You'll find apps to play music, access the terminal, or browse the web.

While I installed Linux to use my laptop for a few demos at a conference, I found Linux and the Xfce desktop made this old laptop feel quite snappy. The system performed so well that when the conference was over, I decided to keep the laptop around as a second machine.

I just love working in Xfce and its included apps. Despite the low overhead and minimal approach, I don't feel underpowered; I can do everything I need to do. If you have an older machine sitting idle, try installing Linux and Xfce to give that old hardware new life.

(Image: Jonas Leupe on Unsplash)


Using Ansible to automate software installation on my Mac

opensource.com - Wed, 06/08/2022 - 15:00
Servesha Dudhgaonkar - Wed, 06/08/2022 - 03:00

On most systems, there are several ways to install software. Which one you use depends on the source of the application you're installing. Some software comes as a downloadable wizard to walk you through an install process, while others are files you just download and run immediately.

On macOS, a whole library of open source applications is available through command-line package managers like Homebrew and MacPorts. The advantage of installing software from the command line is that you can automate it, and my favorite tool for automation is Ansible. Combining Ansible with Homebrew is an efficient and reproducible way to install your favorite open source applications.

This article demonstrates how to install one of my must-have writing tools, Asciidoctor, on macOS using Ansible. Asciidoctor is an open source text processor, meaning that it takes text written in a specific format (in this case, Asciidoc) and transforms it into other popular formats (such as HTML, PDF, and so on) for publishing. Ansible is an open source, agentless, and easy-to-understand automation tool. By using Ansible, you can simplify and automate your day-to-day tasks.
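
As a quick illustration of what that conversion looks like in practice, a single command does the job; the filename here is a placeholder, and PDF output requires the separate asciidoctor-pdf tool:

$ asciidoctor article.adoc          # produces article.html
$ asciidoctor-pdf article.adoc      # produces article.pdf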

Note: While this example uses macOS, the information applies to all kinds of open source software on all platforms compatible with Ansible (including Linux, Windows, Mac, and BSD).

Installing Ansible

You can install Ansible using pip, the Python package manager. First, install pip:

$ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
$ python ./get-pip.py

Next, install Ansible using pip:

$ python -m pip install --user ansible

Installing Ansible using Homebrew

Alternately, you can install Ansible using the Homebrew package manager. If you've already installed Ansible with pip, skip this step because you've already achieved the same result!

$ brew install ansible

Configuring Ansible

To set up Ansible, you first must create an inventory file specifying which computer or computers you want your Ansible script (called a playbook) to operate on.

Create an inventory file in a terminal or using your favorite text editor. In a terminal, type the following, replacing your-host-name with the name of your computer:

$ cat << EOF >> inventory
[localhost]
your-host-name
EOF

If you don't know your computer's hostname, you can get it with the hostname command. Alternately, go to the Apple menu, open System Preferences, then click Sharing. Your computer's hostname appears beneath the computer name at the top of the Sharing preference pane.

Installing Asciidoctor using Ansible

In this example, I'm only installing applications on the computer I'm working on, which is also known by the term localhost. To start, create a playbook.yml file and copy the following content:

- name: Install software
  hosts: localhost
  become: false
  vars:
    Brew_packages:
      - asciidoctor
    install_homebrew_if_missing: false

In the first YAML sequence, you name the playbook (Install software), provide the target (localhost), and confirm that administrative privileges are not required. You also create two variables that you can use later in the playbook: Brew_packages and install_homebrew_if_missing.

Next, create a YAML mapping called pre_tasks, containing the logic to ensure that Homebrew itself is installed on the computer where you're running the playbook. Normally, Ansible can verify whether an application is installed or not, but when that application is the package manager that helps Ansible make that determination in the first place, you have to do it manually: 

  pre_tasks:
    - name: Ensuring Homebrew Is Installed
      stat:
        path: /usr/local/bin/brew
      register: homebrew_check

    - name: Fail If Homebrew Is Not Installed and install_homebrew_if_missing Is False
      fail:
        msg: Homebrew is missing, install from http://brew.sh
      when:
        - not homebrew_check.stat.exists
        - not install_homebrew_if_missing

    - name: Installing Homebrew
      shell: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
      when:
        - not homebrew_check.stat.exists
        - install_homebrew_if_missing

Finally, create a YAML mapping called tasks containing a call to the Homebrew module (it's a built-in module from Ansible) to install Asciidoctor in the event that it's not already present:

  tasks:
    - name: Install Asciidoctor
      homebrew:
        name: asciidoctor
        state: present

Running an Ansible playbook

You run an Ansible playbook using the ansible-playbook command:

$ ansible-playbook -i inventory playbook.yml

The -i option specifies the inventory file you created when setting up Ansible. You can optionally add -vvvv to direct Ansible to be extra verbose when running the playbook, which can be useful when troubleshooting.

After the playbook has run, verify that Ansible has successfully installed Asciidoctor on your host:

$ asciidoctor -v
Asciidoctor X.Y.Z https://asciidoctor.org
 Runtime Environment (ruby 2.6.8p205 (2021-07-07 revision 67951)...

Adapt for automation

You can add more software to the Brew_packages variable in this article's example playbook. As long as there's a Homebrew package available, Ansible installs it. Ansible only takes action when required, so you can leave all the packages you install in the playbook, effectively building a manifest of all the packages you have come to expect on your computer.
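
The example task earlier installs asciidoctor by name; to have the play install everything listed in Brew_packages, you could replace that task with a loop along these lines (a sketch, not part of the original playbook):

    - name: Install all packages listed in Brew_packages
      homebrew:
        name: "{{ item }}"
        state: present
      loop: "{{ Brew_packages }}"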

Should you find yourself on a different computer, perhaps because you're at work or you've purchased a new one, you can quickly install all the same applications in one go. Better still, should you switch to Linux, the Ansible playbook is still valid either by using Homebrew for Linux or by making a few simple updates to switch to a different package manager.
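
On a Linux host without Homebrew, for instance, one small update is to swap the homebrew task for Ansible's generic package module. This is only a sketch, and it assumes your distribution ships Asciidoctor under the same package name:

    - name: Install Asciidoctor with the system package manager
      package:
        name: asciidoctor
        state: present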


(Image: freephotocc via Pixabay, CC0)


How to Configure SSH Passwordless Authentication on RHEL 9

Tecmint - Wed, 06/08/2022 - 13:02

Short for Secure Shell, SSH is a secure network protocol that encrypts traffic between two endpoints. It allows users to securely connect and/or transfer files over a network. SSH is mostly used by network

The post How to Configure SSH Passwordless Authentication on RHEL 9 first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

GNOME's Mutter Variable Rate Refresh Support Closer To Being Merged

Phoronix - Wed, 06/08/2022 - 03:00
Variable rate refresh (VRR / FreeSync / Adaptive-Sync) support for GNOME's Mutter compositor is closer to being merged. The native back-end support for VRR that has been in development the past two years is no longer considered a work-in-progress and it's believed there are no longer any blocking issues that would prevent this code from landing...

RT Patches Updated For Linux 5.19-rc1 - Real-Time Inches Closer To The Finish Line

Phoronix - Wed, 06/08/2022 - 02:15
The real-time (RT) patch series still hasn't been mainlined but the patch delta is slowly winding down with each new kernel version. Out today is the re-based RT patch series for the recently minted Linux 5.19-rc1 with some of the prior real-time patches having been upstreamed this merge window and other patches re-based to work with the newest kernel code...

Juju and Charmed Operators Accelerating FINOS Open Source Projects Adoption

The Linux Foundation - Wed, 06/08/2022 - 00:03

The article by Srikrishna ‘Kris’ Sharma with Canonical originally appeared in the FINOS Project’s Community Blog. It is another example of enterprises open sourcing their code so that they can “collectively solve common problems so they can separately innovate and differentiate on top of the common baseline.” Read more about Why Do Enterprises Use and Contribute to Open Source Software.

Orchestrating Legend with Juju

Goldman Sachs open sourced its internally developed Legend data management platform and contributed it to FINOS in October 2020. Legend provides an end-to-end data platform experience covering the full data lifecycle. It encompasses a suite of data management and governance components known as the Legend Platform. Legend enables breaking down silos and building a critical bridge over the historical divide between business and engineering, allowing companies to build data-driven applications and insightful business intelligence dashboards.

Accelerate FINOS Open Source Project Adoption

Ease and speed of deployment enable innovation and lower the barrier to entry for open source consumption and contribution. Engineering experience is about leveraging software ops automation to demonstrate the impact of an open source project to the community, and a great engineering experience is often what drives wider adoption of and contribution to a project.

Over the last few months, Canonical has been working closely with FINOS and its community members to offer a consistent way to deploy and manage enterprise applications using Juju and Charmed Operators, with a focus on Day 2 operations. The idea is to provide a software ops automation framework and toolkit that enables the DevOps teams at financial institutions to realise the benefits of rapid deployment/testing and application management using a platform that is 100% open source, vendor-agnostic and hybrid-multi-cloud ready.

What are Juju and Charmed Operators?

Charmed Operator

A charmed operator (also known, more simply, as a “charm”) encapsulates a single application and all the code and know-how it takes to operate it, such as how to combine and work with other related applications or how to upgrade it. Charms are programmed to understand a single application, its operations, and its potential to integrate with other applications. A charm defines and enables the channels by which applications connect. Hundreds of charms are available at charmhub.io.

Juju Operator Lifecycle Manager (OLM) is a hybrid-cloud application management and orchestration system for installation and day 2 operations. It helps deploy, configure, scale, integrate, maintain, and manage Kubernetes native, container-native and VM-native applications—and the relations between them.

Juju allows anyone to deploy and operate charmed operators (charms) in any cloud–including Kubernetes, VMs and Metal. Charms encapsulate the application plus deployment and operations knowledge into one single reusable artefact. Juju manages the lifecycle of applications and infrastructure stacks from cloud to the edge. Juju is cloud-vendor agnostic and hybrid-multi-cloud by nature: it can manage the lifecycle of applications in public clouds, private clouds, or on bare metal. Once bootstrapped, Juju will offer the same deployment and operations experience regardless of the cloud vendor.

The Legend Charm Bundle

In the spirit of providing an enterprise-grade automated deployment and maintenance experience to FINOS members, Canonical created a charmed bundle for Legend and contributed it to FINOS.

The Legend Charm Bundle provides a simple, efficient and enterprise-ready way to deploy and orchestrate a Legend instance in various environments across the CI/CD pipeline, from developer’s workstation to production environment. The bundle includes several Charmed Operators, one for each Legend component.

Why a Legend Charm Bundle?
  1. A simple way to evaluate Legend
    One can spin up a Legend environment from scratch with a single command, juju deploy finos-legend-bundle (see the sketch after this list)
  2. An intuitive approach (for banks and other financial institutions) to spin up production environments
  3. Provides orchestration capabilities, not only deployment scripting
  4. Easily plugs into Legend release lifecycle and simplifies Legend FINOS instance maintenance
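
For a rough idea of that evaluation flow on a local Kubernetes, the sketch below assumes MicroK8s is already installed and registered with Juju; consult the linked documentation for the exact, current steps:

$ juju bootstrap microk8s         # create a Juju controller on the local Kubernetes
$ juju add-model legend           # create a model to hold the Legend applications
$ juju deploy finos-legend-bundle # deploy the full Legend charm bundle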

The Legend charm documentation resides in the finos/legend-integration-juju GitHub repository, which also links to the related repositories for the bundle's multiple components.

Detailed instructions are available for local and cloud installations if you would like to spin up your own Legend instance within a few minutes and start using Legend either locally or on AWS EKS.

The post Juju and Charmed Operators Accelerating FINOS Open Source Projects Adoption appeared first on Linux Foundation.

How Garbage Collection works inside a Java Virtual Machine

opensource.com - Tue, 06/07/2022 - 21:54
Jayashree Huttanagoudar - Tue, 06/07/2022 - 09:54

Automatic Garbage Collection (GC) is one of the most important features that makes Java so popular. This article explains why GC is essential, covering automatic and generational GC, how the Java Virtual Machine (JVM) divides heap memory, and how GC works inside the JVM.

Java memory allocation

Java memory is divided into four sections:

  1. Heap: The memory for object instances is allocated in the heap. Declaring an object doesn't allocate heap memory by itself; instead, a reference for that object is created on the stack, and heap memory is allocated when the object is actually instantiated.
  2. Stack: This section holds method frames, local variables, and object references.
  3. Code: Bytecode resides in this section.
  4. Static: Static data and methods are placed in this section.
What is automatic Garbage Collection (GC)?

Automatic GC is a process in which the referenced and unreferenced objects in heap memory are identified, and then unreferenced objects are considered for deletion. The term referenced objects means some part of your program is using those objects. Unreferenced objects are not currently being used by the program.

Programming languages like C and C++ require manual allocation and deallocation of memory. This is handled automatically by GC in Java, although you can request a collection manually with a System.gc() call in your code.
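
As a small, hypothetical illustration (not from the original article), the snippet below drops a reference and then requests a collection; note that System.gc() is only a hint, which the JVM is free to ignore:

public class GcDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[10 * 1024 * 1024]; // a referenced object on the heap
        buffer = null;                              // the object is now unreferenced
        System.gc();                                // request (not force) a GC cycle
    }
}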

The fundamental steps of GC are:

1. Mark used and unused objects

In this step, the used and unused objects are marked separately. This is a time-consuming process, as all objects in memory must be scanned to determine whether they're in use or not.

(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)

2. Sweep/Delete objects

There are two variations of sweep and delete.

Simple deletion: Only unreferenced objects are removed. However, the memory allocation for new objects becomes difficult as the free space is scattered across available memory.

(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)

Deletion with compaction: Apart from deleting unreferenced objects, referenced objects are compacted. Memory allocation for new objects is relatively easy, and memory allocation performance is improved.

(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)

What is generational Garbage Collection (GC), and why is it needed?

As seen in the sweep and delete model, scanning all objects to reclaim memory from unused ones becomes expensive as the number of objects keeps growing. Empirical studies show that most objects created during program execution are short-lived.

The existence of short-lived objects can be used to improve the performance of GC. For that, the JVM divides the memory into different generations. Next, it categorizes the objects based on these memory generations and performs the GC accordingly. This approach is known as generational GC.

Heap memory generations and the generational Garbage Collection (GC) process

To improve the performance of the GC mark and sweep steps, the JVM divides the heap memory into three generations:

  • Young Generation
  • Old Generation
  • Permanent Generation
(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)

Here is a description of each generation and its key features.

Young Generation

All created objects are present here. The young generation is further divided into:

  1. Eden: All newly created objects are allocated memory here.
  2. Survivor space (S0 and S1): After surviving one GC, the live objects are moved to one of these survivor spaces.
(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)

The generational GC that happens in the Young Generation is known as Minor GC. All Minor GC cycles are "Stop the World" events that pause the application's other threads until the cycle completes, which is why Minor GC cycles are designed to be fast.

To summarize: Eden space has all newly created objects. Once Eden is full, the first Minor GC cycle is triggered.

(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)

Minor GC: The live and dead objects are marked during this cycle. The live objects are moved to survivor space S0. Once all live objects are moved to S0, the unreferenced objects are deleted.

(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)

The age of objects in S0 is 1 because they have survived one Minor GC. Now Eden and S1 are empty.

Once cleared, the Eden space is again filled with new live objects. As time elapses, some objects in Eden and S0 become dead (unreferenced), and Eden's space is full again, triggering the Minor GC.

(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)

This time the dead and live objects in Eden and S0 are marked. The live objects from Eden are moved to S1 with their age set to 1. The live objects from S0 are also moved to S1, with their age incremented to 2 (because they've now survived two Minor GCs). At this point, S0 and Eden are empty. After every Minor GC, Eden and one of the survivor spaces are empty.

The same cycle of creating new objects in Eden continues. When the next Minor GC occurs, Eden and S1 are cleared by moving the aged objects to S0. The survivor spaces switch after every Minor GC.

(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)

This process continues until the age of one of the surviving objects reaches a certain threshold, at which point it is moved to the Old Generation through a process called promotion.

Further, the -Xmn flag sets the Young Generation size.

Old Generation (Tenured Generation)

This generation contains the objects that have survived several Minor GCs and aged to reach an expected threshold.

(Image: Jayashree Huttanagoudar, CC BY-SA 4.0)

In the example diagram above, the threshold is 8. The GC in the Old Generation is known as a Major GC. Use the flags -Xms and -Xmx to set the initial and maximum size of the heap memory.
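
A hypothetical invocation combining these sizing flags with basic GC logging might look like the following; MyApp is a placeholder class name, and -XX:MaxTenuringThreshold tunes the promotion age described earlier:

$ java -Xms512m -Xmx2g -Xmn256m -XX:MaxTenuringThreshold=8 -verbose:gc MyApp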

Permanent Generation

The Permanent Generation stores metadata about the classes and methods of the application, the Java SE libraries, and the JVM itself. The JVM populates this data at runtime based on which classes and methods are in use. Once the JVM finds classes that are no longer used, they are unloaded or collected, making space for used classes.

Use the flags -XX:PermSize and -XX:MaxPermSize to set the initial and maximum size of the Permanent Generation.

Metaspace

Metaspace was introduced in Java 8 as a replacement for the Permanent Generation. Its advantage is that it resizes automatically by default, which helps avoid the OutOfMemoryError failures a fixed-size PermGen could cause.

Wrap up

This article discusses the various memory generations of the JVM and how they support automatic generational Garbage Collection (GC). Understanding how Java handles memory isn't always necessary, but it can help you envision how the JVM deals with your variables and class instances. This understanding allows you to plan and troubleshoot your code and comprehend potential limitations inherent in a specific platform.


(Image: Pixabay, CC0)


Red Hat Enterprise Linux 9.0 Performing Well, Great Benefit To Newer Intel Xeon & AMD EPYC Servers

Phoronix - Tue, 06/07/2022 - 21:00
Last month RHEL 9.0 reached GA as the newest major update to Red Hat Enterprise Linux. Since then I've been trying out RHEL 9.0 on a few servers. To little surprise, especially for latest-generation Intel Xeon Scalable and AMD EPYC servers, RHEL 9.0 is offering significant uplift compared to the existing RHEL8 series. Here are some Red Hat Enterprise Linux 9.0 benchmarks comparing the performance to RHEL 8.6.
