opensource.com


Building the metaverse with open source

Tue, 06/14/2022 - 15:00
Liv Erickson

The word metaverse has been thrown around a lot these days. Whether you believe it's a reality or not, the adoption of the term has signaled a significant shift in the way people think about the future of online interactions. With today's technological advancements and an increase in geographically distributed social circles, the idea of seamlessly connected virtual worlds as part of a metaverse has never felt more appealing.

Virtual worlds enable a wide range of scenarios and bring a rich, vibrant array of experiences to life. Students can explore history by stepping into another time period, embodying historic figures, and interacting with buildings constructed centuries ago. Coworkers can gather for coffee chats, regardless of where in the world they're working. Musicians and artists can interact with fans from around the world in small or large digital venues. Conferences can reach new audiences, and friends can connect to explore interactive spaces.

When we built virtual world platforms (the predecessors to today's metaverse applications) in the past, there was only limited access to powerful graphics hardware, scalable servers, and high-bandwidth network infrastructure. However, recent advancements in cloud computing and hardware optimization have allowed virtual worlds to reach new audiences. The complexity of what we're able to simulate has increased significantly.

Today, there are several companies investing in new online virtual worlds and technologies. To me, this is indicative of a fundamental shift in the way people interact with one another, create, and consume content online.

Some tenets associated with the concept of the metaverse and virtual worlds are familiar through the traditional web, including identity systems, communication protocols, social networks, and online economies. Other elements, though, are newer. The metaverse is already starting to see a proliferation of 3D environments (often created and shared by users), the use of digital bodies, or "avatars", and the incorporation of virtual and augmented reality technology.

Building virtual worlds the open source way

With this shift in computing paradigms, there's an opportunity to drive forward open standards and projects encouraging the development of decentralized, distributed, and interoperable virtual worlds. This can begin at the hardware level with projects like Razer's Open Source Virtual Reality (OSVR) schematics encouraging experimentation for headset development, and go all the way up the stack. At the device layer, the Khronos Group's OpenXR standard has been widely adopted by headset manufacturers, which allows applications and engines to target a single API, with device-specific capabilities supported through extensions.

This allows creators and developers of virtual worlds to focus on mechanics and content. While the techniques used to build 3D experiences aren't new, the increased interest in metaverse applications has resulted in new tools and engines for creating immersive experiences. Although there are many libraries and engines that have differences in how they run their virtual worlds, most virtual worlds share the same underlying development concepts.

At the core of a virtual world is the 3D graphics and simulation engine (such as Babylon.js and the WebGL libraries it interacts with). This code is responsible for managing the game state of the world, so that interactions that change the state are shared among the visitors of the space, and for drawing updates to the environment on screen. Game simulation state can include objects in the world and avatar movement, so that when one user moves through a space, everyone else sees it happening in real time. The rendering engine uses the perspective of a virtual camera to draw a 2D image on the screen, mapped to what a user is looking at in digital space.


A virtual world is made up of 2D and 3D objects that represent a virtual location. These experiences can vary, ranging from small rooms to entire planets, limited only by the creator's imagination. Inside the virtual world, objects have transforms that place them at a particular spot in the world's 3D coordinate system. The transform represents the object's position, rotation, and scale within the digital environment. These objects, which can have mesh geometry created in a 3D modeling program, materials, and textures assigned to them, can trigger other events in the world, play sounds, or interact with the user.

Once a virtual world has been created, the application renders content to the screen using a virtual camera. Like a camera in the real world, a camera inside a game engine has a viewport and settings that change the way a frame is captured. For immersive experiences, the camera draws many updates every second (up to 120 frames per second for some high-end virtual reality headsets) to reflect the way you're moving within the space. Virtual reality experiences also require that the camera draw twice: once for each eye, slightly offset by your interpupillary distance (the distance between the centers of your pupils).
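To make the ideas of a transform and a virtual camera concrete, here is a minimal sketch using Babylon.js, one of the open source libraries mentioned in this article. The canvas ID, object names, and numbers are arbitrary placeholders, not part of any particular platform:

import { Engine, Scene, FreeCamera, HemisphericLight, MeshBuilder, Vector3 } from "@babylonjs/core";

// Attach the engine to a canvas element on the page
const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new Engine(canvas, true);
const scene = new Scene(engine);

// A virtual camera: its position and target determine the 2D image drawn each frame
const camera = new FreeCamera("camera", new Vector3(0, 2, -10), scene);
camera.setTarget(Vector3.Zero());
camera.attachControl(canvas, true);

// A light so the object is visible
new HemisphericLight("light", new Vector3(0, 1, 0), scene);

// An object with a transform: position, rotation, and scale in the world's coordinate system
const box = MeshBuilder.CreateBox("box", { size: 1 }, scene);
box.position = new Vector3(1, 0.5, 2);   // where the box sits
box.rotation.y = Math.PI / 4;            // rotated 45 degrees around the vertical axis
box.scaling = new Vector3(2, 1, 1);      // stretched along the x axis

// Render loop: the engine redraws the scene many times per second
engine.runRenderLoop(() => scene.render());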

If the camera and rendering components of developing a virtual world sound complex, don't fret. Most libraries and frameworks for authoring immersive content provide these capabilities so you can focus on the content and interactivity. Open source game engines, such as Open 3D Engine (O3DE) and Godot, offer these rendering capabilities and many other tools as built-in features. With open source engines, developers have the additional flexibility of extending or changing core systems, which allows for more control over the end experience.

Other key characteristics that make up the metaverse include users taking on digital bodies (often referred to as avatars), user-generated content that's created and shared by users of the platform, voice and text chat, and the ability to navigate between differently themed worlds and rooms.

Approaches to building the metaverse

Before choosing a development environment for building the metaverse, you should consider which tenets are most critical for the kinds of experiences and worlds you want your users to have. The first choice you're faced with is whether to target a native experience or the browser. Each has different implications for how a virtual world unfolds.

A proprietary metaverse necessarily limits how virtual worlds can connect to one another. Open source and browser-based platforms have emerged that build on web standards and work through the Khronos Group and the W3C to ensure interoperability and content portability.

Web applications such as Mozilla Hubs and Element's Third Room build on existing web protocols to create open source options for building browser-based virtual world applications. These experiences, which link together 3D spaces embedded into web pages, use open source technologies including three.js, Babylon.js, and A-Frame for content authoring. They also use open source real-time communication protocols for voice and synchronized avatar movement.

Open access

As with all emerging technologies, it's critical to consider the use case and the impact on the humans who use it. Immersive virtual and augmented reality devices have unprecedented capabilities to capture, process, store, and utilize data about an individual, including their physical movement patterns, cognitive state, and attention. Additionally, virtual worlds themselves significantly amplify both the benefits and the problems of today's social media, and they require careful implementation of trust and safety systems, moderation techniques, and appropriate access permissions to ensure that users have a positive experience when they venture into these spaces.

As the web evolves and encompasses immersive content and spatial computing devices, it's important to think critically and carefully about the experiences being created, and interoperability across different applications. Ensuring that these virtual worlds are open, accessible, and safe to all is paramount. The prospect of the metaverse is an exciting one, and one that can only be realized through collaborative open source software movements.

Ensuring that virtual worlds are open, accessible, and safe to all is paramount to a successful metaverse.

[Image: Unsplash.com, Creative Commons Zero]


Share your Linux terminal with tmate

Tue, 06/14/2022 - 15:00
Sumantro Mukherjee

As a member of the Fedora Linux QA team, I sometimes find myself executing a bunch of commands that I want to broadcast to other developers. If you've ever used a terminal multiplexer like tmux or GNU Screen, you might think that's a relatively easy task. But not all of the people I want to see my demonstration are connecting to my terminal session from a laptop or desktop. Some might have casually opened it from their phone browser—which they can readily do because I use tmate.

Linux terminal sharing with tmate

Watching someone else work in a Linux terminal is very educational. You can learn new commands, new workflows, or new ways to debug and automate. But it can be difficult to capture what you're seeing so you can try it yourself later. You might resort to taking screenshots or a screen recording of a shared terminal session so you can type out each command later. The only other option is for the person demonstrating the commands to record the session using a tool like Asciinema or script and scriptreplay.

But with tmate, a user can share a terminal either in read-only mode or over SSH. Both the SSH and the read-only session can be accessed through a terminal or as an HTML webpage.

I use read-only mode when I'm onboarding people to the Fedora QA team because I need to run commands and show the output. With tmate, folks can keep notes by copying and pasting from their browser into a text editor.

Linux tmate in action

On Linux, you can install tmate with your package manager. For instance, on Fedora:

$ sudo dnf install tmate

On Debian and similar distributions:

$ sudo apt install tmate

On macOS, you can install it using Homebrew or MacPorts. If you need instructions for other Linux distributions, refer to the install guide.

[Image: Sumantro Mukherjee, CC BY-SA 4.0]

Once installed, start tmate:

$ tmate

When tmate launches, links are generated to provide access to your terminal session over HTTP and SSH. Each protocol features a read-only option as well as a reverse SSH session.
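If you clear your screen or need those links again later, tmate can reprint them from inside the session:

$ tmate show-messages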

Here's what a web session looks like:

[Image: Sumantro Mukherjee, CC BY-SA 4.0]

Tmate's web console is HTML5, so a user can copy the entire screen and paste it into a terminal to run the same commands.

Keeping a session alive

You may wonder what happens if you accidentally close your terminal. You may also wonder about sharing your terminal with a different console application. After all, tmate is a multiplexer, so it should be able to keep sessions alive, detach and re-attach to a session, and so on.

And of course, that's exactly what tmate can do. If you've ever used tmux, this is probably pretty familiar.

$ tmate -F -n web new-session vi  console

This command creates a new session (with a window named web) running Vi, and the -F option ensures that the session re-spawns even when closed.

[Image: Sumantro Mukherjee, CC BY-SA 4.0]

Social multiplexing

Tmate gives you the freedom of tmux or GNU Screen plus the ability to share your sessions with others. It's a valuable tool for teaching other users how to use a terminal, demonstrating the function of a new command, or debugging unexpected behavior. It's open source, so give it a try!

Tmate expands your options for session sharing with the Linux terminal.

[Image: iradaturrahmat via Pixabay, CC0]


Use Terraform to manage TrueNAS

Mon, 06/13/2022 - 15:00
Alan Formy-Duval

Sometimes combining different open source projects can have benefits. The synergy of using Terraform with TrueNAS is a perfect example.

TrueNAS is a FreeBSD-based operating system that provides network-attached storage (NAS) and network services. One of its main strengths is leveraging the ZFS file system, which is known for enterprise-level reliability and fault tolerance. Terraform is a provisioning and deployment tool embodying the concept of infrastructure as code.

TrueNAS

TrueNAS has a very nice web user interface (UI) for its management and an application programming interface (API). Terraform can be integrated with the API to provide configuration management of your NAS, as I'll demonstrate below.

To begin, I used Virtual Machine Manager to configure a virtual machine and then installed the latest version, TrueNAS 13.0. The only necessary input was to enter the root password. Once it reboots, the main menu appears. You will also see the HTTP management address. You can access this address from your local web browser.

[Image: Alan Formy-Duval, CC BY-SA 4.0]

Terraform

Terraform needs to be installed where it can access the TrueNAS management URL. I am taking advantage of tfenv, a tool for managing Terraform versions.

$ tfenv list-remote
$ tfenv install 1.2.0
$ tfenv use 1.2.0
$ terraform -version
Terraform v1.2.0
on linux_amd64

Next, create a working directory, such as ~/code/terraform/truenas, to contain the configuration files associated with your TrueNAS instance.

$ mkdir ~/code/terraform/truenas
$ cd ~/code/terraform/truenas

Create the initial terraform configuration file and add the necessary directives to define the TrueNAS provider.

$ vi main.tf

The provider will look like this, where the address and API key for your TrueNAS instance will need to be correctly specified.

$ cat main.tf


terraform {
  required_providers {
    truenas = {
      source = "dariusbakunas/truenas"
      version = "0.9.0"
    }
  }
}

provider "truenas" {
  api_key = "1-61pQpp3WyfYwg4dHToTHcOt7QQzVrMtZnkJAe9mmA0Z2w5MJsDB7Bng5ofZ3bbyn"
  base_url = "http://192.168.122.139/api/v2.0"
}

The TrueNAS API key is created in the Web UI. Log in and click the small gear in the upper right-hand corner.

[Image: Alan Formy-Duval, CC BY-SA 4.0]

This UI section enables you to create the API key. Once generated, copy it to the main.tf file.
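Hardcoding the key in main.tf is fine for a quick test, but a safer pattern is to declare it as a Terraform input variable and pass the value in through the environment. This is standard Terraform syntax; the variable name here is my own choice:

variable "truenas_api_key" {
  type      = string
  sensitive = true
}

provider "truenas" {
  api_key  = var.truenas_api_key
  base_url = "http://192.168.122.139/api/v2.0"
}

Export TF_VAR_truenas_api_key in your shell before running Terraform, and the key stays out of version control.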

Initialize

In your TrueNAS Terraform directory, you have the main.tf file. The first step is to initialize using the command terraform init, which should generate the following result:

Initializing the backend...

Initializing provider plugins...
- Finding dariusbakunas/truenas versions matching "0.9.0"...
- Installing dariusbakunas/truenas v0.9.0...
- Installed dariusbakunas/truenas v0.9.0 (self-signed, key ID E44AF1CA58555E96)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

A successful initialization means you're ready to start adding resources. Any TrueNAS item, such as a storage pool, network file system (NFS) share, or cron job, is a resource.

Add a ZFS dataset

The following example resource directive defines a ZFS dataset. For my example, I will add it to the main.tf file.

resource "truenas_dataset" "pictures" {
  pool = "storage-pool"
  name = "pictures"
  comments = "Terraform created dataset for Pictures"
 }

Run the command terraform validate to check the configuration.

Success! The configuration is valid.

Running terraform plan will describe the actions that Terraform will perform. Now, add the new dataset with terraform apply.

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # truenas_dataset.pictures will be created
  + resource "truenas_dataset" "pictures" {
      + acl_mode             = (known after apply)
      + acl_type             = (known after apply)
      + atime                = (known after apply)
      + case_sensitivity     = (known after apply)
      + comments             = "Terraform created dataset for Pictures"
      + compression          = (known after apply)
      + copies               = (known after apply)
      + dataset_id           = (known after apply)
      + deduplication        = (known after apply)
      + encrypted            = (known after apply)
      + encryption_algorithm = (known after apply)
      + encryption_key       = (sensitive value)
      + exec                 = (known after apply)
      + generate_key         = (known after apply)
      + id                   = (known after apply)
      + managed_by           = (known after apply)
      + mount_point          = (known after apply)
      + name                 = "pictures"
      + pbkdf2iters          = (known after apply)
      + pool                 = "storage-pool"
      + quota_bytes          = (known after apply)
      + quota_critical       = (known after apply)
      + quota_warning        = (known after apply)
      + readonly             = (known after apply)
      + record_size          = (known after apply)
      + ref_quota_bytes      = (known after apply)
      + ref_quota_critical   = (known after apply)
      + ref_quota_warning    = (known after apply)
      + share_type           = (known after apply)
      + snap_dir             = (known after apply)
      + sync                 = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

Type yes to confirm and hit Enter.

truenas_dataset.pictures: Creating...
truenas_dataset.pictures: Creation complete after 0s [id=storage-pool/pictures]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

That's it. You can check for this new dataset in the TrueNAS Web UI.

[Image: Alan Formy-Duval, CC BY-SA 4.0]
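You can also confirm it from the command line. Terraform records what it created in its state, which you can inspect with the state subcommands; the list should include the new resource address:

$ terraform state list
truenas_dataset.pictures

$ terraform state show truenas_dataset.pictures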

Do more with TrueNAS and Terraform

The TrueNAS provider for Terraform allows you to manage many more aspects of your TrueNAS device. For instance, you could share this new dataset as an NFS or server message block (SMB) share. You can also create additional datasets, cron jobs, and zvols.
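As a rough sketch of where you might go next, an NFS share for the new dataset could look something like the following. I haven't verified this schema, so treat the resource name (truenas_share_nfs) and its attributes as assumptions to check against the provider's registry documentation before applying anything:

# Unverified sketch: confirm resource and attribute names in the
# dariusbakunas/truenas provider documentation before applying.
resource "truenas_share_nfs" "pictures" {
  paths   = ["/mnt/storage-pool/pictures"]
  comment = "Terraform managed NFS share for Pictures"
}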

Get more out of TrueNAS when you integrate Terraform for configuration management.

[Image: Opensource.com]


Handling maps in Groovy vs Java

Fri, 06/10/2022 - 15:00
Chris Hermansen

Java is a great programming language, but sometimes I want a Java-like language that's just a bit more flexible and compact. That's when I opt for Groovy.

In a recent article, I reviewed some of the differences between creating and initializing maps in Groovy and doing the same thing in Java. In brief, Groovy has a concise syntax for setting up maps and accessing map entries compared to the effort necessary in Java.

This article will delve into more differences in map handling between Groovy and Java. For that purpose, I will use the sample table of employees used for demoing the JavaScript DataTables library. To follow along, start by making sure you have recent versions of Groovy and Java installed on your computer.

Install Java and Groovy

Groovy is based on Java and requires a Java installation as well. A recent and/or decent version of Java and Groovy might already be in your Linux distribution's repositories, or you can download and install Groovy from the Apache Groovy website. A good option for Linux users is SDKMan, which can be used to get multiple versions of Java, Groovy, and many other related tools. For this article, I'm using SDK's releases of:

  • Java: version 11.0.12-open of OpenJDK 11
  • Groovy: version 3.0.8.
Back to the problem: maps

First, in my experience, maps and lists (or at least arrays) often end up in the same program. For example, processing an input file is very similar to passing over a list; often, I do that when I want to categorize data encountered in the input file (or list), storing some kind of value in lookup tables, which are just maps.

Second, Java 8 introduced the whole Streams functionality and lambdas (or anonymous functions). In my experience, converting input data (or lists) into maps often involves using Java Streams. Moreover, Java Streams are at their most flexible when dealing with streams of typed objects, providing grouping and accumulation facilities out of the box.

Employee list processing in Java

Here's a concrete example based on those fictitious employee records. Below is a Java program that defines an Employee class to hold the employee information, builds a list of Employee instances, and processes that list in a few different ways:

     1  import java.lang.*;
     2  import java.util.Arrays;
       
     3  import java.util.Locale;
     4  import java.time.format.DateTimeFormatter;
     5  import java.time.LocalDate;
     6  import java.time.format.DateTimeParseException;
     7  import java.text.NumberFormat;
     8  import java.text.ParseException;
       
     9  import java.util.stream.Collectors;
       
    10  public class Test31 {
       
    11      static public void main(String args[]) {
       
    12          var employeeList = Arrays.asList(
    13              new Employee("Tiger Nixon", "System Architect",
    14                  "Edinburgh", "5421", "2011/04/25", "$320,800"),
    15              new Employee("Garrett Winters", "Accountant",
    16                  "Tokyo", "8422", "2011/07/25", "$170,750"),
                                                        ...
   
    81              new Employee("Martena Mccray", "Post-Sales support",
    82                  "Edinburgh", "8240", "2011/03/09", "$324,050"),
    83              new Employee("Unity Butler", "Marketing Designer",
    84                  "San Francisco", "5384", "2009/12/09", "$85,675")
    85          );
       
    86          // calculate the average salary across the entire company
       
    87          var companyAvgSal = employeeList.
    88              stream().
    89              collect(Collectors.averagingDouble(Employee::getSalary));
    90          System.out.println("company avg salary = " + companyAvgSal);
       
    91          // calculate the average salary for each location,
    92          //     compare to the company average
       
    93          var locationAvgSal = employeeList.
    94              stream().
    95              collect(Collectors.groupingBy((Employee e) ->
    96                  e.getLocation(),
    97                      Collectors.averagingDouble(Employee::getSalary)));
    98          locationAvgSal.forEach((k,v) ->
    99              System.out.println(k + " avg salary = " + v +
   100                  "; diff from avg company salary = " +
   101                  (v - companyAvgSal)));
       
   102          // show the employees in Edinburgh approach #1
       
   103          System.out.print("employee(s) in Edinburgh (approach #1):");
   104          var employeesInEdinburgh = employeeList.
   105              stream().
   106              filter(e -> e.getLocation().equals("Edinburgh")).
   107              collect(Collectors.toList());
   108          employeesInEdinburgh.
   109              forEach(e ->
   110                  System.out.print(" " + e.getSurname() + "," +
   111                      e.getGivenName()));
   112          System.out.println();
       
       
   113          // group employees by location
       
   114          var employeesByLocation = employeeList.
   115              stream().
   116              collect(Collectors.groupingBy(Employee::getLocation));
       
   117          // show the employees in Edinburgh approach #2
       
   118          System.out.print("employee(s) in Edinburgh (approach #2):");
   119          employeesByLocation.get("Edinburgh").
   120              forEach(e ->
   121                  System.out.print(" " + e.getSurname() + "," +
   122                      e.getGivenName()));
   123          System.out.println();
       
   124      }
   125  }
       
   126  class Employee {
   127      private String surname;
   128      private String givenName;
   129      private String role;
   130      private String location;
   131      private int extension;
   132      private LocalDate hired;
   133      private double salary;
       
   134      public Employee(String fullName, String role, String location,
   135          String extension, String hired, String salary) {
   136          var nn = fullName.split(" ");
   137          if (nn.length > 1) {
   138              this.surname = nn[1];
   139              this.givenName = nn[0];
   140          } else {
   141              this.surname = nn[0];
   142              this.givenName = "";
   143          }
   144          this.role = role;
   145          this.location = location;
   146          try {
   147              this.extension = Integer.parseInt(extension);
   148          } catch (NumberFormatException nfe) {
   149              this.extension = 0;
   150          }
   151          try {
   152              this.hired = LocalDate.parse(hired,
   153                  DateTimeFormatter.ofPattern("yyyy/MM/dd"));
   154          } catch (DateTimeParseException dtpe) {
   155              this.hired = LocalDate.EPOCH;
   156          }
   157          try {
   158              this.salary = NumberFormat.getCurrencyInstance(Locale.US).
   159                  parse(salary).doubleValue();
   160          } catch (ParseException pe) {
   161              this.salary = 0d;
   162          }
   163      }
       
   164      public String getSurname() { return this.surname; }
   165      public String getGivenName() { return this.givenName; }
   166      public String getLocation() { return this.location; }
   167      public int getExtension() { return this.extension; }
   168      public LocalDate getHired() { return this.hired; }
   169      public double getSalary() { return this.salary; }
   170  }


Wow, that's a lot of code for a simple demo program! I'll go through it in chunks first.

Starting at the end, lines 126 through 170 define the Employee class used to store employee data. The most important thing to mention here is that the fields of the employee record are of different types, and in Java that generally leads to defining this type of class. You could make this code a bit more compact by using Project Lombok's @Data annotation to automatically generate the getters (and setters) for the Employee class. In more recent versions of Java, I can declare these sorts of things as a record rather than a class, since the whole point is to store data. Storing the data as a list of Employee instances facilitates the use of Java streams.
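As a rough illustration (not part of the listing above), a record declaration on Java 16 or later collapses the field, constructor-parameter, and getter boilerplate into a single line; the parsing logic from the constructor would move into a compact canonical constructor or a static factory method:

// Sketch only (Java 16+): a record holding the same fields as the Employee class.
// Accessors are generated as surname(), salary(), and so on, rather than getSurname().
record Employee(String surname, String givenName, String role,
        String location, int extension, java.time.LocalDate hired, double salary) { }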

Lines 12 through 85 create the list of Employee instances, so now you've already dealt with 119 of 170 lines.

There are nine lines of import statements up front. Interestingly, there are no map-related imports! This is partly because I'm using stream methods that yield maps as their results, and partly because I'm using the var keyword to declare variables, so the type is inferred by the compiler.

The interesting parts of the above code happen in lines 86 through 123.

In lines 87-90, I convert employeeList into a stream (line 88) and then use collect() to apply the Collectors.averagingDouble() method to the Employee::getSalary (line 89) method to calculate the average salary across the whole company. This is pure functional list processing; no maps are involved.

In lines 93-101, I convert employeeList into a stream again. I then use the Collectors.groupingBy() method to create a map whose keys are employee locations, returned by e.getLocation(), and whose values are the average salary for each location, returned by Collectors.averagingDouble() again applied to the Employee::getSalary method applied to each employee in the location subset, rather than the entire company. That is, the groupingBy() method creates subsets by location, which are then averaged. Lines 98-101 use forEach() to step through the map entries printing location, average salary, and the difference between the location averages and company average.

Now, suppose you wanted to look at just those employees located in Edinburgh. One way to accomplish this is shown in lines 103-112, where I use the stream filter() method to create a list of only those employees based in Edinburgh and the forEach() method to print their names. No maps here, either.

Another way to solve this problem is shown in lines 113-123. In this method, I create a map where each entry holds a list of employees by location. First, in lines 113-116, I use the groupingBy() method to produce the map I want with keys of employee locations whose values are sublists of employees at that location. Then, in lines 117-123, I use the forEach() method to print out the sublist of names of employees at the Edinburgh location.

When we compile and run the above, the output is:

company avg salary = 292082.5
San Francisco avg salary = 284703.125; diff from avg company salary = -7379.375
New York avg salary = 410158.3333333333; diff from avg company salary = 118075.83333333331
Singapore avg salary = 357650.0; diff from avg company salary = 65567.5
Tokyo avg salary = 206087.5; diff from avg company salary = -85995.0
London avg salary = 322476.25; diff from avg company salary = 30393.75
Edinburgh avg salary = 261940.7142857143; diff from avg company salary = -30141.78571428571
Sydney avg salary = 90500.0; diff from avg company salary = -201582.5
employee(s) in Edinburgh (approach #1): Nixon,Tiger Kelly,Cedric Frost,Sonya Flynn,Quinn Rios,Dai Joyce,Gavin Mccray,Martena
employee(s) in Edinburgh (approach #2): Nixon,Tiger Kelly,Cedric Frost,Sonya Flynn,Quinn Rios,Dai Joyce,Gavin Mccray,Martena

Employee list processing in Groovy

Groovy has always provided enhanced facilities for processing lists and maps, partly by extending the Java Collections library and partly by providing closures, which are somewhat like lambdas.

One outcome of this is that maps in Groovy can easily hold values of different types. As a result, you aren't forced to create the auxiliary Employee class; instead, you can just use a map. Let's examine a Groovy version of the same functionality:

     1  import java.util.Locale
     2  import java.time.format.DateTimeFormatter
     3  import java.time.LocalDate
     4  import java.time.format.DateTimeParseException
     5  import java.text.NumberFormat
     6  import java.text.ParseException
       
     7  def employeeList = [
     8      ["Tiger Nixon", "System Architect", "Edinburgh",
     9          "5421", "2011/04/25", "\$320,800"],
    10      ["Garrett Winters", "Accountant", "Tokyo",
    11          "8422", "2011/07/25", "\$170,750"],

                           ...

    76      ["Martena Mccray", "Post-Sales support", "Edinburgh",
    77          "8240", "2011/03/09", "\$324,050"],
    78      ["Unity Butler", "Marketing Designer", "San Francisco",
    79          "5384", "2009/12/09", "\$85,675"]
    80  ].collect { ef ->
    81      def surname, givenName, role, location, extension, hired, salary
    82      def nn = ef[0].split(" ")
    83      if (nn.length > 1) {
    84          surname = nn[1]
    85          givenName = nn[0]
    86      } else {
    87          surname = nn[0]
    88          givenName = ""
    89      }
    90      role = ef[1]
    91      location = ef[2]
    92      try {
    93          extension = Integer.parseInt(ef[3]);
    94      } catch (NumberFormatException nfe) {
    95          extension = 0;
    96      }
    97      try {
    98          hired = LocalDate.parse(ef[4],
    99              DateTimeFormatter.ofPattern("yyyy/MM/dd"));
   100      } catch (DateTimeParseException dtpe) {
   101          hired = LocalDate.EPOCH;
   102      }
   103      try {
   104          salary = NumberFormat.getCurrencyInstance(Locale.US).
   105              parse(ef[5]).doubleValue();
   106      } catch (ParseException pe) {
   107          salary = 0d;
   108      }
   109      [surname: surname, givenName: givenName, role: role,
   110          location: location, extension: extension, hired: hired, salary: salary]
   111  }
       
   112  // calculate the average salary across the entire company
       
   113  def companyAvgSal = employeeList.average { e -> e.salary }
   114  println "company avg salary = " + companyAvgSal
       
   115  // calculate the average salary for each location,
   116  //     compare to the company average
       
   117  def locationAvgSal = employeeList.groupBy { e ->
   118      e.location
   119  }.collectEntries { l, el ->
   120      [l, el.average { e -> e.salary }]
   121  }
   122  locationAvgSal.each { l, a ->
   123      println l + " avg salary = " + a +
   124          "; diff from avg company salary = " + (a - companyAvgSal)
   125  }
       
   126  // show the employees in Edinburgh approach #1
       
   127  print "employee(s) in Edinburgh (approach #1):"
   128  def employeesInEdinburgh = employeeList.findAll { e ->
   129      e.location == "Edinburgh"
   130  }
   131  employeesInEdinburgh.each { e ->
   132      print " " + e.surname + "," + e.givenName
   133  }
   134  println()
       
   135  // group employees by location
       
   136  def employeesByLocation = employeeList.groupBy { e ->
   137      e.location
   138  }
       
   139  // show the employees in Edinburgh approach #2
       
   140  print "employee(s) in Edinburgh (approach #2):"
   141  employeesByLocation["Edinburgh"].each { e ->
   142      print " " + e.surname + "," + e.givenName
   143  }
   144  println()

Because I am just writing a script here, I don't need to put the program body inside a method inside a class; Groovy handles that for us.

In lines 1-6, I still need to import the classes needed for the data parsing. Groovy imports quite a bit of useful stuff by default, including java.lang.* and java.util.*.

In lines 7-80, I use Groovy's syntactic support for lists as comma-separated values bracketed by [ and ]. In this case, there is a list of lists; each sublist is the employee data. Notice that you need the \ in front of the $ in the salary field. This is because a $ occurring inside a string surrounded by double quotes indicates the presence of a field whose value is to be interpolated into the string. An alternative would be to use single quotes.

But I don't want to work with a list of lists; I would rather have a list of maps analogous to the list of Employee class instances in the Java version. I use the Groovy Collection .collect() method in lines 80-111 to take apart each sublist of employee data and convert it into a map. The collect method takes a Groovy Closure argument; the syntax for creating a closure surrounds the code with { and } and lists the parameters as a, b, c -> in a manner similar to Java's lambdas. Most of the code looks quite similar to the constructor method in the Java Employee class, except that the values come from items in the sublist rather than from arguments to the constructor. However, the last two lines—

[surname: surname, givenName: givenName, role: role,

    location: location, extension: extension, hired: hired, salary: salary]

—create a map with keys surname, givenName, role, location, extension, hired, and salary. And, since this is the last line of the closure, the value returned to the caller is this map. No need for a return statement. No need to quote these key values; Groovy assumes they are strings. In fact, if they were variables, you would need to put them in parentheses to indicate the need to evaluate them. The value assigned to each key appears on its right side. Note that this is a map whose values are of different types: The first four are String, then int, LocalDate, and double. It would have been possible to define the sublists with elements of those different types, but I chose to take this approach because the data would often be read in as string values from a text file.
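Here's a tiny standalone example (not part of the listing above) showing the difference between a literal key and an evaluated key in a Groovy map:

def field = "surname"
def literal = [field: "Nixon"]      // the key is the literal string "field"
def evaluated = [(field): "Nixon"]  // parentheses evaluate the variable, so the key is "surname"
assert literal.containsKey("field")
assert evaluated.containsKey("surname")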

The interesting bits appear in lines 112-144. I've kept the same kind of processing steps as in the Java version.

In lines 112-114, I use the Groovy Collection average() method, which like collect() takes a Closure argument, here iterating over the list of employee maps and picking out the salary value. Note that using these methods on the Collection class means you don't have to learn how to transform lists, maps, or some other element to streams and then learn the stream methods to handle your calculations, as in Java. For those who like Java Streams, they are available in newer Groovy versions.

In lines 115-125, I calculate the average salary by location. First, in lines 117-119, I transform employeeList, which is a list of maps, into a map, using the Collection groupBy() method, whose keys are the location values and whose values are linked sublists of the employee maps pertaining to that location. Then I process those map entries with the collectEntries() method, using the average() method to compute the average salary for each location.

Note that collectEntries() passes each key (location) and value (employee sublist at that location) into the closure (the l, el -> string) and expects a two-element list of key (location) and value (average salary at that location) to be returned, converting those into map entries. Once I have the map of average salaries by location, locationAvgSal, I can print it out using the Collection each() method, which also takes a closure. When each() is applied to a map, it passes in the key (location) and value (average salary) in the same way as collectEntries().

In lines 126-134, I filter the employeeList to get a sublist of employeesInEdinburgh, using the findAll() method, which is analogous to the Java Streams filter() method. And again, I use the each() method to print out the sublist of employees in Edinburgh.

In lines 135-144, I take the alternative approach of grouping the employeeList into a map of employee sublists at each location, employeesByLocation. Then in lines 139-144, I select the employee sublist at Edinburgh, using the expression employeesByLocation["Edinburgh"] and the each() method to print out the sublist of employee names at that location.

Why I often prefer Groovy

Maybe it's just my familiarity with Groovy, built up over the last 12 years or so, but I feel more comfortable with the Groovy approach to enhancing Collection with all these methods that take a closure as an argument, rather than the Java approach of converting the list, map, or whatever is at hand to a stream and then using streams, lambdas, and data classes to handle the processing steps. I seem to spend a lot more time with the Java equivalents before I get something working.

I'm also a huge fan of strong static typing and parameterized types, such as Map<String, Employee>, as found in Java. However, on a day-to-day basis, I find that the more relaxed approach of lists and maps accommodating different types does a better job of supporting me in the real world of data without requiring a lot of extra code. Dynamic typing can definitely come back to bite the programmer. Still, even knowing that I can turn static type checking on in Groovy, I bet I haven't done so more than a handful of times. Maybe my appreciation for Groovy comes from my work, which usually involves bashing a bunch of data into shape and then analyzing it; I'm certainly not your average developer. So is Groovy really a more Pythonic Java? Food for thought.

I would love to see in both Java and Groovy a few more facilities like average() and averagingDouble(). Two-argument versions to produce weighted averages and statistical methods beyond averaging—like median, standard deviation, and so forth—would also be helpful. Tabnine offers interesting suggestions on implementing some of these.

Groovy resources

The Apache Groovy site has a lot of great documentation. Other good sources include the reference page for Groovy enhancements to the Java Collection class, the more tutorial-like introduction to working with collections, and Mr. Haki. The Baeldung site provides a lot of helpful how-tos in Java and Groovy. And a really great reason to learn Groovy is to learn Grails, a wonderfully productive full-stack web framework built on top of excellent components like Hibernate, Spring Boot, and Micronaut.

Discover the differences in map handling between Groovy and Java with this hands-on demo.

[Image: WOCinTech Chat. Modified by Opensource.com. CC BY-SA 4.0]


Edit PDFs on Linux with these open source tools

Thu, 06/09/2022 - 15:00
Michael Korotaev

Open source tools for reading and editing PDFs are often more secure and reliable alternatives to the applications on the first pages of "PDF editor" search results. There, you're likely to find proprietary applications with hidden limitations and fees, and with little information about their data protection and hosting policies. You can do better.

Here are five applications that can be installed on your Linux system (and others) or hosted on a server. Each is free and open source, with all the necessary features for creating, editing, and annotating PDF files.

LibreOffice

With the LibreOffice suite, your choice of application depends on the task at hand. While LibreOffice Writer, a word processor, lets you create PDF files by exporting from text formats like ODF and others, Draw is better for working with existing PDF files.

Draw is meant for creating and editing graphic documents, such as brochures, magazines, and posters. The toolset is therefore mainly focused on visual objects and layouts. For PDF editing, however, LibreOffice Draw offers tools for modifying and adding content when the file has editing attributes. If it doesn't, you can still add new text fields on top of the existing content layers and annotate or finish the document.

Draw and Writer are both bundled in a LibreOffice desktop suite available for installation on Linux systems, macOS, and Windows.

ONLYOFFICE Docs

ONLYOFFICE has been improving work with PDFs for a while and introduced a brand new reader for PDFs and eBooks in version 7.1 of ONLYOFFICE Docs.

The document editor lets you create PDF files from scratch, using DOCX as a base format that can then be converted to PDF or PDF/A. With built-in form-creation functionality, ONLYOFFICE Docs also makes it possible to build fillable document templates and export them as editable PDFs with fillable fields for different types of content: text, images, dates, and more.

In addition to recognizing text within PDFs to copy and extract it, ONLYOFFICE Docs can convert PDFs to DOCX, which allows you to continue using the documents in fully editable text formats. ONLYOFFICE also lets you secure the files with passwords, add watermarks, and use digital signatures available in the desktop version.

ONLYOFFICE Docs can be used as a web suite (on-premises or in the cloud) integrated into a document management system (DMS) or as a standalone desktop application. You can install the latter on Linux as a DEB or RPM package, an AppImage, a Flatpak, and several other formats.

PDF Arranger

PDF Arranger is a front-end application for the PikePDF library. It doesn't edit the content of a PDF the way LibreOffice and ONLYOFFICE do, but it's great for re-ordering pages, splitting a PDF into smaller documents, merging several PDFs into one, rotating or cropping pages, and so on. Its interface is intuitive and easy to use.

PDF Arranger is available for Linux and Windows.

Okular

Okular is a free open source viewer for documents developed by the KDE community. The app features very mature functionality and allows viewing PDFs, eBooks, images, and comics.

Okular has full or partial support for most popular PDF features and use cases, such as adding annotations and inline notes or inserting text boxes, shapes, and stamps. You can also add a digitally encrypted signature to your document so your readers can be sure of the document's source.

In addition to adding texts and images in PDFs, it's also possible to retrieve them from the document to copy and paste somewhere else. The Area Selection tool in Okular can identify the components within a selected area so you can extract them from the PDF independently of one another.

You can install Okular using your distribution's package manager or as a Flatpak.
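For example, installing from Flathub generally looks like this (the application ID should be org.kde.okular, but verify it with flatpak search okular):

$ flatpak install flathub org.kde.okular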

Xournal++

Xournal++ is handwriting journal software with annotation tools for PDF files.

Created to be notetaking software with enhanced handwriting features, it may not be the best option for working with text-based content and professional layouts. However, its ability to render graphics and support for stylus input in writing and drawing make it stand out as a niche productivity tool.

PDF annotation and sketching are made comfortable with layer management tools, customizable pen point settings, and support for stylus mappings. Xournal++ also has a text tool for adding text boxes and the ability to insert images.

Xournal++ has installation options for Linux systems (Ubuntu, Debian, Arch, SUSE), macOS, and Windows (10 and above).

Summary

If you're looking for a free and safe alternative to proprietary PDF viewing and editing software, it is not hard to find an open source option, whether for desktop or online use. Just keep in mind that the currently available solutions have their own advantages for different use cases, and there's no single tool that is equally great at all possible tasks.

These five solutions stand out for their functionality or usefulness for niche PDF tasks. For enterprise use and collaboration, I suggest ONLYOFFICE or LibreOffice Draw. PDF Arranger is a simple, lightweight tool for working with pages when you don't need to alter text. Okular offers great viewer features for multiple file types, and Xournal++ is the best choice if you want to sketch and take notes in your PDFs.

Open source alternatives to Adobe Acrobat have all the necessary features for creating, editing, and annotating PDFs.

[Image: Opensource.com]


A guide to container orchestration with Kubernetes

Thu, 06/09/2022 - 15:00
Seth Kenlon

The term orchestration is relatively new to the IT industry, and it still has nuance that eludes or confuses people who don't spend all day orchestrating. When I describe orchestration to someone, it usually sounds like I'm just describing automation. That's not quite right. In fact, I wrote a whole article differentiating automation and orchestration.

An easy way to think about it is that orchestration is just a form of automation. To understand how you can benefit from orchestration, it helps to understand what specifically it automates.

Understanding containers

A container is an image of a file system containing only what's required to run a specific task. Most people don't build containers from scratch, although reading about how it's done can be elucidating. Instead, it's more common to pull an existing image from a public container hub.

A container engine is an application that runs a container. When a container is run, it's launched with kernel mechanisms called namespaces and cgroups, which keep processes within the container separate from processes running outside it and constrain the resources they can use.

Run a container

You can run a container on your own Linux computer easily with Podman, Docker, or LXC. They all use similar commands. I recommend Podman, as it's daemonless, meaning a process doesn't have to be running all the time for a container to launch. With Podman, your container engine runs only when necessary. Assuming you have a container engine installed, you can run a container just by referring to a container image you know to exist on a public container hub.

For instance, to run an Nginx web server:

$ podman run -p 8080:80 nginx
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
[...]

Open a separate terminal to test it using curl:

$ curl --no-progress-meter localhost:8080 | html2text
# Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to
[nginx.org](http://nginx.org/).  
Commercial support is available at [nginx.com](http://nginx.com/).

_Thank you for using nginx._

As web server installs go, that's pretty easy.

Now imagine that the website you've just deployed gets an unexpected spike in traffic. You hadn't planned for that, and even though Nginx is a very resilient web server, everything has its limits. With enough simultaneous traffic, even Nginx can crash. Now what?

Sustaining containers

Containers are cheap. In other words, as you've just experienced, they're trivial to launch.

You can use systemd to make a container resilient, too, so that a container automatically relaunches even in the event of a crash. This is where using Podman comes in handy. Podman has a command to generate a systemd service file based on an existing container:

$ podman create --name mynginx -p 8080:80 nginx
$ podman generate systemd mynginx \
--restart-policy=always -t 5 -f -n

You can launch your container service as a regular user:

$ mkdir -p ~/.config/systemd/user
$ mv ./container-mynginx.service ~/.config/systemd/user/
$ systemctl enable --now --user container-mynginx.service
$ curl --head localhost:8080 | head -n1
HTTP/1.1 200 OK

Run pods of containers

Because containers are cheap, you can readily launch more than one container to meet the demand for your service. With two (or more) containers offering the same service, you increase the likelihood that better distribution of labor will successfully manage incoming requests.

You can group containers together in pods, which Podman (as its name suggests) can create:

$ systemctl stop --user container-mynginx
$ podman run -dt --pod new:mypod -p 8080:80 nginx
$ podman pod ps
POD ID     NAME   STATUS  CREATED  INFRA ID  # OF CONTAINERS
26424cc... mypod  Running 22m ago  e25b3...   2

This can also be automated using systemd:

$ podman generate systemd mypod \
--restart-policy=always -t 5 -f -n

Clusters of pods and containers

It's probably clear that containers offer diverse options for how you deploy networked applications and services, especially when you use the right tools to manage them. Both Podman and systemd integrate with containers very effectively, and they can help ensure that your containers are available when they're needed.

But you don't really want to sit in front of your servers all day and all night just so you can manually add containers to pods any time the whole internet decides to pay you a visit. Even if you could do that, containers are only as robust as the computer they run on. Eventually, containers running on a single server do exhaust that server's bandwidth and memory.

The solution is a Kubernetes cluster: lots of servers, with one acting as a "control plane" where all configuration is entered and many, many others acting as compute nodes to ensure your containers have all the resources they need. Kubernetes is a big project, and there are many other projects, like Terraform, Helm, and Ansible, that interface with Kubernetes to make common tasks scriptable and easy. It's an important topic for all levels of systems administrators, architects, and developers.
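Podman also helps bridge the gap between a single host and a cluster. It can export a pod as Kubernetes-style YAML and replay that definition, which is a convenient way to experiment with the format before you have a real cluster. A minimal sketch using the mypod pod created above:

$ podman generate kube mypod > mypod.yaml
$ podman play kube mypod.yaml   # replay it, for example on another machine running Podman

The generated file is an ordinary Kubernetes Pod manifest, so it's also a reasonable starting point for what you'd eventually hand to a cluster.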

To learn all about container orchestration with Kubernetes, download our free eBook: A guide to orchestration with Kubernetes. The guide teaches you how to set up a local virtual cluster, deploy an application, set up a graphical interface, understand the YAML files used to configure Kubernetes, and more.

To learn all about container orchestration with Kubernetes, download our new eBook.

[Image: William Kenlon, CC BY-SA 4.0]


How I gave my old laptop new life with the Linux Xfce desktop

Wed, 06/08/2022 - 15:00
Jim Hall

A few weeks ago, I needed to give a conference presentation that included a brief demonstration of a small app I'd written for Linux. I needed a Linux laptop to bring to the conference, so I dug out an old laptop and installed Linux on it. I used the Fedora 36 Xfce spin, which worked great.

The laptop I used was purchased in 2012. The 1.70 GHz CPU, 4 GB memory, and 128 GB drive may seem small compared to my current desktop machine, but Linux and the Xfce desktop gave this old machine new life.

Xfce desktop for Linux

The Xfce desktop is a lightweight desktop that provides a sleek, modern look. The interface is familiar, with a taskbar or “panel” across the top to launch applications, change between virtual desktops, or access notifications in the system tray. The quick access dock at the bottom of the screen lets you launch frequently used applications like the terminal, file manager, and web browser.

Image by:

(Jim Hall, CC BY-SA 4.0)

To start a new application, click the Applications button in the upper-left corner. This opens a menu of application launchers, with the most frequently used applications like the terminal and file manager at the top. Other applications are organized into groups, so you can navigate to the one you want.

Image by:

(Jim Hall, CC BY-SA 4.0)

Managing files

Xfce's file manager is called Thunar, and it does a great job of organizing my files. I like that Thunar can also make connections to remote systems. At home, I use a Raspberry Pi, accessed over SSH, as a personal file server. Thunar lets me open an SSH file transfer window so I can copy files between my laptop and the Raspberry Pi.

Image by:

(Jim Hall, CC BY-SA 4.0)

Another way to access files and folders is via the quick access dock at the bottom of the screen. Click the folder icon to bring up a menu of common actions such as opening a folder in a terminal window, creating a new folder, or navigating into a specific folder.

Image by:

(Jim Hall, CC BY-SA 4.0)

Other applications

I loved exploring the other applications provided in Xfce. The Mousepad text editor looks like a simple text editor, but it contains useful features for editing more than just plain text. Mousepad recognizes many file types that programmers and other power users may appreciate. Check out this partial list of programming languages available in the Document menu.

Image by:

(Jim Hall, CC BY-SA 4.0)

If you prefer a different look and feel, you can adjust the interface options such as font, color scheme, and line numbers using the View menu.

Image by:

(Jim Hall, CC BY-SA 4.0)

The disk utility lets you manage storage devices. While I didn't need to modify my system disk, the disk tool is a great way to initialize or reformat a USB flash drive. I found the interface very easy to use.

Image by:

(Jim Hall, CC BY-SA 4.0)


I was also impressed with the Geany integrated development environment. I was a bit surprised that a full IDE ran so well on an older system. Geany advertises itself as a “powerful, stable and lightweight programmer's text editor that provides tons of useful features without bogging down your workflow.” And that's exactly what Geany provided.

I started a simple “hello world” program to test out Geany, and was pleased to see that the IDE popped up syntax help as I typed each function name. The pop-up message is unobtrusive and provides just enough syntax information where I need it. While the printf function is easy for me to remember, I always forget the order of options to other functions like fputs and realloc. This is where I need the pop-up syntax help.

Image by:

(Jim Hall, CC BY-SA 4.0)

Explore the menus in Xfce to find other applications to make your work easier. You'll find apps to play music, access the terminal, or browse the web.

While I installed Linux to use my laptop for a few demos at a conference, I found Linux and the Xfce desktop made this old laptop feel quite snappy. The system performed so well that when the conference was over, I decided to keep the laptop around as a second machine.

I just love working in Xfce and using the apps. Despite the low overhead and minimal approach, I don't feel underpowered. I can do everything I need to do using Xfce and the included apps. If you have an older machine that's showing its age, try installing Linux to bring new life to old hardware.


Image by:

Jonas Leupe on Unsplash


Using Ansible to automate software installation on my Mac

Wed, 06/08/2022 - 15:00
Using Ansible to automate software installation on my Mac Servesha Dudhgaonkar Wed, 06/08/2022 - 03:00

On most systems, there are several ways to install software. Which one you use depends on the source of the application you're installing. Some software comes as a downloadable wizard to walk you through an install process, while others are files you just download and run immediately.

On macOS, a whole library of open source applications is available through package managers like Homebrew and MacPorts. The advantage of installing software from the command line is that you can automate it, and my favorite tool for automation is Ansible. Combining Ansible with Homebrew is an efficient and reproducible way to install your favorite open source applications.

This article demonstrates how to install one of my must-have writing tools, Asciidoctor, on macOS using Ansible. Asciidoctor is an open source text processor, meaning that it takes text written in a specific format (in this case, Asciidoc) and transforms it into other popular formats (such as HTML, PDF, and so on) for publishing. Ansible is an open source, agentless, and easy-to-understand automation tool. By using Ansible, you can simplify and automate your day-to-day tasks.

Note: While this example uses macOS, the information applies to all kinds of open source software on all platforms compatible with Ansible (including Linux, Windows, Mac, and BSD).

Installing Ansible

You can install Ansible using pip, the Python package manager. First, install pip:

$ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
$ python ./get-pip.py

Next, install Ansible using pip:

$ python -m pip install --user ansible

Installing Ansible using Homebrew

Alternately, you can install Ansible using the Homebrew package manager. If you've already installed Ansible with pip, skip this step because you've already achieved the same result!

$ brew install ansible

Configuring Ansible

To set up Ansible, you first must create an inventory file specifying which computer or computers you want your Ansible script (called a playbook) to operate on.

Create an inventory file in a terminal or using your favorite text editor. In a terminal, type the following, replacing your-host-name with the name of your computer:

$ cat << EOF >> inventory
[localhost]
your-host-name
EOF

If you don't know your computer's hostname, you can get it with the hostname command. Alternately, go to the Apple menu, open System Preferences, then click Sharing. Your computer's hostname appears beneath the computer name at the top of the Sharing preference pane.

Installing Asciidoctor using Ansible

In this example, I'm only installing applications on the computer I'm working on, also known as localhost. To start, create a playbook.yml file and copy the following content into it:

- name: Install software
  hosts: localhost
  become: false
  vars:
    Brew_packages:
      - asciidoctor
    install_homebrew_if_missing: false

In the first YAML sequence, you name the playbook (Install software), provide the target (localhost), and confirm that administrative privileges are not required. You also create two variables that you can use later in the playbook: Brew_packages and install_homebrew_if_missing.

Next, create a YAML mapping called pre_tasks, containing the logic to ensure that Homebrew itself is installed on the computer where you're running the playbook. Normally, Ansible can verify whether an application is installed or not, but when that application is the package manager that helps Ansible make that determination in the first place, you have to do it manually: 

  pre_tasks:
    - name: Ensuring Homebrew Is Installed
      stat:
        path: /usr/local/bin/brew
      register: homebrew_check

    - name: Fail If Homebrew Is Not Installed and install_homebrew_if_missing Is False
      fail:
        msg: Homebrew is missing, install from http://brew.sh
      when:
        - not homebrew_check.stat.exists
        - not install_homebrew_if_missing

    - name: Installing Homebrew
      shell: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
      when:
        - not homebrew_check.stat.exists
        - install_homebrew_if_missing

Finally, create a YAML mapping called tasks containing a call to the Homebrew module (it's a built-in module from Ansible) to install Asciidoctor in the event that it's not already present:

  tasks:
    - name: Install Asciidoctor
      homebrew:
        name: asciidoctor
        state: present

Running an Ansible playbook

You run an Ansible playbook using the ansible-playbook command:

$ ansible-playbook -i inventory playbook.yml

The -i option specifies the inventory file you created when setting up Ansible. You can optionally add -vvvv to direct Ansible to be extra verbose when running the playbook, which can be useful when troubleshooting.

After the playbook has run, verify that Ansible has successfully installed Asciidoctor on your host:

$ asciidoctor -v
Asciidoctor X.Y.Z https://asciidoctor.org
 Runtime Environment (ruby 2.6.8p205 (2021-07-07 revision 67951)...

Adapt for automation

You can add more software to the Brew_packages variable in this article's example playbook. As long as there's a Homebrew package available, Ansible installs it. Ansible only takes action when required, so you can leave all the packages you install in the playbook, effectively building a manifest of all the packages you have come to expect on your computer.
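
One way to have the task consume that variable (a sketch, not the exact task shown above, which names asciidoctor directly) is to loop over Brew_packages in a single task:

  tasks:
    - name: Install everything listed in Brew_packages
      homebrew:
        name: "{{ item }}"
        state: present
      loop: "{{ Brew_packages }}"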

Should you find yourself on a different computer, perhaps because you're at work or you've purchased a new one, you can quickly install all the same applications in one go. Better still, should you switch to Linux, the Ansible playbook is still valid either by using Homebrew for Linux or by making a few simple updates to switch to a different package manager.


Image by:

freephotocc via Pixabay CC0


How Garbage Collection works inside a Java Virtual Machine

Tue, 06/07/2022 - 21:54
How Garbage Collection works inside a Java Virtual Machine Jayashree Hutt… Tue, 06/07/2022 - 09:54

Automatic Garbage Collection (GC) is one of the most important features that makes Java so popular. This article explains why GC is essential, covering automatic and generational GC, how the Java Virtual Machine (JVM) divides heap memory, and finally, how GC works inside the JVM.

Java memory allocation

Java memory is divided into four sections:

  1. Heap: The memory for object instances is allocated in the heap. When an object is merely declared, no heap memory is allocated yet. Instead, a reference for that object is created on the stack.
  2. Stack: This section allocates the memory for methods, local variables, and class instance variables.
  3. Code: Bytecode resides in this section.
  4. Static: Static data and methods are placed in this section.
What is automatic Garbage Collection (GC)?

Automatic GC is a process in which the referenced and unreferenced objects in heap memory are identified, and then unreferenced objects are considered for deletion. The term referenced objects means some part of your program is using those objects. Unreferenced objects are not currently being used by the program.

Programming languages like C and C++ require manual allocation and deallocation of memory. In Java, this is handled automatically by GC, although you can request a collection manually with a System.gc() call in your code (the JVM treats this as a hint rather than a command).
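
Here is a minimal, hypothetical Java program (the class name and allocation sizes are arbitrary) that allocates short-lived objects and then requests a collection, so you can watch the effect on free heap memory:

public class GcDemo {
    public static void main(String[] args) {
        // Allocate short-lived objects; each buffer becomes unreferenced
        // as soon as the next loop iteration starts.
        for (int i = 0; i < 100_000; i++) {
            byte[] buffer = new byte[1024];
        }

        Runtime runtime = Runtime.getRuntime();
        System.out.println("Free heap before GC request: " + runtime.freeMemory());

        // Ask the JVM to collect; it may or may not honor the request.
        System.gc();

        System.out.println("Free heap after GC request:  " + runtime.freeMemory());
    }
}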

The fundamental steps of GC are:

1. Mark used and unused objects

In this step, the used and unused objects are marked separately. This is a time-consuming process, as all objects in memory must be scanned to determine whether they're in use or not.

Image by:

(Jayashree Huttanagoudar, CC BY-SA 4.0)

2. Sweep/Delete objects

There are two variations of sweep and delete.

Simple deletion: Only unreferenced objects are removed. However, the memory allocation for new objects becomes difficult as the free space is scattered across available memory.

Image by:

(Jayashree Huttanagoudar, CC BY-SA 4.0)

Deletion with compaction: Apart from deleting unreferenced objects, referenced objects are compacted. Memory allocation for new objects is relatively easy, and memory allocation performance is improved.

Image by:

(Jayashree Huttanagoudar, CC BY-SA 4.0)

More on Java What is enterprise Java programming? Red Hat build of OpenJDK Java cheat sheet Free online course: Developing cloud-native applications with microservices Fresh Java articles What is generational Garbage Collection (GC), and why is it needed?

As the mark and sweep model shows, scanning every object in memory to reclaim space from unused ones becomes expensive as the number of objects grows. Empirical studies show that most objects created during program execution are short-lived.

The existence of short-lived objects can be used to improve the performance of GC. For that, the JVM divides the memory into different generations. Next, it categorizes the objects based on these memory generations and performs the GC accordingly. This approach is known as generational GC.

Heap memory generations and the generational Garbage Collection (GC) process

To improve the performance of the GC mark and sweep steps, the JVM divides the heap memory into three generations:

  • Young Generation
  • Old Generation
  • Permanent Generation
Image by:

(Jayashree Huttanagoudar, CC BY-SA 4.0)

Here is a description of each generation and its key features.

Young Generation

All created objects are present here. The young generation is further divided into:

  1. Eden: All newly created objects are allocated memory here.
  2. Survivor space (S0 and S1): After surviving one GC, the live objects are moved to one of these survivor spaces.
Image by:

(Jayashree Huttanagoudar, CC BY-SA 4.0)

The generational GC that happens in the Young Generation is known as Minor GC. All Minor GC cycles are "stop the world" events that pause the application's threads until the cycle completes. Because the Young Generation is relatively small and most of its objects are already dead, these pauses are brief, which is why Minor GC cycles are fast.

To summarize: Eden space has all newly created objects. Once Eden is full, the first Minor GC cycle is triggered.

Image by:

(Jayashree Huttanagoudar, CC BY-SA 4.0)

Minor GC: The live and dead objects are marked during this cycle. The live objects are moved to survivor space S0. Once all live objects are moved to S0, the unreferenced objects are deleted.

Image by:

(Jayashree Huttanagoudar, CC BY-SA 4.0)

The age of objects in S0 is 1 because they have survived one Minor GC. Now Eden and S1 are empty.

Once cleared, the Eden space is again filled with new live objects. As time elapses, some objects in Eden and S0 become dead (unreferenced), and Eden's space is full again, triggering the Minor GC.

Image by:

(Jayashree Huttanagoudar, CC BY-SA 4.0)

This time the dead and live objects in Eden and S0 are marked. The live objects from Eden are moved to S1 with an age of 1. The live objects from S0 are also moved to S1, and their age is incremented to 2 (because they've now survived two Minor GCs). At this point, S0 and Eden are empty. After every Minor GC, Eden and one of the survivor spaces are empty.

The same cycle of creating new objects in Eden continues. When the next Minor GC occurs, Eden and S1 are cleared by moving the aged objects to S0. The survivor spaces switch after every Minor GC.

Image by:

(Jayashree Huttanagoudar, CC BY-SA 4.0)

This process continues until the age of one of the surviving objects reaches a certain threshold, at which point it is moved to the Old Generation through a process called promotion.

Further, the -Xmn flag sets the Young Generation size.

Old Generation (Tenured Generation)

This generation contains the objects that have survived several Minor GCs and aged to reach an expected threshold.

Image by:

(Jayashree Huttanagoudar, CC BY-SA 4.0)

In the example diagram above, the threshold is 8. The GC in the Old Generation is known as a Major GC. Use the flags -Xms and -Xmx to set the initial and maximum size of the heap memory.

Permanent Generation

The Permanent Generation space stores metadata about the application's classes and methods, the Java SE library classes and methods, and the JVM's own internals. The JVM populates this data at runtime based on which classes and methods are in use. Once the JVM finds classes that are no longer in use, it unloads or collects them, making space for the classes that are.

Use the flags -XX:PermSize and -XX:MaxPermSize to set the initial and maximum size of the Permanent Generation.

Metaspace

Metaspace was introduced in Java 8 and replaced the Permanent Generation. Its advantage is that it resizes automatically by default, which helps avoid the OutOfMemory errors caused by a full PermGen.
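
These sizing flags come together on the command line. The following is only an illustrative invocation (myapp.jar is a placeholder), combining heap, Young Generation, and Metaspace limits with basic GC logging:

$ java -Xms256m -Xmx1g -Xmn128m -XX:MaxMetaspaceSize=256m -verbose:gc -jar myapp.jar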

Wrap up

This article discusses the various memory generations of JVM and how they are helpful for automatic generational Garbage Collection (GC). Understanding how Java handles memory isn't always necessary, but it can help you envision how the JVM deals with your variables and class instances. This understanding allows you to plan and troubleshoot your code and comprehend potential limitations inherent in a specific platform.


Image by:

Pixabay. CC0.


6 Linux word processors you need to try

Mon, 06/06/2022 - 15:00
6 Linux word processors you need to try Don Watkins Mon, 06/06/2022 - 03:00

Writers are always looking for better ways to put their words and ideas into readable formats to share with their readers. My first experiences with word processing came in my Apple II days when I used AppleWorks and later FrEDWriter, which was a free word processing application created in 1985. It was the standard for my students, many of whom came from households that lacked the money to purchase proprietary software.

AbiWord

When I made the switch to Linux in the late 1990s, I was looking for high quality writing software that I could use and recommend to students who chose to follow my lead in the world of open source software. The first word processor I became familiar with was AbiWord. The name AbiWord is derived from the Spanish word abierto, which means open. It was initially released in 1998 and has been under continuous development ever since. It is licensed as GPLv2. It supports basic word processing features such as lists, indents, and character formats, and it imports and exports a variety of file formats, including .doc, .html, .docx, and .odt.

Image by:

(Don Watkins, CC BY-SA 4.0)

Etherpad

Etherpad is an open source group editing project. It allows you to edit documents in real time much like Google Drive. The main difference is that it is entirely open source. According to their website you can, "write articles, press releases, to-do lists, together with your friends, fellow students or colleagues, all working on the same document at the same time." The source code is readily available to look at. Etherpad is licensed as Apache 2.0. You can use Etherpad in the cloud or download and install it on your own Linux computer.

Cryptpad

CryptPad is a collaboration suite that is end-to-end encrypted. It is licensed with GPLv3, and its source code is available on GitHub. It was developed by XWiki Labs as a self-hosted alternative to Google Drive. According to their website, "CryptPad is built to enable collaboration. It synchronizes changes to documents in real time. Because all data is encrypted, the service and its administrators have no way of seeing the content being edited and stored." CryptPad offers extensive documentation for users.

Focuswriter

FocusWriter is a simple, distraction-free editor. It uses a hideaway interface that you access by moving your mouse to the edges of the screen. It is licensed with GPLv3 and is available on Linux as a Flatpak, via DEB on Ubuntu, and via RPM on Fedora. Below is an example of the FocusWriter desktop: a very simple and intuitive interface where the menu automatically hides until you move your mouse pointer to the top or sides of the screen. Files are saved as .odt by default, but FocusWriter also supports plain text, .docx, and Rich Text.

Image by:

(Don Watkins, CC BY-SA 4.0)

LibreOffice Writer

LibreOffice Writer is my favorite. I have been using it for over a dozen years. It has all the features I need, including rich text formatting, and it has the largest array of import and export options I have seen in any word processor. There are dozens of templates available for specialty formats like APA for research and publication. I love that I can export directly to PDF and EPUB from the word processor. LibreOffice Writer is free software under the Mozilla Public License 2.0, and its source code is available from The Document Foundation. LibreOffice comes standard with most Linux distributions. It is also available as a Flatpak, Snap, and AppImage, and you can download and install it on macOS and Windows.

Image by:

(Don Watkins, CC BY-SA 4.0)

OpenOffice Writer

Apache OpenOffice Writer is a complete word processor. It's simple enough for memos yet complex enough to write your first book. According to their website, OpenOffice Writer automatically saves documents in OpenDocument Format. Documents can also be saved in .doc, .docx, Rich Text, and other formats. OpenOffice Writer is licensed under the Apache License 2.0, and its source code is available on GitHub.

There is a wealth of free, open source software waiting for you to discover. These applications are great for getting your everyday tasks done, and you can also contribute to their development. What is your favorite Linux word processor?

Check out one of my favorite open source word processors to put your ideas to paper.

Image by:

rawpixel.com. CC0.


A Drupal developer's guide to Progressive Web Apps

Mon, 06/06/2022 - 15:00
A Drupal developer's guide to Progressive Web Apps Alex Borsody Mon, 06/06/2022 - 03:00

The following article is a companion to my presentation at Drupalcon and Drupalcamp covering Progressive Web Apps implementations.

Progressive Web Apps (PWA) have support from some of the top tech companies, including Google and Microsoft, with the common goal being "Web apps should be able to do anything iOS, Android, or desktop apps can." PWAs can add value to businesses at a variety of different stages. All projects have limitations, whether they be development resources, timeline, budget, or technical debt. Even with "unlimited resources," developing an app from a single codebase, using commonly known web technologies, allows for a more frictionless, sane release cycle.

Disclaimers:

  • PWA is a collection of different techniques combined in a web browser to create an "app-like" experience.
  • This information is from an architect's perspective for choosing and implementing various technologies that come together to build a product.
  • Below is a high-level end-to-end outline of a path to launch a Drupal website on the app stores. Each section could be its own in-depth blog post.
  • The techniques are written with Drupal in mind, but you can apply many of them to all web apps.
What is a PWA?

Image by:

(Alex Borsody, CC BY-SA 4.0)

Benefits of a PWA implementation:

  • Increased Lighthouse score and SEO.
  • Single codebase.
  • Frictionless testing.
  • Instant feedback loop for development cycles.
  • Use of existing PaaS deployment workflows, including Acquia, Pantheon, Platform.sh etc.
  • Use of web technologies that are a familiar skillset for a wide array of developers.
  • Provides the only cross-platform development solution that delivers a full-fledged desktop experience.
  • Offers unlimited options to customize a design without relying on a cross-platform framework's limited UI components.

This article covers some basic points for PWA deployment. There are many details to consider both at the architect and developer levels. The following topics are discussed:

  • PWA minimum requirements and Drupal PWA module as a starting point.
  • Publishing on app stores.
  • Everything you need to know about making your PWA feel app-like.
PWA module on Drupal.org

The Drupal PWA module is a solution that generates a service worker for caching strategies and offline capabilities. Its secondary functionality also generates a manifest.json, so once installed, it will fulfill the basic requirements of a PWA out-of-the-box.
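
For context, a web app manifest is a small JSON file that tells the browser how the site should behave once installed. The module generates one for you; the hand-written sketch below (the names, colors, and icon path are placeholders, not the module's exact output) shows the kind of fields involved:

{
  "name": "My Drupal Site",
  "short_name": "MySite",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#0678be",
  "icons": [
    {
      "src": "/sites/default/files/pwa/icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}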

There is functionality in the module's service worker that provides unique solutions to Drupal-specific behavior, although you can also apply these solutions to apps outside of Drupal.

Image by:

(Alex Borsody, CC BY-SA 4.0)

Offline caching

Offline caching with a service worker is one of the functionalities that defines a PWA.

The following images summarize how a service worker acts as a proxy (sitting between the client and the internet/webserver) to intercept HTTP requests from the browser.

During the first request to the /about page, the browser reaches the network, and when the server returns a 200 response, the JavaScript service worker calls cache.put() to store the HTML and all assets in the Cache API.

Image by:

(Alex Borsody, CC BY-SA 4.0)

On the second trip, the service worker bypasses the network completely and serves the page from the Cache API store in the user's browser, loading the page instantly. It can also load the page offline.

Image by:

(Alex Borsody, CC BY-SA 4.0)
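
Stripped of the Drupal-specific details, the core of such a cache-first strategy fits in a short fetch handler. This is an illustrative sketch, not the module's actual service worker:

const CACHE = 'pwa-demo-v1';

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) {
        return cached; // Serve instantly from the Cache API, even offline.
      }
      return fetch(event.request).then((response) => {
        // Keep a copy of successful responses for next time.
        if (response.ok) {
          const copy = response.clone();
          caches.open(CACHE).then((cache) => cache.put(event.request, copy));
        }
        return response;
      });
    })
  );
});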

The browser can precache pages to make them load instantly before the user visits them or even load offline before a visit. However, because in Drupal, the CSS/JS filenames change after compression, the solution must address the problem of identifying these assets before it can precache them via a service worker. It does this by internally requesting the URLs set in the admin panel and extracting assets from the DOM. This allows the service worker install event to fetch all CSS/JS and images from these documents to store in Cache API. The complete pages will then be viewable offline and load instantly, even if the user never visits them first.

Image by:

(Alex Borsody, CC BY-SA 4.0)

Image by:

(Alex Borsody, CC BY-SA 4.0)

Below, I fetch all the assets from the URLs set in the admin panel to inject later into the service worker precache assets array. In D8, I changed the request to use Drupal::httpClient(), which is the updated version of drupal_http_request() in D7 and is a wrapper for the PHP Guzzle library.

 foreach ($pages as $page) {
      try {
        // URL is validated as internal in ConfigurationForm.php.
        $url = Url::fromUserInput($page, ['absolute' => TRUE])->toString(TRUE);
        $url_string = $url->getGeneratedUrl();
        $response = \Drupal::httpClient()->get($url_string, array('headers' => array('Accept' => 'text/plain')));

This code matches all assets needed:

// Get all DOM data.
      $dom = new \DOMDocument();
      @$dom->loadHTML($data);

      $xpath = new \DOMXPath($dom);
      foreach ($xpath->query('//script[@src]') as $script) {
        $resources[] = $script->getAttribute('src');
      }
      foreach ($xpath->query('//link[@rel="stylesheet"][@href]') as $stylesheet) {
        $resources[] = $stylesheet->getAttribute('href');
      }
      foreach ($xpath->query('//style[@media="all" or @media="screen"]') as $stylesheets) {
        preg_match_all(
          "#(/(\S*?\.\S*?))(\s|\;|\)|\]|\[|\{|\}|,|\"|'|:|\<|$|\.\s)#ie",
          ' ' . $stylesheets->textContent,
          $matches
        );
        $resources = array_merge($resources, $matches[0]);
      }
      foreach ($xpath->query('//img[@src]') as $image) {
        $resources[] = $image->getAttribute('src');
      }
    }

Below, you can see the final result in the processed serviceworker.js file that is output in the browser. The variables in the service worker are replaced with the path to the assets to cache.

Image by:

(Alex Borsody, CC BY-SA 4.0)

Phone home uninstall

The module provides another clever piece of functionality—responsible cleanup when uninstalled. The module sends a request back to a URL created by the module. If the URL does not exist, it means the module has been uninstalled. The service worker then unregisters itself and deletes all related caches left on the user's browser.

// Fetch phone-home URL and process response.
  let phoneHomeUrl = fetch(PWA_PHONE_HOME_URL)
  .then(function (response) {
    // if no network, don't try to phone-home.
    if (!navigator.onLine) {
      console.debug('PWA: Phone-home - Network not detected.');
    }

    // if network + 200, do nothing
    if (response.status === 200) {
      console.debug('PWA: Phone-home - Network detected, module detected.');
    }


    // if network + 404, uninstall
    if (response.status === 404) {
      console.debug('PWA: Phone-home - Network detected, module NOT detected. UNINSTALLING.');
// Let SW attempt to unregister itself.
      Promise.resolve(pwaUninstallServiceWorker());
    }

    return Promise.resolve();
  })
  .catch(function(error) {
    console.error('PWA: Phone-home - ', error);
  });
};

Testing notes

Disable the module on dev as it provides an extra caching layer. Any changes pushed to production for CSS or other assets with cache first strategies should be followed by incrementing the service worker version to bust the cache.

You can find additional debugging steps for a service worker on this PWA module documentation page.

Using the Chrome console to remote debug on a mobile device is possible on Android and can be helpful.

2.x version

The 2.x and 7.2x versions port the service worker to Workbox, where you can set caching strategies. With Workbox, setting caching strategies for different asset types and routes drops from about 30 lines of code using just the JavaScript Fetch API to about five lines. Some people may be resistant to libraries, but this is the direction Google is taking with PWAs.

Workbox caching strategies are similar to those in other caching layers such as Varnish. For example, by default, image assets and fonts are set to "cache first," so they are always served instantly. HTML would best be implemented as stale-while-revalidate.

Image by:

(Alex Borsody, CC BY-SA 4.0)
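
For a sense of what those few lines look like, here is an illustrative Workbox configuration (a sketch, not the module's generated code) that serves images and fonts cache-first and handles HTML navigations with stale-while-revalidate:

import {registerRoute} from 'workbox-routing';
import {CacheFirst, StaleWhileRevalidate} from 'workbox-strategies';

// Images and fonts: serve from the cache first, fall back to the network.
registerRoute(
  ({request}) => request.destination === 'image' || request.destination === 'font',
  new CacheFirst()
);

// HTML navigations: respond with the cached copy, refresh it in the background.
registerRoute(
  ({request}) => request.mode === 'navigate',
  new StaleWhileRevalidate()
);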

There is also functionality in Workbox, such as background sync, where a failed post request will retry upon coming back online.

Image by:

(Alex Borsody, CC BY-SA 4.0)

For more information on what a service worker can do and all the use cases where it may be helpful, check the W3 Service Workers Demo repo on GitHub.

Get your web app in the app stores

PWA Builder is a web application powered by Microsoft: you input your URL, and it generates everything you need to submit to the app stores.

For Android, it uses TWA, and for iOS, it wraps your web app in native SWIFT code using WebKit's WKWebView. These are techniques I have been using since 2013, way back when Drupal was a buzzy technology and being used by startups. Businesses that had mobile-optimized Drupal websites wanted them on the app stores. Before Android TWA, developers used Webview, and before WKWebView, there was UIWebView.

Recently PWA builder added a solution for iOS using WKWebView, which confirms my belief that this is the best option to get your PWA into the App Store. Maximilian Firtman also reveals this as the solution in his course "Creating Progressive Web Apps with Vue," which I purchased to see his answer to the problem.

The PWA module provides everything you need to run through PWA Builder:

  • For Android, it creates a lightweight (roughly 800 KB) .apk/.aab using TWA to submit to the Play Store.
  • For iOS, it wraps your website in WKWebView to submit to the App Store.

A live demo I put together of PWA builder is here. [[EDITORS - MISSING LINK]]

Android and TWA

The Google and Chromium teams are currently the strongest driving forces behind PWAs, and TWA is designed specifically to get your PWA into the Play Store. By contrast, WKWebView is essentially a workaround not explicitly supported by Apple. However, WKWebView is extremely powerful, even though Apple doesn't advertise this or have much documentation on its capabilities.

Trusted Web Activity is essentially a Chrome process running in full screen with a status bar and loading screen. The thread is running in the same process as the Chrome app on your phone. For example, if you are logged in on your Chrome browser, you will be logged in on your TWA app. To clear up any possible confusion resulting from this, the TWA team has added a "toast," meaning the first time the user opens the app, a notification shows "Running in Chrome." This only happens the first time the app is installed. This annoyance is enough for some teams to ditch TWA and use the WebView class instead; however, Google discouraged this as you lose out on everything baked into the Chrome web browser.

The main points Google makes about using TWA are:

  • Chrome is feature complete.
  • Faster than Webview.
  • Evergreen (always the up-to-date version of Chrome).

Additional useful functionality:

  • Chrome handles frictionless OAuth requests.
  • Share cookies, local storage, and saved settings with the preferred browser.

Below is a comparison chart of everything you get when using TWA instead of a Webview wrapper.

Image by:

(Alex Borsody, CC BY-SA 4.0)

Webkit: WKWebView

There are several considerations for publishing on the App Store. WKWebView is essentially a workaround and not a method explicitly endorsed by Apple for launching a native app, so some caveats come with it. The most important is to be mindful of Apple's minimum functionality guidelines.

From my experience, you will be approved if you do everything you can to make your web app "app-like" with useful functionality. Using the Webkit API to enhance your web app is another way to provide additional functionality outside of your website.

One technique is to set a cookie depending on the start_url. For example, add a parameter like myapp.com?ios_app and set a cookie to determine a separate stylesheet or customize logic.

Consider the following sample implementation.
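
This is only a minimal sketch of the idea; the ios_app parameter, cookie name, and CSS class are hypothetical:

// If the app was launched from a start_url carrying the ios_app parameter,
// remember that in a cookie for later visits.
if (window.location.search.includes('ios_app')) {
  document.cookie = 'ios_app=1; path=/; max-age=31536000';
}

// On every page load, check the cookie and switch on app-specific styling.
if (document.cookie.split('; ').includes('ios_app=1')) {
  document.documentElement.classList.add('ios-app');
}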

Note: This technique should not be confused with Apple's limited add to homescreen support, which you usually hear about with Apple + PWAs. I won't cover this as it's not the experience a user would expect.

PWA builder provides the minimum functionality required to wrap a website in WKWebView for App Store submission. For features such as biometric or push notifications, you need a custom implementation of WKWebView.

In the graphic below, you can see the source files provided. You can then easily compile your app in XCode and submit it to the app store.

Image by:

(Alex Borsody, CC BY-SA 4.0)

PWA Builder provides:

  • No Bounce when scrolling out of view with wKWebView.scrollView.bounces = false
  • Service worker support
  • Shortcuts URL capture
  • Permitted navigation scopes
  • Status bar customization
  • Splash screen from manifest props
  • iOS app awareness from JS code
  • Mac Store support

A custom implementation of WKWebView can provide:

  • Push notifications: Push notifications are possible by posting the device ID matched to the Drupal UID, which can be extracted from the URL /user/{uid}/edit, for example.
  • Biometric: Biometric is implemented on all pages except for user/login and user/register, and the cookie max expiration is extended. Biometric is shown every time the app is closed and reopened.
  • WKUIDelegate: Present native UI elements, such as alerts, inputs, or contextual menus.
  • evaluateJavaScript(): Execute any Javascript. The possibilities here are endless.
  • Password management using associated domains: Placing a public keypair in your /.well-known directory will allow your native app to trust your website and autofill passwords.

View the README.md of WKWebView+, a project I am working on that makes it easy to integrate this enhanced functionality into any iOS PWA.

Cons of WKWebView

Give the following considerations attention before implementing WKWebView:

  • There is a paradigm shift in thinking required for frontend developers to debug a PWA correctly. Though it relies on web technologies, there is a learning curve.
  • Without a native iOS developer, certain features are not possible to implement. However, WKWebView+ was designed to solve this.
  • Though the outlook for Apple and PWAs looks positive, as usual, you are at the mercy of the next Safari release.
Moving forward

Many of the features available with TWA are only available on Chromium-based browsers. Webkit mobile/WKWebView lags. This lag includes push notifications, "add to home screen," and overall web browser standards. Maximilian Firtman's blog is currently one of the best resources for a summary of the updates in the latest Safari, even if they were not announced in the release notes.

The optimistic outlook is that WKWebView is based on the open-source project Webkit, and there is a collaboration among the developers that work on both Chromium and WebKit. Anyone can create an issue and pull request. Often, features already implemented in Chrome have patches submitted to Webkit that do the same.

Make it app-like

Websites that took all the right vitamins

A PWA is essentially a collection of web technologies that combine to make your web experience app-like, as if the website "took all the right vitamins." Below, I have identified points that make up a good PWA:

  • UX/UI: Visual problem solving is at the core of making your website feel like an app. A great CSS developer with an eye for design and detail, such as animations, input/font sizes, and scrolling issues, is essential.
  • Stay current with app-like enhancements: Keeping frontend code updated and compatible across WebKit/Chrome requires research and periodic updates, particularly when a new version of the iPhone is released.
  • Implement expanded web capabilities: The Chromium team constantly improves the browser experience. You can track this in Project Fugu, the overarching web capabilities project. The closest thing there is to comprehensive documentation on PWAs is on web.dev.
  • Page speed: I covered caching with a service worker, but there are countless other technologies and techniques.

Some examples of app-like enhancements include using HTML/CSS/JS technologies commonly available to web developers, and then making them frictionless to implement, test, and deploy. You can find a good example of a web application using many of these suggestions here.

Suggestions include:

  • Javascript touch events: Disable pinch zoom and add swipe/multitouch gestures.
  • CSS:
    • Minify/optimize CSS and apply Lighthouse suggestions.
    • "App-like" input/font sizes and make sure everything fits in the viewport; make it visually look like an app.
    • Tactful use of preloaders.
  • Utilize cookies: Set cookie based on app start URL.
  • HTML attributes:
  • Ajax API (Drupal specific), Websockets, or SPA framework.
  • iPhone specific suggestions:
Image by:

(Alex Borsody, CC BY-SA 4.0)

Wrap up

PWA brings together different techniques to create an app-like experience in a web browser. I outlined an approach to PWA implementation for a Drupal site, but other options are certainly available with similar designs. What implementations of PWA might help your organization's user experience?


Ionic, the spiritual successor to Cordova, is another popular framework that uses WKWebView to build native iOS apps.



Attract contributors to your open source project with authenticity

Sat, 06/04/2022 - 15:00
Attract contributors to your open source project with authenticity Rizel Scarlett Sat, 06/04/2022 - 03:00

It's not a secret that maintaining an open source project is often thankless and time-consuming work. However, I've learned that there's one shared joy among open source maintainers: They love building with a group of technologists who passionately believe in their vision.

Marketing feels cringey

Community support and teamwork are major incentives for open source maintainers. However, gaining community support and contributors is a challenge, especially as a new maintainer. The hope is that technologists will find our projects and start contributing by chance. The reality is we have to market our projects. Think about it: Developers create several public repositories daily, but nobody knows those repositories exist. Without adoption, community, or collaboration, we're not truly reaping the benefits of open source.

Although marketing an open source project is necessary for a project's overall success, developers are hesitant to do it because marketing to other developers often feels inauthentic and cringey. In this article, I explore methods maintainers can use to attract contributors in a genuine manner.

Promote your open source project

If you want people to contribute to your project, you have to tell them your project exists. So what can promotion look like for you? Instead of spamming discord channels or DMs about an open source project, maintainers can promote their projects through many channels, including:

  • Conference talks: People attend conferences to gain inspiration. Don't be afraid; they're not necessarily looking for a PhD-level lecture. Take the stage at an event like All Things Open, Open Source Series 101, Codeland, or Upstream to talk about what you're building, why you're building it, issues you face, and discoveries you have made. After your talk, people may want to learn more about what you're building and how they can get involved.
  • Blogging: Leverage popular developer blogging platforms such as Aviyel, Dev.to, or Hashnode to talk about your project. Add a link to your project within the blog posts so that the right people can find it. You can also submit an article to the editors here on Opensource.com to raise awareness about your open source project!
  • Twitter: Twitter has a large tech audience, including Developers, UX Designers, Developer Advocates, and InfoSec professionals who want to collaborate and learn from each other. Twitter is the perfect platform to post in an authentic, non-pushy way about your discoveries, new releases, and bug fixes. Folks will learn from you through your tweets and may feel inclined to build with you.
  • Podcasts or Twitter Spaces: Like conference talks, use podcasts and Twitter Spaces to build your project's brand. You don't have to talk about it in a marketing way. You can geek out with the host over your vision and the technical hiccups you've faced along the way.
  • Twitch Streams: Stream yourself live coding your project to create awareness of its existence and pair the program with your viewers. Eventually, they might tell other people about your product, or they might ask to contribute themselves.
  • Hacktoberfest: Hacktoberfest is a month-long event in October that encourages people to make their first contributions to projects. By participating in Hacktoberfest as a maintainer, you may recruit new contributors.
  • Sponsorships: Contributions don't always have to include code. Corporations and individuals can contribute by sponsoring you. Learn more about creating an appealing Sponsor profile here.
Gain community support

The proverb "it takes a village" applies to more than child-rearing. It also takes a village to maintain an open source project. Community is a large part of open source and just general life success. However, community support is a two-way street. To sustain community support, it's a best practice to give back to community members.

What can community support look like for you? As you promote your project, you will find folks willing to support you. To encourage them to continue supporting and appeal to other potential supporters, you can:

  • Highlight contributors/supporters: Once you start getting contributors, you can motivate more people to contribute to your project by highlighting past, current, or consistent contributors in your README. This acknowledgment shows that you value and support your contributors. Send your contributors swag or a portion of your sponsorship money if you can afford it. Folks will naturally gravitate to your projects if you're known for genuinely supporting your open source community.
Image by:

(Rizel Scarlett, CC BY-SA 4.0)

  • Establish a culture of kindness: Publish a Code of Conduct in your repository to ensure psychological safety for contributors. I strongly suggest you also adhere to those guidelines by responding kindly to people in comments, pull requests, and issues. It's also vital that you enforce your Code of Conduct. If someone in your community is not following the rules, make sure they face the outlined consequences without exception. Don't let a toxic actor ruin your project's environment with unkind language and harassment.
  • Provide a space for open discussion: Often, contributors join an open source community to befriend like-minded technologists, or they have a technical question, and you won't always be available to chat. Open source maintainers often use one of the following tools to create a place for contributors to engage with each other and ask questions in the open:
    • GitHub Discussions
    • Discord
    • Matrix.org
    • Mattermost
Create a "good" open source project

Good is subjective in code or art, but there are a few ways to indicate that your project is well thought out and a good investment. What does creating a good project look like for you? Your project doesn't have to include amazing code or be a life-changing project to indicate quality. Instead, ensure that your project has the following attributes.

Easy to find

To help other people find and contribute to your project, you can add topics to your repository related to your project's intended purpose, subject area, affinity groups, or other important qualities. When people go to github.com/topics to search for projects, your project has a higher chance of showing up.

Image by:

(Rizel Scarlett, CC BY-SA 4.0)

Easy to use

Make your project easy to use with a detailed README. It's the first thing new users and potential contributors see when visiting your project's repository. Your README should serve as a how-to guide for users. I suggest you include the following information in your README:

  • Project title
  • Project description
  • Installation instructions
  • Usage instructions
  • Link to your live web app
  • Links to related documentation (code of conduct, license, contributing guidelines)
  • Contributors highlights

You can learn more about crafting the perfect README here.

Easy to contribute to

Providing guidelines and managing issues help potential contributors understand opportunities to help.

  • Contributing guidelines - Similar to a README, contributors look for a markdown file called Contributing.md for insight on how to contribute to your project. Guidelines are helpful for you and the contributor because they won't have to ask you too many questions. The contributing guidelines should answer frequently asked questions. I suggest including the following information in your Contributing.md file:
    • Technologies used
    • How to report bugs
    • How to propose new features
    • How to open a pull request
    • How to claim an issue or task
    • Environment set up
    • Style guide/code conventions
    • Link to a discussion forum or how people can ask for help
    • Project architecture (nice to have)
    • Known issues
  • Good first issues - Highlight issues that don't need legacy project knowledge with the label good-first-issue, so new contributors can feel comfortable contributing to your project for the first time.
Image by:

(Rizel Scarlett, CC BY-SA 4.0)

Exercise persistence

Even if no one contributes to your project, keep it active with your contributions. Folks will be more interested in contributing to an active project. What does exercising persistence look like for your project? Even if no one is contributing, continue to build your project. If you can't think of new features to add and you feel like you fixed all the bugs, set up ways to make your project easy to manage and scale when you finally get a ton of contributors.

  • Scalability: Once you get contributors, it will get harder to balance responding to every issue. While you're waiting for more contributors, automate the tasks that will eventually become time-consuming. You can leverage GitHub Actions to handle the release process, CI/CD, or enable users to self-assign issues.
TL;DR

Attracting contributors to your open source project takes time, so be patient and don't give up on your vision. While you're waiting, promote your project by building in public and sharing your journey through blog posts, tweets, and Twitch streams. Once you start to gain contributors, show them gratitude in the form of acknowledgment, psychological safety, and support.

Next steps

For more information on maintaining an open source project, check out GitHub's Open Source Guide.


Image by:

Opensource.com


How static linking works on Linux

Fri, 06/03/2022 - 15:00
How static linking works on Linux Jayashree Hutt… Fri, 06/03/2022 - 03:00

Code for applications written using C usually has multiple source files, but ultimately you will need to compile them into a single executable.

You can do this in two ways: by creating a static library or a dynamic library (also called a shared library). These two types of libraries vary in terms of how they are created and linked. Your choice of which to use depends on your use case.

In a previous article, I demonstrated how to create a dynamically linked executable, which is the more commonly used method. In this article, I explain how to create a statically linked executable.

Using a linker with static libraries

A linker is a command that combines several pieces of a program together and reorganizes the memory allocation for them.

The functions of a linker include:

  • Integrating all the pieces of a program
  • Figuring out a new memory organization so that all the pieces fit together
  • Revising addresses so that the program can run under the new memory organization
  • Resolving symbolic references

As a result of all these linker functionalities, a runnable program called an executable is created.

Static libraries are created by copying all necessary library modules used in a program into the final executable image. The linker links static libraries as a last step in the compilation process. An executable is created by resolving external references, combining the library routines with program code.

Create the object files

Here's an example of a static library, along with the linking process. First, create the header file mymath.h with these function signatures:

int add(int a, int b);
int sub(int a, int b);
int mult(int a, int b);
int divi(int a, int b);

Create add.c, sub.c, mult.c, and divi.c with these function definitions:

// add.c
int add(int a, int b){
return (a+b);
}

//sub.c
int sub(int a, int b){
return (a-b);
}

//mult.c
int mult(int a, int b){
return (a*b);
}

//divi.c
int divi(int a, int b){
return (a/b);
}

Now generate object files add.o, sub.o, mult.o, and divi.o using GCC:

$ gcc -c add.c sub.c mult.c divi.c

The -c option skips the linking step and creates only object files.

Create a static library called libmymath.a, then remove the object files, as they're no longer required. (Note that using a trash command is safer than rm.)

$ ar rs libmymath.a add.o sub.o mult.o divi.o
$ trash *.o
$ ls
add.c  divi.c  libmymath.a  mult.c  mymath.h  sub.c

You have now created a simple example math library called libmymath, which you can use in C code. There are, of course, very complex C libraries out there, and this is the process their developers use to generate the final product that you and I install for use in C code.
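
If you want to double-check what went into the archive, ar can list its members. The output below assumes the four object files were added in the order shown above:

$ ar t libmymath.a
add.o
sub.o
mult.o
divi.o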

Next, use your math library in some custom code and then link it.

Create a statically linked application

Suppose you've written a command for mathematics. Create a file called mathDemo.c and paste this code into it:

#include <mymath.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
  int x, y;
  printf("Enter two numbers\n");
  scanf("%d%d",&x,&y);
 
  printf("\n%d + %d = %d", x, y, add(x, y));
  printf("\n%d - %d = %d", x, y, sub(x, y));
  printf("\n%d * %d = %d", x, y, mult(x, y));

  if(y==0){
    printf("\nDenominator is zero so can't perform division\n");
      exit(0);
  }else{
      printf("\n%d / %d = %d\n", x, y, divi(x, y));
      return 0;
  }
}

Notice that the first line is an include statement referencing, by name, your own libmymath library.

Create an object file called mathDemo.o for mathDemo.c:

$ gcc -I . -c mathDemo.c

The -I option tells GCC to search for header files listed after it. In this case, you're specifying the current directory, represented by a single dot (.).

Link mathDemo.o with libmymath.a to create the final executable. There are two ways to express this to GCC.

You can point to the files:

$ gcc -static -o mathDemo mathDemo.o libmymath.a

Alternately, you can specify the library path along with the library name:

$ gcc -static -o mathDemo -L . mathDemo.o -lmymath

In the latter example, the -lmymath option tells the linker to link the object files present in libmymath.a with the object file mathDemo.o to create the final executable. The -L option directs the linker to look for libraries in the following argument (similar to what you would do with -I).

Analyzing the result

Confirm that it's statically linked using the file command:

$ file mathDemo
mathDemo: ELF 64-bit LSB executable, x86-64...
statically linked, with debug_info, not stripped

Using the ldd command, you can see that the executable is not dynamically linked:

$ ldd ./mathDemo
        not a dynamic executable

You can also check the size of the mathDemo executable:

$ du -h ./mathDemo
932K    ./mathDemo

In the example from my previous article, the dynamic executable took up just 24K.
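Much of that difference is the statically linked copy of the C library plus symbol and debug information (the file output above reported "with debug_info, not stripped"). If size matters, you can strip a copy of the binary and compare; the exact savings depend on your toolchain, so treat this as an optional experiment rather than part of the build:

$ cp mathDemo mathDemo.stripped
$ strip mathDemo.stripped
$ du -h ./mathDemo ./mathDemo.stripped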

Run the command to see it work:

$ ./mathDemo
Enter two numbers
10
5

10 + 5 = 15
10 - 5 = 5
10 * 5 = 50
10 / 5 = 2

Looks good!

When to use static linking

Dynamically linked executables are generally preferred over statically linked executables because dynamic linking keeps an application's components modular. Should a library receive a critical security update, it can be easily patched because it exists outside of the applications that use it.

When you use static linking, a library's code gets "hidden" within the executable you create, meaning the only way to patch it is to re-compile and re-release a new executable every time a library gets an update—and you have better things to do with your time, trust me.

However, static linking is a reasonable option if the code of a library exists either in the same code base as the executable using it or in specialized embedded devices that are expected to receive no updates.

Learn how to combine multiple C object files into a single executable with static libraries.

Image by: Mapbox Uncharted ERG, CC-BY 3.0 US

Programming Linux This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Get started with Cadence, an open source workflow engine

Thu, 06/02/2022 - 15:00
Get started with Cadence, an open source workflow engine Ben Slater Thu, 06/02/2022 - 03:00

Modern applications require complicated interactions between long-running business processes, internal services, and third-party APIs. To say it's been a challenge for developers is putting it mildly. Managing these processes means tracking complex states, preparing responses to asynchronous events, and communicating with often unreliable external dependencies.

Developers typically take on these complex challenges with solutions that are just as convoluted, assembling unwieldy systems that leverage stateless services, databases, retry algorithms, and job scheduling queues. Because these complex systems obscure their own business logic, availability issues are common, often stemming from the application's dependence on scattered and unproven components. Developer productivity is regularly sacrificed to keep these sprawling, troubled systems from collapsing.

Designing a distributed application

Cadence solves these issues by offering a highly scalable platform for fault-oblivious code, abstracting away the usual challenges of implementing fault tolerance and durability.

A standard Cadence application includes a Cadence service, workflow workers, activity workers, and external clients. If needed, the roles of workflow worker, activity worker, and external client can be co-located in a single application process.

Cadence Service

Image by:

(Ben Slater, CC BY-SA 4.0)

Cadence is centered on its multi-tenant service and the high scalability it enables. A strongly typed gRPC API exposes all Cadence service functionality. A Cadence cluster can run multiple services on multiple nodes, including:

  • Front end: A stateless service that handles incoming worker requests, with instances backed by an external load balancer.
  • History service: Handles core logic for workflow steps and activity orchestration.
  • Matching service: Matches workflow or activity tasks with workers ready to complete them.
  • Internal worker service: Meets internal requirements (such as archiving) by introducing Cadence workflows and activities.
  • Workers: Function as Cadence client apps that execute user-created workflow and activity logic.

By default, Cadence supports Apache Cassandra, MySQL, PostgreSQL, CockroachDB, and TiDB for use as persistence stores, as well as ElasticSearch and OpenSearch for listing workflows with complex predicates.


Because the Cadence service is multi-tenant, a single service can serve one or many applications. A local Cadence service instance can be configured with docker-compose for local development. The Cadence service maintains workflow states, associated durable timers, and internal "task list" queues to send tasks to external workers.
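As a rough sketch of that local setup (the compose file location and CLI flags below come from the upstream quickstart and may change between releases, so check the getting-started page linked at the end of this article), you can download the published docker-compose.yml from the uber/cadence repository, start the stack, and register the test-domain used by the examples later in this article:

$ wget https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose.yml
$ docker-compose up
$ docker run --network=host --rm ubercadence/cli:master \
    --do test-domain domain register -rd 1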

Beyond the Cadence service itself:

  • Workflow workers: These workers host fault-oblivious code outside of the Cadence service. The Cadence service sends them "decision tasks," the workers deliver those tasks to the workflow code, and the completed "decisions" are communicated back to the Cadence service. Workflow code can be implemented in any language that can communicate with the Cadence API; production-ready Java and Go clients are currently available.

  • Activity workers: These workers host "activities," or code that performs application-specific actions such as service calls, database record updates, and file downloads. Activities feature task routing to specific processes, heartbeats, infinite retries, and unlimited execution time. The Cadence service sends activity tasks to these workers, which complete them and report completion.

  • External clients: These enable the creation of workflow instances, or "executions". External clients such as UIs, microservices, or CLIs use the StartWorkflowExecution Cadence service API call to start executions. External clients can also notify workflows about asynchronous external events, query workflow state synchronously, wait for synchronous workflow completion, restart or cancel workflows, and search for specific workflows with the List API.

Getting started with Cadence

In this example, we'll use the Cadence Java client. The client is available from GitHub, where you can also find the JavaDoc documentation and check for the latest release version.

To begin, add cadence-client as a dependency to your pom.xml file like this:

<dependency>
  <groupId>com.uber.cadence</groupId>
  <artifactId>cadence-client</artifactId>
  <version>LATEST.RELEASE.VERSION</version>
</dependency>

Alternatively, you can use build.gradle:

compile group: 'com.uber.cadence', name: 'cadence-client', version: 'LATEST.RELEASE.VERSION'

Java Hello World with Cadence

The best way to get an idea of what Cadence is capable of is to try it, so here's a simple "Hello World" example you can try. First, add the Cadence Java client dependency to your Java project. Using Gradle, the dependency looks like this:

compile group: 'com.uber.cadence', name: 'cadence-client', version: ''

Add these dependencies that the cadence-client requires as well:

compile group: 'commons-configuration', name: 'commons-configuration', version: '1.9'

compile group: 'ch.qos.logback', name: 'logback-classic', version: '1.2.3'

Then compile this code:

import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;
import org.slf4j.Logger;

public class GettingStarted {

    private static Logger logger = Workflow.getLogger(GettingStarted.class);

    public interface HelloWorld {
        @WorkflowMethod
        void sayHello(String name);
    }
}

These Cadence Java samples are available to help if you encounter issues with the build files.

Next, put this logback config file into your classpath:

<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <!-- encoders are assigned the type
         ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <logger name="io.netty" level="INFO"/>
  <root level="INFO">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>

Now create the Hello World workflow. Add a HelloWorldImpl class with a sayHello method that logs "Hello …":

import com.uber.cadence.worker.Worker;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;
import org.slf4j.Logger;

public class GettingStarted {

    private static Logger logger = Workflow.getLogger(GettingStarted.class);

    public interface HelloWorld {
        @WorkflowMethod
        void sayHello(String name);
    }

    public static class HelloWorldImpl implements HelloWorld {
        @Override
        public void sayHello(String name) {
            logger.info("Hello " + name + "!");
        }
    }
}

Register the workflow implementation to the Cadence framework with a worker connected to a Cadence service. Workers will connect to a Cadence service running locally by default.

public static void main(String[] args) {
    WorkflowClient workflowClient =
        WorkflowClient.newInstance(
            new WorkflowServiceTChannel(ClientOptions.defaultInstance()),
            WorkflowClientOptions.newBuilder().setDomain(DOMAIN).build());
    // Get worker to poll the task list.
    WorkerFactory factory = WorkerFactory.newInstance(workflowClient);
    Worker worker = factory.newWorker(TASK_LIST);
    worker.registerWorkflowImplementationTypes(HelloWorldImpl.class);
    factory.start();
}

Now you're ready to run the worker program. Here's an example log:

13:35:02.575 [main] INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel for service cadence-frontend, LibraryVersion: 2.2.0, FeatureVersion: 1.0.0

13:35:02.671 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=‘Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"'}, identity=45937@maxim-C02XD0AAJGH6}

13:35:02.673 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=‘null'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}

"Hello"'isn't printing, because the worker only hosts the workflow code. To execute the workflow, start it with the Cadence CLI:

$ docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \"World\"
Started Workflow Id: bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7, run Id: e7c40431-8e23-485b-9649-e8f161219efe

Now the program gives this output:

13:35:02.575 [main] INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel for service cadence-frontend, LibraryVersion: 2.2.0, FeatureVersion: 1.0.0

13:35:02.671 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=‘Workflow Poller taskList="HelloWorldTaskList", domain=“test-domain”, type="workflow"'}, identity=45937@maxim-C02XD0AAJGH6}

13:35:02.673 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=‘null'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}

13:40:28.308 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - Hello World!

Success! Now run another workflow execution:

$ docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \"Cadence\"

Started Workflow Id: d2083532-9c68-49ab-90e1-d960175377a7, run Id: 331bfa04-834b-45a7-861e-bcb9f6ddae3e

You should get this output:

13:35:02.575 [main] INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel for service cadence-frontend, LibraryVersion: 2.2.0, FeatureVersion: 1.0.0

13:35:02.671 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=‘Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"'}, identity=45937@maxim-C02XD0AAJGH6}

13:35:02.673 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=‘null'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}

13:40:28.308 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - Hello World!

13:42:34.994 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - Hello Cadence!

Lastly, use this CLI to list the workflow:

$ docker run --network=host --rm ubercadence/cli:master --do test-domain workflow list

WORKFLOW TYPE | WORKFLOW ID | RUN ID | START TIME | EXECUTION TIME | END TIME

HelloWorld::sayHello | d2083532-9c68-49ab-90e1-d960175377a7 | 331bfa04-834b-45a7-861e-bcb9f6ddae3e | 20:42:34 | 20:42:34 | 20:42:35

HelloWorld::sayHello | bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7 | e7c40431-8e23-485b-9649-e8f161219efe | 20:40:28 | 20:40:28 | 20:40:29

Look over the workflow execution history as well:

$ docker run --network=host --rm ubercadence/cli:master --do test-domain workflow showid 1965109f-607f-4b14-a5f2-24399a7b8fa7
1 WorkflowExecutionStarted {WorkflowType:{Name:HelloWorld::sayHello},
TaskList:{Name:HelloWorldTaskList},
Input:["World"],
ExecutionStartToCloseTimeoutSeconds:3600,
TaskStartToCloseTimeoutSeconds:10,
ContinuedFailureDetails:[],
LastCompletionResult:[],
Identity:cadence-cli@linuxkit-025000000001,
Attempt:0,
FirstDecisionTaskBackoffSeconds:0}
2 DecisionTaskScheduled {TaskList:{Name:HelloWorldTaskList},
StartToCloseTimeoutSeconds:10,
Attempt:0}
3 DecisionTaskStarted {ScheduledEventId:2,
Identity:45937@maxim-C02XD0AAJGH6,
RequestId:481a14e5-67a4-436e-9a23-7f7fb7f87ef3}
4 DecisionTaskCompleted {ExecutionContext:[],
ScheduledEventId:2,
StartedEventId:3,
Identity:45937@maxim-C02XD0AAJGH6}
5 WorkflowExecutionCompleted {Result:[],
DecisionTaskCompletedEventId:4}

It may be a simple workflow, but looking at the history is quite informative. The history's value as a troubleshooting, analytics, and compliance tool only increases with the complexity of the workflow. As a best practice, automatically archive the history to a long-term blob store when workflows complete.

Try Cadence

Cadence offers transformative advantages for organizations and application development teams charged with creating and managing high-scale distributed applications built for high durability, availability, and scalability. Cadence is available to all as free and open source software, making it simple for teams to explore its capabilities and determine if Cadence is a strong fit for their organizations.

Using Cadence is as simple as cloning the Cadence server's Git repository or pulling its container image. For more details on getting started, visit: https://cadenceworkflow.io/docs/get-started/.

Cadence simplifies the complexity of distributed systems so that developers can focus on creating applications built for high durability, availability, and scalability.

Image by:

opensource.com

Programming DevOps Alternatives This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

The only Linux command you need to know

Thu, 06/02/2022 - 15:00
The only Linux command you need to know Seth Kenlon Thu, 06/02/2022 - 03:00

Information about Linux and open source abounds on the internet, but when you're entrenched in your work, there's often a need for quick documentation. Since the early days of Unix, well before Linux even existed, there have been the man (short for "manual") and info commands, both of which display official project documentation about commands, configuration files, system calls, and more.

There's a debate over whether man and info pages are meant as helpful reminders for users who already know how to use a tool, or an intro for first time users. Either way, both man and info pages describe tools and how to use them, and rarely address specific tasks and how to accomplish them. It's for that very reason that the cheat command was developed.

For instance, suppose you can't remember how to unarchive a tar file. The man page provides you with all the options you require, but it leaves it up to you to translate this information into a functional command:

tar -A [OPTIONS] ARCHIVE ARCHIVE
tar -c [-f ARCHIVE] [OPTIONS] [FILE...]
tar -d [-f ARCHIVE] [OPTIONS] [FILE...]
tar -t [-f ARCHIVE] [OPTIONS] [MEMBER...]
tar -r [-f ARCHIVE] [OPTIONS] [FILE...]
tar -u [-f ARCHIVE] [OPTIONS] [FILE...]
tar -x [-f ARCHIVE] [OPTIONS] [MEMBER...]

That's exactly what some users need, but it confounds other users. The cheat sheet for tar, by contrast, provides complete common commands:

$ cheat tar

# To extract an uncompressed archive:
tar -xvf /path/to/foo.tar

# To extract a .tar in specified Directory:
tar -xvf /path/to/foo.tar -C /path/to/destination/

# To create an uncompressed archive:
tar -cvf /path/to/foo.tar /path/to/foo/

# To extract a .tgz or .tar.gz archive:
tar -xzvf /path/to/foo.tgz
tar -xzvf /path/to/foo.tar.gz
[...]

It's exactly what you need, when you need it.

The Linux cheat command

The cheat command is a utility to search for and display a list of example tasks you might do with a Linux command. As with many Unix commands, there are different implementations of the same concept, including one written in Go and one, which I help maintain, written in just 100 lines of Bash.

To install the Go version, download the latest release and put it somewhere in your path, such as ~/.local/bin/ or /usr/local/bin. To install the Bash version, download the latest release and run the install-cheat.sh script:

$ sh ./install-cheat.sh

Or to configure the installation, use Autotools:

$ aclocal ; autoconf
$ automake --add-missing ; autoreconf
$ ./configure --prefix=$HOME/.local
$ make
$ make install

Get cheat sheets for your Linux terminal

Cheat sheets are just plain text files containing common commands. The main collection of cheat sheets is available at Github.com/cheat/cheatsheets. The Go version of cheat downloads cheatsheets for you when you first run the command. If you're using the Bash version of cheat, the --fetch option downloads cheatsheets for you:

$ cheat --fetch

As with man pages, you can have multiple collections of cheat sheets on your system. The Go version of cheat uses a YAML config file to define where each collection is located. The Bash version defines the path during the install, and by default downloads the Github.com/cheat/cheatsheets collection as well as Opensource.com's own Gitlab.com/opensource.com/cheatsheets collection.

List cheat sheets

To list the cheat sheets on your system, use the --list option:

$ cheat --list
7z
ab
acl
alias
ansi
ansible
ansible-galaxy
ansible-vault
apk
[...]

View a Linux cheat sheet

Viewing a cheat sheet is as easy as viewing a man or info page. Just provide the name of the command you need help with:

$ cheat alias

# To show a list of your current shell aliases:
alias

# To alias `ls -l` to `ll`:
alias ll='ls -l'

By default, the cheat command uses your environment's pager. Your pager is set with the PAGER environment variable. You can override that temporarily by redefining the PAGER variable before running the cheat command:

$ PAGER=most cheat less

If you just want to cat the cheat sheet into your terminal without a pager, the Bash version has a --cat option for convenience:

$ cheat --cat less

It's not actually cheating

The cheat system cuts to the chase. You don't have to piece together clues about how to use a command. You just follow the examples. Of course, for complex commands, it's not a shortcut for a thorough study of the actual documentation, but for quick reference, it's as fast as it gets.

You can even create your own cheat sheet just by placing a file in one of the cheat sheet collections. Good news! Because the projects are open source, you can contribute your personal cheat sheets to the GitHub collection. And more good news! When there's a new Opensource.com cheat sheet release, we'll include a plain text version from now on so you can add that to your collection.
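For example, here's a minimal personal cheat sheet. It assumes the Go version's default personal collection lives at ~/.config/cheat/cheatsheets/personal; if your configuration puts cheat sheets elsewhere, adjust the path accordingly:

$ mkdir -p ~/.config/cheat/cheatsheets/personal
$ cat << 'EOF' > ~/.config/cheat/cheatsheets/personal/ssh-tunnel
# To forward local port 8080 to port 80 on a remote host:
ssh -L 8080:localhost:80 user@remote-host
EOF
$ cheat ssh-tunnel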

The command is called cheat, but as any Linux user will assure you, it's not actually cheating. It's working smarter, the open source way.

The Linux cheat command is a utility to search for and display a list of example tasks you might do with a command.

Image by:

Opensource.com

Linux This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

A visual guide to Kubernetes networking fundamentals

Wed, 06/01/2022 - 15:00
A visual guide to Kubernetes networking fundamentals Nived Velayudhan Wed, 06/01/2022 - 03:00

Moving from physical networks using switches, routers, and ethernet cables to virtual networks using software-defined networks (SDN) and virtual interfaces involves a slight learning curve. Of course, the principles remain the same, but there are different specifications and best practices. Kubernetes has its own set of rules, and if you're dealing with containers and the cloud, it helps to understand how Kubernetes networking works.

The Kubernetes Network Model has a few general rules to keep in mind:

  1. Every Pod gets its own IP address: There should be no need to create links between Pods and no need to map container ports to host ports.
  2. NAT is not required: Pods on a node should be able to communicate with all Pods on all nodes without NAT.
  3. Agents get all-access passes: Agents on a node (system daemons, Kubelet) can communicate with all the Pods in that node.
  4. Shared namespaces: Containers within a Pod share a network namespace (IP and MAC address), so they can communicate with each other using the loopback address.
What Kubernetes networking solves

Kubernetes networking is designed to ensure that the different entity types within Kubernetes can communicate. The layout of a Kubernetes infrastructure has, by design, a lot of separation. Namespaces, containers, and Pods are meant to keep components distinct from one another, so a highly structured plan for communication is important.

Image by:

(Nived Velayudhan, CC BY-SA 4.0)

Container-to-container networking

Container-to-container networking happens through the Pod network namespace. Network namespaces allow you to have separate network interfaces and routing tables that are isolated from the rest of the system and operate independently. Every Pod has its own network namespace, and containers inside that Pod share the same IP address and ports. All communication between these containers happens through localhost, as they are all part of the same namespace. (Represented by the green line in the diagram.)
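A quick way to see that sharing in action is a two-container Pod in which one container queries the other over localhost. The manifest below is a generic sketch (the images, names, and port are illustrative, not something from this article):

$ kubectl apply -f - << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
$ kubectl exec shared-netns-demo -c sidecar -- wget -qO- http://localhost:80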

Pod-to-Pod networking

With Kubernetes, every node has a designated CIDR range of IPs for Pods. This ensures that every Pod receives a unique IP address that other Pods in the cluster can see. When a new Pod is created, the IP addresses never overlap. Unlike container-to-container networking, Pod-to-Pod communication happens using real IPs, whether you deploy the Pod on the same node or a different node in the cluster.
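You can observe those per-Pod addresses directly with kubectl; the IP column shows each Pod's routable address and the NODE column shows which node's CIDR range it was allocated from (names and addresses below are placeholders, and some columns are trimmed):

$ kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE
pod-a   1/1     Running   0          2m    10.244.1.4   worker-1
pod-b   1/1     Running   0          2m    10.244.2.7   worker-2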

The diagram shows that for Pods to communicate with each other, the traffic must flow between the Pod network namespace and the Root network namespace. This is achieved by connecting both the Pod namespace and the Root namespace by a virtual ethernet device or a veth pair (veth0 to Pod namespace 1 and veth1 to Pod namespace 2 in the diagram). A virtual network bridge connects these virtual interfaces, allowing traffic to flow between them using the Address Resolution Protocol (ARP).

When data is sent from Pod 1 to Pod 2, the flow of events is:

  1. Pod 1 traffic flows through eth0 to the Root network namespace's virtual interface veth0.
  2. Traffic then goes through veth0 to the virtual bridge, which is connected to veth1.
  3. Traffic goes through the virtual bridge to veth1.
  4. Finally, traffic reaches the eth0 interface of Pod 2 through veth1.
Pod-to-Service networking

Pods are very dynamic. They may need to scale up or down based on demand. They may be created again in case of an application crash or a node failure. These events cause a Pod's IP address to change, which would make networking a challenge.

Image by:

(Nived Velayudhan, CC BY-SA 4.0)

Kubernetes solves this problem by using the Service function, which does the following:

  1. Assigns a static virtual IP address in the frontend to connect any backend Pods associated with the Service.
  2. Load-balances any traffic addressed to this virtual IP to the set of backend Pods.
  3. Keeps track of the IP address of a Pod, such that even if the Pod IP address changes, the clients don't have any trouble connecting to the Pod because they only directly connect with the static virtual IP address of the Service itself.

The in-cluster load balancing occurs in two ways:

  1. IPTABLES: In this mode, kube-proxy watches for changes in the API Server. For each new Service, it installs iptables rules, which capture traffic to the Service's clusterIP and port, then redirects traffic to the backend Pod for the Service. The Pod is selected randomly. This mode is reliable and has a lower system overhead because Linux Netfilter handles traffic without the need to switch between userspace and kernel space.
  2. IPVS: IPVS is built on top of Netfilter and implements transport-layer load balancing. IPVS uses the Netfilter hook function, using the hash table as the underlying data structure, and works in the kernel space. This means that kube-proxy in IPVS mode redirects traffic with lower latency, higher throughput, and better performance than kube-proxy in iptables mode.

The diagram above shows the packet flow from Pod 1 to Pod 3 through a Service to a different node (marked in red). A packet headed for the virtual bridge has to use the default route (eth0), because ARP running on the bridge doesn't understand the Service. The packets are then filtered by iptables, which applies the rules that kube-proxy defined on the node, which is why the diagram shows the path it does.
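If you're curious what those rules look like on a node, you can inspect them yourself. In iptables mode, kube-proxy manages chains such as KUBE-SERVICES in the NAT table; in IPVS mode, ipvsadm lists the virtual servers. Both commands need root, and the exact chain layout varies by Kubernetes version:

$ sudo iptables -t nat -L KUBE-SERVICES | head
$ sudo ipvsadm -Ln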

Internet-to-Service networking

So far, I have discussed how traffic is routed within a cluster. There's another side to Kubernetes networking, though, and that's exposing an application to the external network.

Image by:

(Nived Velayudhan, CC BY-SA 4.0)

You can expose an application to an external network in two different ways.

  1. Egress: Use this when you want to route traffic from your Kubernetes Service out to the Internet. In this case, iptables performs the source NAT, so the traffic appears to be coming from the node and not the Pod.
  2. Ingress: This is the incoming traffic from the external world to Services. Ingress also allows and blocks particular communications with Services using rules for connections. Typically, there are two ingress solutions that function on different network stack regions: the service load balancer and the ingress controller.
Discovering Services

There are two ways Kubernetes discovers a Service:

  1. Environment Variables: The kubelet service running on the node where your Pod runs is responsible for setting up environment variables for each active service in the format {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT. You must create the Service before the client Pods come into existence. Otherwise, those client Pods won't have their environment variables populated.
  2. DNS: The DNS service is implemented as a Kubernetes service that maps to one or more DNS server Pods, which are scheduled just like any other Pod. Pods in the cluster are configured to use the DNS service, with a DNS search list that includes the Pod's own namespace and the cluster's default domain. A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new Services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, all Pods can automatically resolve Services by their DNS name. The Kubernetes DNS server is the only way to access ExternalName Services. A quick way to test this resolution is shown just after this list.
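Here's a quick, disposable way to confirm that in-cluster DNS resolves a Service name. The busybox:1.28 image is commonly used for this because its nslookup behaves predictably, but any image with DNS tools works:

$ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
    -- nslookup kubernetes.default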
ServiceTypes for publishing Services:

Kubernetes Services provide you with a way of accessing a group of Pods, usually defined by using a label selector. This could be applications trying to access other applications within the cluster, or it could allow you to expose an application running in the cluster to the external world. Kubernetes ServiceTypes enable you to specify what kind of Service you want.

Image by:

(Ahmet Alp Balkan, CC BY-SA 4.0)

The different ServiceTypes are:

  1. ClusterIP: This is the default ServiceType. It makes the Service only reachable from within the cluster and allows applications within the cluster to communicate with each other. There is no external access.
  2. LoadBalancer: This ServiceType exposes the Services externally using the cloud provider's load balancer. Traffic from the external load balancer is directed to the backend Pods. The cloud provider decides how it is load-balanced.
  3. NodePort: This allows the external traffic to access the Service by opening a specific port on all the nodes. Any traffic sent to this port is then forwarded to the Service (a minimal manifest using this type appears after this list).
  4. ExternalName: This type of Service maps a Service to a DNS name by returning a CNAME record containing the contents of the externalName field. No proxying of any kind is set up.
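To make the NodePort type concrete, here's a minimal Service sketch (the name, selector, and ports are illustrative; a nodePort must fall within the cluster's configured range, 30000-32767 by default):

$ kubectl apply -f - << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
EOF
$ kubectl get service demo-nodeport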
Networking software

Networking within Kubernetes isn't so different from networking in the physical world, as long as you understand the technologies used. Study up, remember networking basics, and you'll have no trouble enabling communication between containers, Pods, and Services.

Networking within Kubernetes isn't so different from networking in the physical world. Remember networking basics, and you'll have no trouble enabling communication between containers, Pods, and Services.

Image by:

Opensource.com

Kubernetes Containers This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Linux desktops: KDE vs GNOME

Wed, 06/01/2022 - 15:00
Linux desktops: KDE vs GNOME Seth Kenlon Wed, 06/01/2022 - 03:00

I'm an ardent KDE Plasma Desktop user, but at work I happily use GNOME. Without getting into the question of which desktop I'd take to a desert island (that happens to have a power outlet), I see the merits of both desktops, and I'd rather use either of them than non-open source desktop alternatives.

I've tried the proprietary alternatives, and believe me, they're not fun (it took one over a decade to get virtual workspaces, and the other still doesn't have a screenshot function built in). And for all the collaboration that the KDE and GNOME developers do these days at conferences like GUADEC, there's still a great philosophical divide between the two.

And you know what? That's a good thing.

Missing the tree for the forest

As a KDE user, I'm used to options. When I right-click on an object, whether it's a file, a widget, or even the empty space between widgets, I expect to see at least 10 options for what I'd like to do or how I'd like to configure the object. I like that because I like to configure my environment. I see that as the "power" part of being a "power user." I want to be able to adapt my environment to my whims to make it work better for me, even when the way I work is utterly unique and maybe not even sensible.

GNOME doesn't give the user dozens of options with every right-click. In fact, GNOME doesn't even give you that many options when you go to Settings. To get configuration options, you have to download a tool called Tweaks, and for some you must install extensions.

I'm not a GNOME developer, but I've set up a lot of Linux computers for friends and colleagues, and one thing I've noticed is that everybody has a unique perception of interface design. Some people, myself included, enjoy seeing a multitude of choices readily available at every turn.

Other people don't.

Here's what I see when I right-click on a file in the KDE Plasma Desktop:

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Here's what I see when I right-click on a file in the GNOME desktop:

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Including submenus, my Plasma Desktop has over 30 choices in a right-click. Of course, that's partly because I've configured it that way, and context matters, too. I have more options in a Git repository, for instance, than outside of one. By contrast, GNOME has 11 options in a right-click.

Bottom line: Some users aren't keen to mentally filter out 29 different options so they can see the one option they're looking for. Minimalism allows users to focus on essential and common actions. Having only the essential options can be comforting for new users, a mental relief for the experienced user, and efficient for all users.

Mistake vectors

As a Linux "power user," I fall prey to the old adage that I'm responsible for my own errors. It's the stuff of legend that Linux gives you access to "dangerous" commands and that, should you choose to use them, you're implicitly forgoing your right to complain about the results. For the record, I've never agreed with this sentiment, and I've written and promoted tools that help avoid mistakes in the terminal.

The problem is that mistakes are not planned. If you could plan your mistakes, you could choose not to make them. What actually happens is that mistakes occur when you haven't planned them, usually at the worst possible moment.

One way to reduce error is to reduce choice. When you have only two buttons to press, you can make only one mistake. It's also easier to identify what mistake you've made when there are fewer avenues to take. When you have five buttons, not only can you make four mistakes, but you also might not recall which button out of the five was the wrong one (and the other wrong one, and the other, and so on).

Bottom line: Fewer choices mean fewer mistakes for users.

Maintenance

If you've ever coded anything, this story might seem familiar to you. It's Friday evening, and you have an idea for a fun little improvement to your code. It seems like an easy feature to implement; you can practically see the code changes in your head. You have nothing better to do that evening, so you get to work. Three weeks later, you've implemented the feature, and all it took was a complete overhaul of your code.

This is not an uncommon developer story. It happens because code changes can have unanticipated ripple effects that you just don't foresee before making the change. In other words, code is expensive. The more code you write, the more you have to maintain. The less code you write, the fewer bugs you have to hunt.

The eye of the beholder

Most users customize their desktop with digital wallpaper. Beyond that, however, I expect most people use the desktop they've been given. So the desktop that GNOME and KDE developers provide is generally what people use, and in the end not just beauty but also the best workflow really are in the eye of the beholder.

I fall into a particular work style when I'm using KDE, and a different style of work when I use GNOME. After all, things are arranged in different locations (although I keep my KDE panel at the top of my screen partly to mimic GNOME's design), and the file managers and the layout of my virtual workspaces are different.

It's a luxury of open source to have arbitrary preferences for your tools. There's plenty to choose from, so you don't have to justify what you do or don't like about one desktop or another. If you try one and can't get used to it, you can always switch to the other.

Minimalism with Linux

I used to think that it made sense to use a tool with 100 options because you can just ignore the 95 that you don't need and focus on the five that you do. The more I use GNOME, however, the more I understand the advantages of minimalism. Reduced design helps some users focus on what matters, it helps others avoid confusion and mistakes due to a complex user interface (UI), and it helps developers maintain quality code. And some people just happen to prefer it.

There's a lesson here for users and developers alike, but it's not that one is better than the other. In fact, these principles apply to a lot more than just KDE and GNOME. User experience and developer experience are each important, and sometimes complexity is warranted while other times minimalism has the advantage.

Comparing two open source desktops side by side shows that both styles serve important purposes.

Image by:

Opensource.com

Linux This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How dynamic linking for modular libraries works on Linux

Tue, 05/31/2022 - 15:00
How dynamic linking for modular libraries works on Linux Jayashree Hutt… Tue, 05/31/2022 - 03:00

When you write an application using the C programming language, your code usually has multiple source files.

Ultimately, these files must be compiled into a single executable. You can do this by creating either static or dynamic libraries (the latter are also referred to as shared libraries). These two types of libraries vary in how they are created and linked. Both have advantages and disadvantages, depending on your use case.

Dynamic linking is the most common method, especially on Linux systems. Dynamic linking keeps libraries modular, so just one library can be shared between any number of applications. Modularity also allows a shared library to be updated independently of the applications that rely upon it.

In this article, I demonstrate how dynamic linking works. In a future article, I'll demonstrate static linking.

Linker

A linker is a command that combines several pieces of a program together and reorganizes the memory allocation for them.

The functions of a linker include:

  • Integrating all the pieces of a program
  • Figuring out a new memory organization so that all the pieces fit together
  • Revising addresses so that the program can run under the new memory organization
  • Resolving symbolic references

As a result of all these linker functionalities, a runnable program called an executable is created. Before you can create a dynamically linked executable, you need some libraries to link to and an application to compile. Get your favorite text editor ready and follow along.

Create the object files

First, create the header file mymath.h with these function signatures:

int add(int a, int b);
int sub(int a, int b);
int mult(int a, int b);
int divi(int a, int b);

Create add.c, sub.c, mult.c, and divi.c with these function definitions. I'm placing all of the code in one code block, so divide it up among four files, as indicated in the comments:

// add.c
int add(int a, int b){
return (a+b);
}

//sub.c
int sub(int a, int b){
return (a-b);
}

//mult.c
int mult(int a, int b){
return (a*b);
}

//divi.c
int divi(int a, int b){
return (a/b);
}

Now generate object files add.o, sub.o, mult.o, and divi.o using GCC:

$ gcc -c add.c sub.c mult.c divi.c

The -c option skips the linking step and creates only object files.

Creating a shared object file

Dynamic libraries are linked during the execution of the final executable. Only the name of the dynamic library is placed in the final executable. The actual linking happens during runtime, when both executable and library are placed in the main memory.

In addition to being sharable, another advantage of a dynamic library is that it reduces the size of the final executable file. Instead of having a redundant copy of the library, an application using a library includes only the name of the library when the final executable is created.

You can create dynamic libraries from your existing sample code:

$ gcc -Wall -fPIC -c add.c sub.c mult.c divi.c

The option -fPIC tells GCC to generate position-independent code (PIC). The -Wall option isn't necessary and has nothing to do with how the code is compiling. Still, it's a valuable option because it enables compiler warnings, which can be helpful when troubleshooting.

Using GCC, create the shared library libmymath.so:

$ gcc -shared -o libmymath.so \
add.o sub.o mult.o divi.o

You have now created a simple example math library, libmymath.so, which you can use in C code. There are, of course, very complex C libraries out there, and this is the process their developers use to generate the final product that you or I install for use in C code.
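If you want to confirm that the shared object exports the four math functions before using it, nm with the -D (dynamic symbols) option lists them; each should appear with a T (text/code) symbol type:

$ nm -D libmymath.so | grep -E ' T (add|sub|mult|divi)$'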

Next, you can use your new math library in some custom code, then link it.

Creating a dynamically linked executable

Suppose you've written a command for mathematics. Create a file called mathDemo.c and paste this code into it:

#include <mymath.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
  int x, y;
  printf("Enter two numbers\n");
  scanf("%d%d",&x,&y);
 
  printf("\n%d + %d = %d", x, y, add(x, y));
  printf("\n%d - %d = %d", x, y, sub(x, y));
  printf("\n%d * %d = %d", x, y, mult(x, y));

  if(y==0){
    printf("\nDenominator is zero so can't perform division\n");
      exit(0);
  }else{
      printf("\n%d / %d = %d\n", x, y, divi(x, y));
      return 0;
  }
}

Notice that the first line is an include statement referencing, by name, your own libmymath library. To use a shared library, you must have it installed. If you don't install the library you use, then when your executable runs and searches for the included library, it won't be able to find it. Should you need to compile code without installing a library to a known directory, there are ways to override default settings. For general use, however, it's expected that libraries exist in known locations, so that's what I'm demonstrating here.

Copy the file libmymath.so to a standard system directory, such as /usr/lib64, and then run ldconfig. The ldconfig command creates the required links and cache to the most recent shared libraries found in the standard library directories.

$ sudo cp libmymath.so /usr/lib64/
$ sudo ldconfig

Compiling the application
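Before compiling against the library, you can optionally confirm that the runtime linker's cache picked it up. The -p option of ldconfig prints the cached libraries; on an x86-64 system the entry looks roughly like this:

$ ldconfig -p | grep mymath
        libmymath.so (libc6,x86-64) => /usr/lib64/libmymath.so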

Create an object file called mathDemo.o from your application source code (mathDemo.c):

$ gcc -I . -c mathDemo.c

The -I option tells GCC to search for header files (mymath.h in this case) in the directory listed after it. In this case, you're specifying the current directory, represented by a single dot (.). Create an executable, referring to your shared math library by name using the -l option:

$ gcc -o mathDynamic mathDemo.o -lmymath

GCC finds libmymath.so because it exists in a default system library directory. Use ldd to verify the shared libraries used:

$ ldd ./mathDynamic
    linux-vdso.so.1 (0x00007fffe6a30000)
    libmymath.so => /usr/lib64/libmymath.so (0x00007fe4d4d33000)
    libc.so.6 => /lib64/libc.so.6 (0x00007fe4d4b29000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fe4d4d4e000)

Take a look at the size of the mathDynamic executable:

$ du ./mathDynamic
24   ./mathDynamic

It's a small application, of course, and the amount of disk space it occupies reflects that. For comparison, a statically linked version of the same code (as you'll see in my next article) is 932K!

$ ./mathDynamic
Enter two numbers
25
5

25 + 5 = 30
25 - 5 = 20
25 * 5 = 125
25 / 5 = 5

You can verify that it's dynamically linked with the file command:

$ file ./mathDynamic
./mathDynamic: ELF 64-bit LSB executable, x86-64,
dynamically linked,
interpreter /lib64/ld-linux-x86-64.so.2,
with debug_info, not stripped

Success!

Dynamically linking

A shared library leads to a lightweight executable, as the linking happens during runtime. Because it resolves references during runtime, it does take more time for execution. However, since the vast majority of commands on everyday Linux systems are dynamically linked and on modern hardware, the time saved is negligible. Its inherent modularity is a powerful feature for developers and users alike.

In this article, I described how to create dynamic libraries and link them into a final executable. I'll use the same source code to create a statically linked executable in my next article.

Learn how to combine multiple C object files into a single executable with dynamic libraries.

Image by:

Paul Lewin. Modified by Opensource.com. CC BY-SA 2.0

Programming Linux This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Dynamically linking libraries while compiling code

Mon, 05/30/2022 - 15:00
Dynamically linking libraries while compiling code Seth Kenlon Mon, 05/30/2022 - 03:00

Compiling software is something that developers do a lot, and in open source some users even choose to do it themselves. Linux podcaster Dann Washko calls source code the "universal package format" because it contains all the components necessary to make an application run on any platform. Of course, not all source code is written for all systems, so it's only "universal" within the subset of targeted systems, but the point is that source code is extremely flexible. With open source, you can decide how code is compiled and run.

When you're compiling code, you're usually dealing with multiple source files. Developers tend to keep different classes or modules in separate files so that they can be maintained separately, and possibly even used by different projects. But when you're compiling these files, many of them get compiled into a single executable.

This is usually done by creating shared libraries, and then dynamically linking back to them from the executable. This keeps the executable small by keeping modular functions external, and ensures that libraries can be updated independently of the applications that use them.

Locating a shared object during compilation

When you're compiling with GCC, you usually need a library to be installed on your workstation for GCC to be able to locate it. By default, GCC assumes that libraries are in a system library path, such as /lib64 and /usr/lib64. However, if you're linking to a library of your own that's not yet installed, or if you need to link to a library that's not installed in a standard location, then you have to help GCC find the files.

There are two options significant for finding libraries in GCC:

  • -L (capital L) adds an additional library path to GCC's search locations.
  • -l (lowercase L) sets the name of the library you want to link against.

For example, suppose you've written a library called libexample.so, and you want to use it when compiling your application demo.c. First, create an object file from demo.c:
 

$ gcc -I ./include -c src/demo.c

The -I option adds a directory to GCC's search path for header files. In this example, I assume that custom header files are in a local directory called include. The -c option prevents GCC from running a linker, because this task is only to create an object file. And that's exactly what happens:
 

$ ls
demo.o   include/   lib/    src/

Now you can use the -L option to set a path for your library, and compile:
 

$ gcc -L`pwd`/lib -o myDemo demo.o -lexample

Notice that the -L option comes before the -l option. This is significant, because if -L hasn't been added to GCC's search path before you tell GCC to look for a non-default library, GCC won't know to search in your custom location. The compilation succeeds as expected, but there's a problem when you attempt to run it:
 

$ ./myDemo
./myDemo: error while loading shared libraries:
libexample.so: cannot open shared object file:
No such file or directory

Troubleshooting with ldd

The ldd utility prints shared object dependencies, and it can be useful when troubleshooting issues like this:

$ ldd ./myDemo
        linux-vdso.so.1 (0x00007ffe151df000)
        libexample.so => not found
        libc.so.6 => /lib64/libc.so.6 (0x00007f514b60a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f514b839000)

You already knew that libexample couldn't be located, but the ldd output at least affirms what's expected from a working library. For instance, libc.so.6 has been located, and ldd displays its full path.

LD_LIBRARY_PATH

The LD_LIBRARY_PATH environment variable defines the path to libraries. If you're running an application that relies on a library that's not installed to a standard directory, you can add to the system's library search path using LD_LIBRARY_PATH.

There are several ways to set environment variables, but the most flexible is to place them before you run a command. Look at what setting LD_LIBRARY_PATH does for the ldd command when it's analyzing a "broken" executable:
 

$ LD_LIBRARY_PATH=`pwd`/lib ldd ./myDemo
   linux-vdso.so.1 (0x00007ffe515bb000)
   libexample.so => /tmp/Demo/lib/libexample.so (0x0000...
   libc.so.6 => /lib64/libc.so.6 (0x00007eff037ee000)
   /lib64/ld-linux-x86-64.so.2 (0x00007eff03a22000)

It applies just as well to your custom command:
 

$ LD_LIBRARY_PATH=`pwd`/lib myDemo
hello world!

If you move the library file or the executable, however, it breaks again:
 

$ mv lib/libexample.so ~/.local/lib64
$ LD_LIBRARY_PATH=`pwd`/lib myDemo
./myDemo: error while loading shared libraries...

To fix it, you must adjust the LD_LIBRARY_PATH to match the library's new location:
 

$ LD_LIBRARY_PATH=~/.local/lib64 myDemo
hello world!

When to use LD_LIBRARY_PATH

In most cases, LD_LIBRARY_PATH isn't a variable you need to set. By design, libraries are installed to /usr/lib64 and so applications naturally search it for their required libraries. You may need to use LD_LIBRARY_PATH in two cases:

  • You're compiling software that needs to link against a library that itself has just been compiled and has not yet been installed. Good build systems, such as Autotools and CMake, can help handle this.
  • You're bundling software that's designed to run out of a single directory, with no install script or an install script that places libraries in non-standard directories. Several applications have releases that a Linux user can download, copy to /opt, and run with "no install." The LD_LIBRARY_PATH variable gets set through wrapper scripts, so the user often isn't even aware it's been set (a minimal wrapper sketch follows this list).
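Here's what such a wrapper might look like, as a minimal sketch. It assumes an application unpacked under /opt/myapp, with its bundled libraries in /opt/myapp/lib and its real binary at /opt/myapp/bin/myapp (all of those names are illustrative):

#!/bin/sh
# Prepend the bundled library directory to the search path,
# preserving any LD_LIBRARY_PATH the user already set.
APP_HOME=/opt/myapp
export LD_LIBRARY_PATH="$APP_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# Replace this shell with the real binary, passing arguments through.
exec "$APP_HOME/bin/myapp" "$@"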

Compiling software gives you a lot of flexibility in how you run your system. The LD_LIBRARY_PATH variable, along with the -L and -l GCC options, are components of that flexibility.


Image by:

WOCinTech Chat. Modified by Opensource.com. CC BY-SA 4.0

Programming This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How I automate plant care using Raspberry Pi and open source tools

Sat, 05/28/2022 - 15:00
How I automate plant care using Raspberry Pi and open source tools Kevin Sonney Sat, 05/28/2022 - 03:00

Automation is a hot topic right now. In my day job as an SRE, part of my remit is to automate as many repetitive tasks as possible. But how many of us do that in our daily, not-work, lives? This year, I am focused on automating away the toil so that we can focus on the things that are important.

Home Assistant has so many features and integrations, it can be overwhelming at times. And as I’ve mentioned in previous articles, I use it for many things, including monitoring plants.

$ bluetoothctl scan le
Discovery started
[NEW] Device
[NEW] Device
[NEW] Device
[NEW] Device
[NEW] Device
[NEW] Device
[NEW] Device

There are numerous little devices you can buy to keep an eye on your plants. The Xiaomi Mi Flora devices are small, inexpensive, and have a native integration with Home Assistant. Which is great—as long as the plant and Home Assistant are in the same room.


We've all been in places where one spot has a great signal, and moving 1mm in any direction makes it a dead zone—and it's even more frustrating when you're indoors. Most Bluetooth LE (Low Energy) devices have a range of about 100m, but that's using line of sight, and does not include interference from things like walls, doors, windows, or major appliances (seriously, a refrigerator is a great big signal blocker). Remote Home Assistant is perfect for this. You can set up a Raspberry Pi with Home Assistant Operating System (HASSOS) in the room with the plants, and then use the main Home Assistant as a central control panel. I tried this on a Raspberry Pi Zero W, and while the Pi Zero W can run Home Assistant, it doesn't do it very well. You probably want a Pi 3 or Pi 4 when doing this.

Start with a fresh HASSOS installation, and make sure everything is up-to-date, then install HACS and Remote Home Assistant like I did in my article Automate and manage multiple devices with Remote Home Assistant. Now for the tricky bits. Install the SSH and Web Terminal Add-on, and turn off Protection Mode so that you can get a session on the base OS and not in a container. Start the add-on, and it appears on the sidebar. Click on it to load the terminal.

You are now in a root session terminal on the Pi. Insert all the warnings here about being careful and how you can mess up the system (you know the ones). Inside the terminal, run bluetoothctl scan le to find the plant sensor, often named "Flower Care" like mine.

Image by:

(Kevin Sonney, CC BY-SA 4.0)

Make a note of the address for the plant sensor. If you have more than one, it can be confusing to figure out which is which, and it may take some trial and error. Once you've identified the plant sensor, it is time to add it to Home Assistant. This requires editing the configuration.yaml file directly, either with the File Editor add-on or in the terminal you just created. In my case, I added both a sensor and a plant block to the configuration.

sensor:
  - platform: miflora
    scan_interval: 60
    mac: "C4:7C:8D:6C:DE:FE"
    name: "pitcher_plant"

plant:
  pitcher_plant:
    sensors:
      moisture: sensor.pitcher_plant_moisture
      battery: sensor.pitcher_plant_battery
      temperature: sensor.pitcher_plant_temperature
      conductivity: sensor.pitcher_plant_conductivity
      brightness: sensor.pitcher_plant_brightness

Save the file, and restart Home Assistant, and you should see a plant card on the Overview tab.

Image by:

(Kevin Sonney, CC BY-SA 4.0)

Once that's done, go back to the main Home Assistant, and add the newly available plant component to the list of things to import from the remote. You can then add the component to dashboards on the main HASS installation, and create automations and notifications based on the plant status.
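For instance, here's a sketch of a low-moisture notification you could add to the Home Assistant configuration on the instance where you build automations. The 20 percent threshold and the notify.notify service are assumptions, so substitute whichever notifier you actually use, and if your configuration already has an automation: block, add only the list item under it:

$ cat << 'EOF' >> configuration.yaml
automation:
  - alias: "Pitcher plant needs water"
    trigger:
      - platform: numeric_state
        entity_id: sensor.pitcher_plant_moisture
        below: 20
    action:
      - service: notify.notify
        data:
          message: "Pitcher plant moisture is low."
EOF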

I use this to monitor a pitcher plant, and I have more sensors on the way so I can keep tabs on all my houseplants—all of which live outside the Bluetooth range of my central Home Assistant Pi.

I keep tabs on all my houseplants by using Home Assistant and a Raspberry Pi.

Image by:

Opensource.com

Automation Home automation This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
