
Doing 64-bit math on a 16-bit system

By Jerome Shidel

A few years ago, I wrote a command-line math program for FreeDOS called VMATH. It was capable of performing only extremely simple mathematical operations on very small unsigned integers. With some recent interest in basic math in the FreeDOS community, I improved VMATH to provide basic math support on signed 64-bit integers.

The process of manipulating big numbers using only 16-bit 8086-compatible assembly instructions is not straightforward. I would like to share some samples of the techniques VMATH uses. Some of the methods are fairly easy to grasp, while others can seem a little strange. You may even learn an entirely new way of performing some basic math.

The techniques explained here to add, subtract, multiply, and divide 64-bit integers are not limited to just 64-bits. With a little basic understanding of assembly, these functions could be scaled to do math on integers of any bit size.

Before digging into those math functions, I want to cover some basics of numbers from the computer's perspective. 

How computers read numbers

An Intel-compatible CPU stores numbers in memory in little-endian order, from the least significant byte to the most significant. Each byte is made up of 8 binary bits, and two bytes make up a word.

A 64-bit number that is stored in memory uses 8 bytes (or 4 words). For example, a value of 74565 (0x12345 in hexadecimal) looks something like this:

as bytes: db 0x45, 0x23, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00
as words: dw 0x2345, 0x0001, 0x0000, 0x0000

When reading or writing data to memory, the CPU processes the bytes in the correct order. On a processor more modern than an 8086, there can be larger groups, such as a quadword which can represent the entire 64-bit integer as 0x0000000000012345.

The 8086 CPU doesn't understand such gigantic numbers. When writing a program for FreeDOS, you want something that can run on any PC, even an original IBM PC 5150. You also want to use techniques that can be scaled to any size integer. The capabilities of a more modern CPU do not really concern us.

For the purpose of doing integer math, the data can represent two different types of numbers.

The first is unsigned, which uses all of its bits to represent a positive number. Its value can be from 0 up to (2 ^ numberofbits) - 1. For example, 8 bits can have any value from 0 to 255, 16 bits from 0 to 65535, and so on.

Signed integers are very similar. However, the most significant bit of the number represents whether the number is positive (0) or negative (1). The first portion of the number is positive. It can range from 0 up to (2 ^ (numberofbits - 1)) - 1. The negative portion follows the positive, ranging from its lowest value (0 - (2 ^ (numberofbits - 1))) up to -1.

For example, an 8-bit number represents any value from 0 to 127 in the positive range, and -128 through -1 in the negative range. To help visualize it, consider the byte as the set of numbers [0…127,-128…-1]. Because -128 follows 127 in the set, adding 1 to 127 equals -128. While this may seem strange and backward, it actually makes doing basic math at this level much easier.

To perform basic addition, subtraction, multiplication, and division of very big integers, you should explore some simple routines to get a number's absolute or negative value. You will need them once you start doing math on signed integers.

Absolute and negative values

Getting the absolute value of a signed integer is not as bad as it may first seem. Because of how unsigned and signed numbers are represented in memory, there is a fairly easy solution. You can simply invert all the bits of a negative number and add 1 to get the result.

That might sound odd if you haven't worked in binary before, but that is how it works. To give you an example, take an 8-bit representation of a negative number, such as -5. Since it would be near the end of the [0…127,-128…-1] byte set, it would have a value of 0xfb in hexadecimal, or 11111011 in binary. If you flip all the bits, you get 0x04, or 00000100 in binary. Add 1 to that result and you have the answer. You just changed the value from -5 to +5.

You can write this procedure in assembly to return the absolute value of any 64-bit number:

; syntax, NASM for DOS
proc_ABS:
  ; on entry, the SI register points to the memory location in the
  ; data segment (DS) for the program containing the 64-bit
  ; number that will be made positive.
  ; On exit, the Carry Flag (CF) is set if the resulting number
  ; cannot be made positive. This only happens with the maximum
  ; negative value. Otherwise, CF is cleared.

  ; check most significant bit of highest byte
  test [si+7], byte 0x80

  ; if not set, the number is positive
  jz .done_ABS

  ; flip all the bits of word #4
  not word [si+6]
  not word [si+4]       ; word #3
  not word [si+2]       ; word #2
  not word [si]                 ; word #1

  ; increment the 1st word
  inc word [si]

  ; if it did not roll over back to zero, done
  jnz .done_ABS

  ; increment the 2nd word
  inc word [si+2]

  ; if it did not roll over, done; otherwise increment the next word
  jnz .done_ABS
  inc word [si+4]
  jnz .done_ABS

  ; this cannot roll over
  inc word [si+6]

  ; check most significant bit once more
  test [si+7], byte 0x80

  ; if it is not set we were successful, done
  jz .done_ABS

  ; overflow error, it reverted to Negative
  stc

  ; set Carry Flag and return
  ret

.done_ABS:
  ; Success, clear Carry Flag and return
  clc
  ret

As you may have noticed in the example, there is an issue that can occur in the function. Because of how positive and negative numbers are represented as binary values, the maximum negative number cannot be made positive. For 8-bit numbers, the maximum negative value is -128. If you flip all of the bits for -128 (binary 10000000), you get 127 (binary 01111111), the maximum positive value. If you add 1 to that result, it overflows back to the same negative number (-128).

To turn a positive number negative, you can just repeat the process you used to get the absolute value. The example procedure is very similar, except you want to make sure the number is not already negative at the start.

; syntax, NASM for DOS

proc_NEG:
  ; on entry, the SI points to the memory location
  ; for the number to be made negative.
  ; on exit, the Carry Flag is always clear.

  ; check most significant bit of highest byte
  test [si+7], byte 0x80

  ; if it is set, the number is negative
  jnz .done_NEG

  not word [si+6]       ; flip all the bits of word #4
  not word [si+4]       ; word #3
  not word [si+2]       ; word #2
  not word [si]                 ; word #1
  inc word [si]                 ; increment the 1st word

  ; if it did not roll over back to zero, done
  jnz .done_NEG

  ; increment the 2nd word
  inc word [si+2]

  ; if it did not roll over, done; otherwise increment the next word
  jnz .done_NEG
  inc word [si+4]
  jnz .done_NEG

  ; this cannot roll over or revert back to positive
  inc word [si+6]

.done_NEG:
  clc                   ; Success, clear Carry Flag and return
  ret

With all of that shared code between the absolute and negative functions, they should be combined to save some bytes. There are additional benefits when such code is combined. For one, it helps prevent simple typographic errors. It also can reduce testing requirements. Moreover, the source generally becomes easier to read, follow, and understand. Sometimes with a long series of assembly instructions, it is easy to lose track of what is actually happening. But for now, we can move along.

Getting the absolute or negative value of a number was not very difficult. But, those functions will be critically important later on when we start doing math on signed integers.

Now that I've covered the basics of how integer numbers are represented at the bit level and created a couple of basic routines to manipulate them a little, we can get to the fun stuff.

Let's do some math!
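
As a taste of what comes next, here is a minimal sketch of how 64-bit addition works with this memory layout. This is not VMATH's actual routine, just an illustration: the lowest words are added with ADD, and every higher word is added with ADC so the carry ripples upward, much like the INC chain in the routines above.

; syntax, NASM for DOS
proc_ADD64:
  ; on entry, SI and DI point to two 64-bit numbers in the data
  ; segment. The number at [si] is replaced with the sum.
  mov ax, [di]          ; word #1
  add [si], ax          ; ADD sets the Carry Flag on overflow
  mov ax, [di+2]        ; word #2
  adc [si+2], ax        ; ADC adds in the carry from the word below
  mov ax, [di+4]        ; word #3
  adc [si+4], ax
  mov ax, [di+6]        ; word #4
  adc [si+6], ax
  ; on exit, the Carry Flag is set if the unsigned sum overflowed
  ret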


Tips for using the Linux test command

By Seth Kenlon

The [ command, often called a "test," is a command from the GNU Core Utils package, and initiates a conditional statement in Bash. Its function is exactly the same as the test command. When you want to execute a command only when something is either true or false, use the [ or the test command. However, there's a significant difference between [ or test and [[, and there's a technical difference between those commands and your shell's versions of them.

[ vs test commands in Linux

The [ and the test commands, installed by the GNU Core Utils package, perform the same function using a slightly different syntax. (You might find it difficult to search for documentation using the single left-square bracket character, however, so many users find test easier to reference.) Bash and similar shells happen to also have the [ and the test commands built-in, and the built-in versions supersede the ones installed in /usr/bin. In other words, when you use [ or test, you're probably not executing /usr/bin/[ or /usr/bin/test. Instead, you're invoking what's essentially a function of your Bash shell.
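
You can see which versions your shell will actually run with the type built-in. A quick check (the paths may differ on your distribution):

$ type -a test
test is a shell builtin
test is /usr/bin/test
$ type -a [
[ is a shell builtin
[ is /usr/bin/[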

You might wonder why [ or test exist in /usr/bin at all. Some shells, such as tcsh, don't have [ and test built-in, so if you want to use those commands in that shell, you must have them installed as separate binaries.

The bottom line is that as long as you don't get an error when you type a command starting with [ or test, then you've got everything you need. It almost never matters whether your shell or your bin directory is providing the commands.

Testing for a file

It's common to want to know whether a file exists, often so you can confidently proceed with some action, or so you can avoid "clobbering" it with a file of the same name. In an interactive shell session, you can just look to see whether the file exists, but in a shell script, you need the computer to determine that for itself. The -e option tests whether a file exists, but on its own it appears to produce no response either way.

$ touch example
$ test -e example
$ test -e notafile
$

The [ and test commands are essentially switches. They emit a true or false response as an exit status, with no visible output either way. You can put this to use by pairing the commands with logical operators, such as && and ||. The command after the && operator executes when the response is true:

$ touch example
$ test -e example && echo "foo"
foo
$ test -e notafile && echo "foo"
$

The command after the || operator executes when the response is false:

$ touch example
$ test -e example || echo "foo"
$ test -e notafile || echo "foo"
foo
$

If you prefer, you can use square brackets instead of test. In all cases, the results are the same:

$ touch example
$ [ -e example ] && echo "foo"
foo
$ [ -e notafile ] && echo "foo"
$

Testing for file types

Everything in Linux is a file, so you can test for the existence of a directory with the -e option, the same way you test for a regular file. However, there are different kinds of files, and sometimes that matters. You can use [ or test to detect a variety of different file types:

  • -f: regular file (returns false for a directory)

  • -d: directory

  • -b: block device (such as /dev/sda1)

  • -L or -h: symlink

  • -S: socket

There are more, but those tend to be the most common.
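
For instance, -f and -d distinguish a regular file from a directory. A short demonstration, assuming the names used here don't already exist:

$ mkdir exampledir
$ touch examplefile
$ [ -d exampledir ] && echo "directory"
directory
$ [ -f exampledir ] && echo "regular file"
$ [ -f examplefile ] && echo "regular file"
regular file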

Testing for file attributes

You can also look at metadata of a file:

  • -s: a file with a size greater than zero

  • -N: a file that's been modified since it was last read

You can test by ownership:

  • -O: a file owned by the current primary user

  • -G: a file owned by the current primary group

Or you can test by permissions (or file mode):

  • -r: a file with read permission granted

  • -w: a file with write permission granted

  • -x: a file with execute permission granted

  • -k: a file with the sticky bit set
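
Here's how the permission tests behave on a fresh file, assuming the default permissions from touch (readable and writable by you, but not executable):

$ touch example
$ [ -w example ] && echo "writable"
writable
$ [ -x example ] || echo "not executable"
not executable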

Combining tests

You don't always just have to test for a single attribute. The -a option ("and") allows you to string several tests together, with the requirement that all tests return as true:

$ touch zombie apocalypse now
$ test -e zombie -a -e apocalypse -a -e now && echo "no thanks"
no thanks

If any expression fails, then the test returns false:

$ touch zombie apocalypse now
$ test -e plant -a -e apocalypse -a -e now && echo "no thanks"
$

The -o option ("or") requires that one expression is true:

$ touch zombie apocalypse now
$ test -e zombie -o -e plant -o -e apocalypse && echo "no thanks"
no thanks

Integer tests

You can also test integers. That's not necessarily directly useful (you probably inherently know that 0 is less than 1, for instance) but it's invaluable when you're using variables in a script.

The operators are fairly intuitive once you understand the schema:

  • -eq: equal to

  • -ne: not equal

  • -ge: greater than or equal to

  • -gt: greater than

  • -le: less than or equal to

  • -lt: less than

Here's a simple example:

$ nil=0
$ foo=1
$ test $foo -eq $nil || echo "Those are not equal."
Those are not equal.
$ test $foo -eq 1 && echo "Those are equal."

Of course, you can combine tests.

$ touch example
$ test $foo -ne $nil -a -e example -o -e notafile && echo "yes"
yes

Testing testing

The [ and test commands are vital conditional statements when scripting. These are easy and common ways to control the flow of your code. There are yet more tests available than what I've covered in this article, so whether you use Bash, tcsh, ksh, or some other shell entirely, take a look at the man page to get the full spectrum of what these commands offer.


Transfer files and folders from Windows to Linux with PSCP

By Paul Laubscher

Are you looking for a way to quickly transfer files from your Windows computer to your Linux computer and back again? The open source PSCP utility makes it easy to transfer files and folders between the two.

Setting your PATH in Windows

Knowing how to set your command path in Windows makes it easier to use a handy utility like PSCP. If you're unfamiliar with that process, read how to set a PATH on Windows.

Using PSCP

PSCP (PuTTY Secure Copy Protocol) is a command-line tool for transferring files and folders from a Windows computer to a Linux computer.

  1. Download pscp.exe from its website.

  2. Move pscp.exe to a folder in your PATH (for example, Desktop\App if you followed the PATH tutorial here on Opensource.com). If you haven't set a PATH variable for yourself, you can alternately move pscp.exe to the folder holding the files you're going to transfer.

  3. Open PowerShell on your Windows computer using the search bar in the Windows taskbar (type powershell into the search bar).

  4. Type pscp --version to confirm that your computer can find the command.

IP address

Before you can make the transfer, you must know the IP address or fully qualified domain name of the destination computer. Assuming it's a computer on the same network, and that you're not running a DNS server to resolve computer names, you can find the destination IP address using the ip command on the Linux machine:

[linux]$ ip addr show | grep 'inet '
inet 127.0.0.1/8 scope host lo
inet 192.168.1.23/24 brd 192.168.1.255 scope global noprefixroute eth0

In all cases, 127.0.0.1 is a loopback address that the computer uses only to talk to itself, so in this example the correct address is 192.168.1.23. On your system, the IP address is likely to be different. If you're not sure which is which, you can try each one in succession until you get the right one (and then write it down somewhere!)

Alternately, you can look in the settings of your router, which lists all addresses assigned over DHCP.

Firewalls and servers

The pscp command uses the SSH protocol, so your Linux computer must be running an SSH server (such as OpenSSH), and its firewall must allow SSH traffic.

If you're not sure whether your Linux machine is running an SSH server, run this command on the Linux machine to enable and start it:

[linux]$ sudo systemctl enable --now sshd

To ensure your firewall allows SSH traffic, run this command:

[linux]$ sudo firewall-cmd --add-service ssh --permanent

For more information on firewalls on Linux, read Make Linux stronger with firewalls.

Transfer the file

In this example, I have a file called pscp-test.txt that I want to transfer from C:\Users\paul\Documents on my Windows computer to the home directory /home/paul on my destination Linux computer.

Now that you have the pscp command and the destination address, you're ready to transfer the test file pscp-test.txt. Open PowerShell and use the cd command to change to the Documents folder, where the sample file is located:

PS> cd $env:USERPROFILE\Documents\

Now execute the transfer:
 

PS> pscp pscp-test.txt paul@192.168.1.23:/home/paul
Password:
End of keyboard-interactive prompts from server
pscp-test.txt | 0 kb | 0.0 kB/s | ETA: 00:00:00 | 100%

Here's the syntax, word for word:

  • pscp: The command used to transfer the file.

  • pscp-test.txt is the name of the file you want to transfer from Windows.

  • paul@192.168.1.23 is my username on the Linux computer, and the IP address of the Linux computer. You must replace this with your own user and destination information. Notice that pscp requires a destination path on the target computer, and :/home/paul at the end of the IP address specifies that I want the file copied to my home folder.

After you authenticate to the Linux computer, the pscp-test.txt file is transferred to the Linux computer.

[ Related read Share files between Linux and Windows computers ]

Verifying the transfer

On your Linux computer, open a terminal and use the ls command to verify that the file pscp-test.txt appears in your home directory.
 

[linux]$ ls
Documents
Downloads
Music
Pictures
pscp-test.txt

Copying a file off of a Linux system

You aren't limited to just copying files to your Linux system. With pscp, you can also copy a file from Linux onto Windows. The syntax is the same, only in reverse:

PS> pscp paul@192.168.1.23:/home/paul/pscp-test.txt $env:USERPROFILE\Documents\pscp-win.txt

Here's the syntax:

  • pscp: The command used to transfer the file.

  • paul@192.168.1.23:/home/paul/pscp-test.txt is my username on the Linux computer, the IP address of the Linux computer, and the path to the file I want to copy.

  • $env:USERPROFILE\Documents is the location on my Windows computer where I want to save the file. Notice that in copying the file back to my Windows computer, I can give it a new name, such as pscp-win.txt, to differentiate it from the original. You don't have to rename the file, of course, but for this demonstration it's a useful shortcut.

Open your file manager to verify that the pscp-win.txt file was copied to the Windows C:\Users\paul\Documents path from the Linux computer.
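
Although this demonstration copies single files, pscp can also transfer an entire folder with its -r (recursive) option. A sketch, reusing the same example address and a hypothetical reports folder:

PS> pscp -r $env:USERPROFILE\Documents\reports paul@192.168.1.23:/home/paul/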


Remote copying

With the power of the open source pscp command, you have access to any computer in your house, servers you have accounts on, and even mobile and edge devices.


How innovative Open Organization charts work in practice

By Ron McFarland

In the first part of this series, I illustrated what an Open Organization chart looks like based on the book Team of Teams, by Stanley McChrystal. In this second and final part, I explore concerns about the information flow using this chart and give some examples of how it might work and (possibly unknowingly) has worked in the past.

Building team-to-team adaptability, collaboration, and transparency

Sometimes activities fit exactly with a single specialty and group responsibility. Other times these responsibilities overlap, with one party having primary responsibility and another playing a more minor but important, supportive role. Both must know each other's actions to be successful.

[Image: Taken from Team of Teams, page 129, modified by Ron McFarland]

Efficient communication does not go up a vertical line to top management. Instead, many connections crisscross through related teams with a lot of confirming overlap. These communication paths exist only between groups that impact each other. This redundant overlap may reduce efficiency, but it results in improved adaptability: the organization becomes adept at responding instantly and creatively to unexpected events.

First, create oneness within each team.

Second, a team should establish communication with other teams impacted by its performance. Notice the light blue members sitting in another team. They are learning that team's environment on the one hand and representing their own team on the other. They are building a bridge between teams.

Finally, all teams should have cohesiveness within the entire organization. If there is great transparency and collaboration between them, far more activities will be conducted concurrently instead of sequentially, speeding up execution. Also, real-time transparency will allow rapid adaptability in any situation. As teams increase, their connections should increase exponentially to maintain rapid collaboration, adaptability, and transparency.

Each one of those teams has a direct, specific purpose. The link to other teams means that their performance and results will impact other groups. Knowing another team's working environment and how well they are doing helps them complete their assignments better.

This horizontal communication link is faster than going up a vertical organization chart. These interactions are vital in an age where speed and adaptability are critical.

According to McChrystal, you can't make one extra large team to create overall organizational adaptability. You must create high-functioning smaller teams where everyone knows what others are doing. This approach can be easily implemented in a group of up to about 25 members. Up to 50 members may be barely doable, but it's doubtful for 100 members and impossible in an organization with thousands of members. Therefore, you must create many low-head-count teams.

Where is the leadership?

Leadership is moving all the time, depending on the challenge and goals of each team. I wrote about this in my article, "What is adaptive leadership?" In any situation, the team considers who they will follow. More than likely, a person on the frontline who is the most experienced and qualified to guide the other team members will become the leader that people will follow. I think this is true in the teams that McChrystal is talking about.

The environment of accelerating speed, swelling complexity, and interdependence has forced organizations to be more adaptable, more collaborative, and more transparent than in the past. Therefore, a frontline respected team member must be a hands-on leader.

Team representation on other teams

Knowing how your team impacts other teams and giving other teams the ability to sense your team's situation will determine success or failure when agile adaptability is required.  This is a critical point to improve collaboration and coordination between all team's activities.

[Image: Taken from Team of Teams, page 129, modified by Ron McFarland]

You don't need to have everyone know every member of the other teams in the organization. You only need one quality team member (a representative) to know all the other related teams. That one representative can present and speak for other team members on how their work impacts other teams. In the organization chart above, imagine temporarily having one of the light blue team members on another team.

This representative should not be adopted as a full-functioning new member, though. They are not leaving one team for another but just serving as a supportive observer.

When selecting a representative from your team to observe and work with another team, choose someone with effective communication skills to speak for your team. Remember, the other team members may be very different from your own. An example of this is an analyst and a frontline soldier. A representative must be able to be accepted by them, which might require doing manual support tasks for a time. McChrystal calls those that take these assignments "Liaison Officers (LNOs)." He calls the projects "embedding and liaison programs." An LNO doesn't need to have any of the skills needed in the other group but must have the ability to understand their situation.  Also, when an LNO returns to your team, that person should report on how your team's work impacts the other team. McChrystal suggests this can solve the problem of "out of sight, out of mind" by making the situation of those other teams come alive.

Furthermore, it might be helpful to create common areas where members from different teams can casually gather and share information. You may also lay out the physical facility so members often walk by other groups and get a glimpse at what they're up to. Possibly even a casual discussion could develop. It could maximize the cross-pollination of ideas. Teams could even have a visual activity board, so members of other groups could see at a glance what one another is up to.

To encourage collaboration, McChrystal established an "Operation and intelligence briefing" that happened six days a week (and was never canceled). He invested in mobile teleconferencing equipment and a briefing room. At least one member of each team had to participate in each meeting. The more, the better.

Members had to sit with other groups to understand better what was happening in different teams and watch them work. Getting briefings is good but not as useful as observing the skills of another unit in action.

This idea reminds me of my days just out of graduate school in Japan, entering my first large Japanese company. It was company policy for university graduates to work in two or three related departments before settling into a specialty in one department. This approach gave me a feeling for other departments' activities. To get things done, the kacho (section chief) was the real achiever, the critical team representative. On the organization chart, the kacho communicated with other kachos at their horizontal level. Communicating with superiors was often a waste of time.

From small teams to large organizations

To scale adaptability, collaboration, transparency, and community purpose across an organization, you have to link and connect teams with the other teams they impact. One team can anticipate what others are doing and better prepare themselves if assignments come to their team. By knowing the overall situation, one can anticipate concerns before they become major problems. McChrystal presents the example of a massive organization, NASA. NASA employees initially complained about the added level of collaboration and information-sharing transparency, as it was extra work. But, after seeing the value of their efforts, the complaints stopped.

In addition to the complaints about extra work, competition issues might also arise. There could be environments where teams compete against each other. There are times when competition is very productive and motivating. In other cases, it's disruptive and counter-productive. I wrote about this in my articles "What determines how collaborative you'll be?" and "To compete or to collaborate? 4 criteria for making the call." In most organizations working in extremely volatile, interdependent situations with many unknowns, team-to-team cooperation is far more productive. Every team member has to know where they stand and how they impact all branches of the organization. You must convince members that it is their responsibility to know and support other teams.

Considering overall community purpose, these representatives can share where team goals reside within the overall organizational goals. This approach broadens overall community purpose and can be more helpful to other teams. A team could be extremely productive but counter-productive to other groups without continual communication. With today's technology, this real-time communication is effortlessly possible.

This is not to say that each person and team will become a generalist in many skills. They maintain their expertise but know what others are doing within their specialties.

Team-to-team, stakeholder-to-stakeholder

Teams not only communicate and collaborate with each other within their organization, but they also communicate and collaborate well with outside stakeholders that are dependent on their activities. I gave a presentation on stakeholders, which stresses a company's surroundings.

Shared understanding, empowered decision-making, and rapid execution create adaptability. That adaptability can successfully address unpredictability. Outside stakeholders impacted by teams must play a role in that success. They are part of the community.

Automotive industry adaptation example

It is not easy for an organization to stop doing what was successful in the past and start doing something completely new, but sometimes adapting is required. Automotive dealers with service departments and parts departments have to carry auto parts. When market demand falls, these departments must stop stocking low-demand automotive parts. Also, the parts maker that supplies the dealer has to stop producing and stocking them. To be adaptive, service departments can help keep older vehicles in operation by developing parts modification systems that can convert a specialty item into a universal component to be installed in a wide variety of older vehicle models. It is more expensive than volume production but far cheaper than stocking products with no demand. Just like modifying parts, management sometimes has to be modified with a changing working environment.

Joint production in China, a personal case study

I worked on a project to rent a building in China to an American company so that it could manufacture in China and export worldwide. It was such a small company that it couldn't just start manufacturing independently. Also, it had no expertise in working with the Chinese government. So we created a division in my Japanese company to handle the operation.

Consider my Japanese company as one team and that American company as another team. Our lawyer told me that to make this project work, responsibility should be very carefully negotiated and decided on. I created a Major responsibility/minor responsibility table. Once completed, I had both presidents sign it as part of the agreement. As these parties had to act like one company, I did not want one party to be totally responsible for an activity and the other party not responsible at all.

To make things more complicated, there were Japanese employees in Japan, with Chinese employees in China working for the Japanese company, and American employees in the USA, with Chinese employees in China working indirectly for the American company. I had to bring them all together to work jointly.

[Image: Taken from Team of Teams, page 129, modified by Ron McFarland]

Study the organization chart above and consider the following points:

  • The American company in the USA handled all marketing and sales.
  • The staff in China did all the Chinese personnel activities, including hiring and training.
  • The American company handled all of the product development and production process development.
  • The staff in China made all equipment and production raw materials purchases, but it was strictly on behalf of the American company.
  • The Chinese operation supervised all production in China.
  • The Japanese company controlled all the facilities in China.
  • The staff in Japan and the USA coordinated all of the billing and financing.
  • The leaders were the heads of each team in each process.
  • The financing was initially handled by the Chinese company and then invoiced to the American company.

This collaboration was a total success for all parties. The Japanese company received money for space rental, a commission on employee dispatch, a commission on material purchases, and a general administration fee for export processing and other activities. The American company was profitable on their products exported worldwide. The Chinese company maintained profitable operations. That project went on for eight years, and the Chinese government didn't even know the American company existed, as all their activities fell within a division of my Japanese company. After that, the American company confidently set up its own independent production facility in China.

Team of Teams: Stronger together than each of its parts

A team of teams organization leads to a system of trust that serves the common good. This structure is best suited to meet this rapidly changing world by being more adaptive. There are requirements, though:

  1. Radical sharing of information (transparency).
  2. Extreme decentralization of decision-making authority and execution.
  3. Mutual bonds of trust and communication with other groups and teams.
  4. An interconnected communication system that can serve the entire community and create a shared consciousness. Knowing other people in other teams (functions and responsibilities) and establishing multiple paths to get things done makes teams more adaptive in crises. A single best path is too fragile.
  5. A community built on associations that both contribute to and benefit from their efforts directly or indirectly.
Wrap up

I've presented you with an image of a new organizational chart. Unofficially, to get things done, horizontal communication has been going on for decades. The difference is that updates happen within minutes, not just at weekly or monthly meetings.


How to display commits created on a specific day with the git log command

By Agil Antony

The git log command offers many opportunities to learn more about the commits made by contributors. One way you might consume such information is by date. To view commits in a Git repository created on a specific date or range of dates, use the git log command with the options --since or --until, or both.


First, check out the branch you want to inspect (for example, main):

$ git checkout main

Next, display the commits for the current date (today):

$ git log --oneline --since="yesterday"

Display commits for the current date by a specific author only (for example, Agil):

$ git log --oneline --since="yesterday" --author="Agil"

You can also display results for a range of dates. Display commits between any two dates (for example, 22 April 2022 and 24 April 2022):

$ git log --oneline --since="2022-04-22" --until="2022-04-24"

In this example, the output displays all the commits between 22 April 2022 and 24 April 2022, which excludes the commits done on 22 April 2022. If you want to include the commits done on 22 April 2022, replace 2022-04-22 with 2022-04-21.
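
If you want the boundaries to be unambiguous, you can also give Git an explicit time along with each date (Git accepts several date formats). For example, to cover 22 April through 24 April completely:

$ git log --oneline --since="2022-04-22 00:00" --until="2022-04-24 23:59"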

Run the following command to display commits between any two dates by a specific author only (for example, Agil):

$ git log --oneline --since="2022-04-22" \
--until="2022-04-24" --author="Agil"

Reporting

Git has many advantages, and one of them is the way it enables you to gather data about your project. The git log command is an important reporting tool and yet another reason to use Git!


A PWA is the web browser

By Alex Borsody

A progressive web app (PWA) is a web application that uses modern web technologies to deliver a user experience equal to any mobile app. An active open source community, in conjunction with tech leaders like Google and Microsoft, pushes the PWA agenda forward in an effort to "bridge the app gap."

Basically, a PWA runs your app in a web browser. Because there's essentially a two-party system of the Play and App stores, the focus is on two browsers: Google Chrome and Apple Safari (built on top of the open source Chromium and WebKit, respectively).

I won't be covering creating desktop apps. For more information on that topic, look into Electron.

PWAs are built the same way as any website or web app. They use the latest mobile technologies and implement UX best practices. PWAs can also hook the browser in with native code to improve the experience.

If you type "What is a PWA" in your favorite search engine, you'll probably get a stock response similar to "PWAs are designed to be fast, reliable, and engaging, with the ability to work offline and be installed on a device's home screen." While this is partly true, it's just the tip of the iceberg for what a PWA has the potential to be and what it's evolving into, even as I write this article.

What is not a PWA

The following are cross-platform app frameworks allowing you to develop from a single codebase. They do not use the browser as their platform.

  • Flutter
  • React Native

Flutter uses a language called Dart, which compiles to iOS, Android, and web packages. React Native does the same but compiles JavaScript on the backend.

What is a PWA by definition?

A PWA, by its original definition, must meet these three requirements:

  • Service worker: Provides offline functionality.
  • Web manifest: JSON markup to configure home screen and app icons.
  • Security: HTTPS is enforced, because a service worker runs in the background.
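
As a rough sketch of the first two requirements, the web manifest is a small JSON file (linked from your page with <link rel="manifest" href="/manifest.json">), and the service worker is registered from your page's JavaScript. The file names and values below are only placeholders:

{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}

// register the (placeholder) service worker script
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}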

These components allow you to pass the Google Lighthouse PWA audit and get the green checkmark on your score.


Once you satisfy these requirements, Chrome's "add to home screen" prompt is also automatically enabled.

PWA Builder (a free service provided by Microsoft) has an excellent UI for building a PWA and visualizing base requirements. See the following example based on developers.google.com. You can demo this functionality here provided by the PWA module I discussed in my previous article.


The base requirements of a PWA allow offline behavior through the service worker, and the manifest.json file allows "add to home screen" behavior on Android, where your website gets added as an icon to the home screen and opens with no-browser Chrome (in fullscreen) with an app splash page. These are the minimum requirements for a PWA and, aside from providing a performance increase due to the offline caching, mainly give the illusion that the website is an app. At its core, it's a psychological shift: the end user stops thinking of the browser as merely showing "websites" and instead sees it for what it actually is… an app platform. Google seems to have made this a priority to pave the way for developing the endless number of features, functionality, and UX/UI enhancements that actually provide an enhanced "app-like experience."

A PWA is really a collection of browser technologies and web development techniques and technologies that make a website more "app-like." I have broken these down into the following categories.

Enhanced app-like experience

  • Improved UX/UI experience on a mobile device (HTML/CSS/JavaScript)
  • Native device access and enhanced web capabilities
  • Speed and performance
What a PWA can be today beyond the definition

Here are more details on the three experience descriptions above.

UX/UI improvements

UX/UI and visual problem-solving are critical to making your website feel like an app. This often manifests as attention to details such as animations, input/font sizes, scrolling issues, or other CSS bugs. It's important that there is a strong frontend development team so they can create this UX. Within the category of design and UX are the enhancements we can implement with the building blocks of a web document (HTML/CSS/JS). Two examples of this are:

  • Hotwire Turbo: An open source framework using HTML over the wire to reload only the areas of your page that change using AJAX or WebSockets. This offers the performance improvements that SPAs strive for using only limited JavaScript. This approach is perfect for your monolithic application or template-rendering system; no need to invest in the added complexity of decoupling your front and back end.
  • Mobile-specific SPA frameworks: There are several decoupled frameworks out there that can give your website an app-like user experience. Onsen UI and Framework 7 are two excellent options that help you create a fast, responsive user interface for your website. However, you do not need to rely on these frameworks. As discussed above, a good frontend team can build the UI you strive for by implementing the latest app-like mobile design techniques.

This slide goes into more detail about staying current with HTML/CSS/JS in your PWA.


Web capabilities

The Chromium team is constantly improving the browser experience. You can track this progress in Project Fugu, the overarching web capabilities project. WebKit also continually strives to improve its browser experience and capabilities.

The Swift API can also interact with the WKWebView to enhance the native experience.

Google has a service called Bubblewrap, which works with Trusted Web Activity (TWA). All this does is wrap your PWA-enabled website in a native APK bundle so you can submit it to the app store. This is how the PWA builder link mentioned above works for Android. You can learn all about WKWebView and TWA in my previous article.

Speed and performance

There are countless ways to improve your app's performance. Check out the Google PageSpeed tools to start.

Benefits of using a PWA include the following:
  • Increased Lighthouse score and SEO.
  • A single codebase.
  • Frictionless testing.
  • Instant feedback loop for development cycles.
  • Use of managed PaaS web deployment workflows.
  • Web technologies are a skill set for a wide array of developers.
  • The only cross-platform development solution that delivers a full-fledged web experience.
  • Unlimited options to customize a design without relying on a cross-platform framework's limited UI components.
  • Reach users with limited (or no) internet connection.

There are some drawbacks/caveats to using a PWA, including:

  • Limited functionality: There is still an "app gap" with PWAs compared to native device access. However, browsers have been making great progress toward closing this. Learn more about Project Fugu's take on bridging the app gap from Thomas Steiner, and visit What web can do to see your browser's capabilities. When choosing your technology, there is a good chance your PWA project will be in the majority of apps that do not experience restrictions regarding capability/functionality.
  • Lack of standardization: Thomas Steiner's interview above discusses a "PWA standard," which is currently lacking. In my opinion, it is the reason for much of the confusion around the topic and developers' difficulty getting past that first "aha moment." This confusion has led to slower momentum in technology than there should be. Also, because of this lack of clarity, marketing or management may not even know to ask for a PWA because they don't understand what it is.
  • iOS App Store: App stores don't currently list PWAs, so they're harder to find than native apps. There are ways to do this. However, the key is to make your web app as good or a better experience than native. Do it right, and the Apple gods will smile upon you because the most important thing in reviews seems to be that you deliver a good mobile experience. Ionic, a framework utilizing WKWebView in native iOS apps before PWA was even a term, has some interesting insight in their forums. If you know what you are doing, this won't be a problem. You can see the "get your web app in the app stores" section of my previous Opensource.com article for more info.
  • Potential security issues in certain cases: The browser uses cookies as authentication. A tried and true browser method to maintain state since its inception, this may not fit your project's needs. The browser has excellent password management and is constantly evolving and implementing other authentication methods, such as Webauthn. The use of associated domains provides another layer of security.

I believe that compared to the alternatives, "the web is winning," and future progress will minimize these drawbacks as the web offers new capabilities. I don't think native development will disappear, but there will be more seamless integrations between WebView and native code.

Wrap up

While PWAs are still in their early stages of development, they have the potential to revolutionize the way we use the web. Every day I see a new website pushing the limits of what a PWA can be. Whether the management knows they are building a PWA or not, I often come across web apps and dev teams that surprise me with how they expand the use of web technologies or pass on a native app in lieu of a well-optimized mobile website.


Use open source commands in PowerShell

By Alan Smithee

When you launch an application on an operating system, there are certain code libraries and utility applications that your OS needs to use for that app to run. Your OS knows how to find these libraries and utilities because it has a system path, a map to common shared data that lots of apps need. Every OS has this, but users aren’t usually aware of it because they rarely need to care about it. However, when you start coding or using special network utilities or commands, you might care about your own PATH variable.


The PATH variable makes it so that you can save commands to a consistent location, and use them from anywhere on your system using the command prompt or the more powerful (and open source) PowerShell.

For instance, say you want to install the open source application pscp.exe, a command-line secure copy tool that's part of the famous PuTTY SSH client for Windows. You can download it to your hard drive, but how does your command line know that it exists? Well, at first, it doesn’t:
 

PS> pscp
pscp: The term 'pscp' is not recognized as the name of a cmdlet, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

If you’re using an open source command line, such as PowerShell or Cmder, you get a useful error hinting that this might be a problem with your path (or the lack thereof). Here’s how to solve that problem.

Setting a PATH
  1. First, create a folder called App on your Desktop.

  2. Next, right-click on the Windows menu in the bottom left corner of your screen, and select System.

  3. In the System window that appears, click the link to Advanced system settings on the left of the window.

  4. In the System properties window that appears, click the Environment variables button at the bottom of the window.

  5. In the Environment variables window, click the New button under the User variables panel.

  6. In the dialog box that appears, enter PATH for the Variable name field, and %USERPROFILE%\Desktop\App for the Variable value field. Click the OK button to save your changes.

Place commands and applications you want to have access to from a command prompt in Desktop\App, and PowerShell, Cmder, and even Cmd will find them:
 

PS> pscp --version
pscp: Release 0.XY
Build platform: 64-bit x86 Windows
PS>

Automatic PATH settings

Many applications get automatically added to the system path during installation. However, not all of them do, either because you missed a check box during the install process, or because the application developer expects you to add it yourself. When automatic paths fail, you now know how to forge your own path.
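
If you prefer to inspect or adjust the path without the dialog boxes, PowerShell can do it directly. Here's a sketch for the folder used above (this form only changes the current session; use the Environment variables dialog for a permanent change):

PS> # list each entry on the current session's path
PS> $env:Path -split ';'
PS> # append the App folder for this session only
PS> $env:Path += ";$env:USERPROFILE\Desktop\App"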


Observability-driven development with OpenTelemetry

By Ken Hamric

Observability-driven development (ODD) is being recognized as "necessary" for complex, microservice-based architectures. Charity Majors coined the term and has written about it in several articles, including Observability: A Manifesto. She explains the term in this quote:

Do you bake observability right into your code as you're writing it? The best engineers do a form of "observability-driven-development" — they understand their software as they write it, include instrumentation when they ship it, then check it regularly to make sure it looks as expected. You can't just tack this on after the fact, "when it's done".

OpenTelemetry provides the plumbing

The OpenTelemetry project has the industry backing to be the 'plumbing' for enabling observability across distributed applications. It is second only to Kubernetes in the size of its contributor community among Cloud Native Computing Foundation (CNCF) projects, and it was formed when the OpenTracing and OpenCensus projects merged in 2019. Since then, almost all of the major players in the industry have announced their support for OpenTelemetry.

OpenTelemetry covers three observability signals—logs, metrics, and distributed traces. It standardizes the approach to instrumenting your code, collecting the data, and exporting it to a backend system where the analyses can occur and the information can be stored. By standardizing the 'plumbing' to gather these signals, you can be assured that you don't have to change the instrumentation embedded in your code when switching from one vendor to another, or when deciding to take the analysis and storage in-house with an open source solution such as OpenSearch. Vendors fully support OpenTelemetry because it removes the onerous task of enabling instrumentation across every programming language, every tool, every database, every message bus, and across each version of these languages. An open source approach with OpenTelemetry benefits all!
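
To make "instrumenting your code" concrete, here is a minimal sketch in Python using the OpenTelemetry SDK with a console exporter. The service, span, and attribute names are placeholders, and in production you would export to a collector or backend instead of stdout:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# print finished spans to the console; a real deployment would use an OTLP exporter
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example.checkout")  # placeholder instrumentation name

with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("example.order_id", 1234)  # placeholder attribute
    # business logic goes here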

Bridging the gap with Tracetest

So you want to do ODD, and you have a standard of how to instrument the code with OpenTelemetry. Now you just need a tool to bridge the gap and help you develop and test your distributed application with OpenTelemetry. This is why my team is building Tracetest, an open source tool to enable the development and testing of your distributed microservice application. It's agnostic to the development language used or the backend OpenTelemetry data source that is chosen.

For years, developers have utilized tools such as Postman, ReadyAPI, or Insomnia to trigger their code, view the response, and create tests against the response. Tracetest extends this old concept to support the modern, observability-driven development needs of teams. Traces are front and center in the tool. Tracetest empowers you to trigger your code to execute, view both the response from that code and the OpenTelemetry trace, and to build tests based on both the response and the data contained in the trace.


Tracetest: Trigger, trace, and test

How does Tracetest work? First, you define a triggering transaction. This can be a REST or gRPC call. The tool executes this trigger and shows the developer the full response returned. This enables an interactive process of altering the underlying code and executing the trigger to check the response. Second, Tracetest integrates with your existing OpenTelemetry infrastructure to pull in the trace generated by the execution of the trigger, and shows you the full details of the trace. Spans, attributes, and timing are all visible. The developer can adjust their code and add manual instrumentation, re-execute the trigger, and see the results of their changes to the trace directly in the tool. Lastly, Tracetest allows you to build tests based on both the response of the transaction and the trace data in a technique known as trace-based testing.

[ Related read: What you need to know about automation testing in CI/CD ]

What is trace-based testing?

Trace-based testing is a new approach to an old problem. How do you enable integration tests to be written against complex systems? Typically, the old approach involved adding lots of complexity into your test so it had visibility into what was occurring in the system. The test would need a trigger, but it would also need to do extra work to access information contained throughout the system. It would need a database connection and authentication information, ability to monitor the message bus, and even additional instrumentation added to the code to enable the test. In contrast, Trace-based testing removes all the complexity. It can do this because of one simple fact—you have already fully instrumented your code with OpenTelemetry. By leveraging the data contained in the traces produced by the application under the test, Tracetest can make assertions against both the response data and the trace data. Examples of questions that can be asked include:

  • Did the response to the gRPC call have a 0 status code and was the response message correct?

  • Did both downstream microservices pull the message off the message queue?

  • When an external system is called as part of the process, does it return a status code of 200?

  • Did all my database queries execute in less than 250ms?

Image by:

(Ken Hamric, CC BY-SA 4.0)

By combining the ability to exercise your code, view the response and trace returned, and then build tests based on both sets of data, Tracetest provides a tool to enable you to do observability-driven development with OpenTelemetry.

Try Tracetest

If you're ready to get started, download Tracetest and try it out. It's open source, so you can contribute to the code and help shape the future of trace-based testing with Tracetest!

Tracetest is an open source tool to enable the development and testing of your distributed microservice application.

Image by:

Image by Mapbox Uncharted ERG, CC-BY 3.0 US

Microservices

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

4 open source editors I use for my writing

Thu, 10/20/2022 - 15:00
4 open source editors I use for my writing Alan Formy-Duval Thu, 10/20/2022 - 03:00

I've done a lot of writing throughout my career, mostly as an IT consultant creating product documentation as client deliverables. These documents generally provide instructions on installing various operating systems and software products.

Since 2018, I've contributed to opensource.com with articles about open source software. Of course, I use open source editors to write my pieces. Here are the four open source editors that I have used.

1. Vi

Vi, also referred to as Vim, is the first open source editor that I learned. It was the editor taught in my computer science classes and the one I used for all of my C programming. I have used it as my de facto command line editor since the mid-1990s. There are so many iterations of this tool that I could write a whole series on them. Suffice it to say that I stick to its basic command line form with minimal customization for my daily use.

2. LibreOffice Writer

Writer is part of the open source LibreOffice office suite. It is a full-featured word processor maintained by The Document Foundation. It supports industry-standard formats such as the Open Document Format (ODF), Open XML, and the MS Office DOC and DOCX formats. Learn more about Writer on its official site.

3. Ghostwriter

Ghostwriter is a text editor for Markdown. It has a nice real-time preview and a syntax guide (cheat sheet) feature. Visit the official website to discover more.

4. Gedit

Gedit is the basic graphical editor found in many Linux distributions and is described as "a small and lightweight text editor for the GNOME desktop." I have begun using it lately to create articles in the Asciidoc format. The benefit of using Asciidoc is that the syntax is easily manageable and importable into web rendering systems such as Drupal. See the Gedit Wiki for many tips and tricks.

Editing text

An extensive list of editing software is available in the open source world. This list will likely grow as I continue writing. The primary goal for me is simplicity in formatting. I want my articles to be easy to import, convert, and publish in a web-focused platform.

Your writing style, feature needs, and target audience will guide you in determining your preferred tools.

In celebration of the National Council of Teachers of English (NCTE) National Day on Writing 2022, I thought I'd share a few of my favorite open source writing tools.

Image by:

Original photo by mshipp. Modified by Rikki Endsley. CC BY-SA 2.0.

Tools LibreOffice Text editors Vim

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Test your IoT platform with open source tools

Thu, 10/20/2022 - 15:00
Test your IoT platform with open source tools Chongyuan Yin Thu, 10/20/2022 - 03:00

The Internet of Things (IoT) and edge computing industries are developing quickly, and with them grow the scale of endpoints and the complexity of business logic. The more the IoT ecosystem grows, the more important it becomes to verify the availability and reliability of your platform. If you're delivering services, testing your IoT system can help you find performance bottlenecks and plan ahead for scalability.

IoT can consist of any number of different devices using diverse protocols, strung together with complex integration architecture. This can make it challenging to design effective and meaningful tests for it. In this article, I use EMQX as an example to demonstrate how to introduce performance testing tools that verify and test platform-related quality indicators.

EMQX

EMQX is a scalable messaging (MQTT) broker used to connect IoT devices. It's open source, but because it's a broker you must have a working node to manage all the messaging traffic. You can accept its business source license (BSL) and gain 10 licenses to use the official EMQX cloud installation. Alternately, you can install and run EMQX on your own server.

Introduction to JMeter

JMeter is open source software from the Apache Software Foundation. It implements performance tests by simulating concurrent loads, a common performance testing method in the open source community. It has the following advantages:

  • Built-in support for multiple protocols, including TCP, HTTP, HTTPS, and more.
  • Provides a flexible plug-in extension mechanism and supports third-party extensions of other protocols.
  • Great community support.
Install JMeter

JMeter is written in Java, so you must install Java if it's not already installed. For Linux, macOS, and Windows, you can use Adoptium.net. On Linux, you may alternatively use SDKMan.

After installing Java, download JMeter, decompress it, and enter the bin subdirectory of the archive directory. Depending on your operating system, run jmeter (Linux and macOS) or jmeter.bat (Windows).

$ wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-X.Y.tgz
$ tar xvf apache-jmeter*tgz
$ cd apache-jmeter-X.Y/bin
$ ./jmeter

JMeter's script editing interface is presented to you:

Image by:

(Chongyuan Yin, CC BY-SA 4.0)

Your first JMeter test

Here's how to use JMeter to build and run a simple HTTP test case.

  1. JMeter uses a single thread to simulate a user. A Thread Group refers to a virtual user group, and simulates access to the system being tested.

    To add a virtual user group (Thread Group), right-click on Test plan > Add > Threads (Users) > Thread Group.

    Image by:

    (Chongyuan Yin, CC BY-SA 4.0)

    Number of Threads in Thread Properties can be used to configure the number of concurrent users in a virtual user group. The higher the value, the greater the amount of concurrency. Use Loop Count to configure how many tests each virtual user performs.

    Image by:

    (Chongyuan Yin, CC BY-SA 4.0)

  2. JMeter includes several example tests. Add the HTTP Request test with a right-click on Thread Group > Add > Sampler > HTTP Request.

    Image by:

    (Chongyuan Yin, CC BY-SA 4.0)

    In the sample test script, use the default HTTP request settings to initiate an HTTP request to a website.

    Image by:

    (Chongyuan Yin, CC BY-SA 4.0)

  3. A result listener is not strictly necessary for the performance test, but it lets you see the test result. This can help facilitate debugging in the process of writing scripts. In this sample script, use View Result Tree to help view the response information of the request.

    To add a result listener, right-click on Thread group > Add > Listener > View Results Tree.

    Image by:

    (Chongyuan Yin, CC BY-SA 4.0)

  4. Time to run the test. After saving your test script, click the Start button in the top toolbar to run it. Because you're testing against a public website, use a low number (under 10) of threads and a low loop count. If you spam the site, you could find yourself blocked in the future! If you'd rather run a saved plan without the GUI, see the command-line sketch after this list.

    Image by:

    (Chongyuan Yin, CC BY-SA 4.0)
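JMeter can also run a saved test plan headlessly from the command line, which is the usual approach for real load tests. Here's a minimal sketch, assuming your plan was saved as test_plan.jmx (the file and directory names are placeholders):

$ ./jmeter -n -t test_plan.jmx -l results.jtl
$ ./jmeter -g results.jtl -o report

The first command runs the plan in non-GUI mode (-n) and logs the raw results to results.jtl; the second generates an HTML dashboard report from those results into the report directory.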

Test your IoT platform

You've completed a simple HTTP test script. You can draw inferences from this case and try other protocols. In the next article, I'll introduce other test components of JMeter in more detail, which you can use together to build complex test scenarios. For now, explore JMeter to see what you can test.

This demo of JMeter using EMQX shows how to introduce performance test tools to verify and test platform-related quality indicators.

Image by:

opensource.com

Edge computing Internet of Things (IoT)

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Our open source startup journey

Wed, 10/19/2022 - 15:00
Our open source startup journey Navaneeth PK Wed, 10/19/2022 - 03:00

ToolJet is an open source, low-code framework for rapidly building and deploying internal tools. Our codebase is 100% JavaScript and TypeScript.

ToolJet was started by a lone developer in April 2021. The public beta launched in June 2021 and was an instant hit. With this traction, ToolJet raised funding, and currently, we have a team of 20 members.

Why open source?

Before working on ToolJet, I worked with a few enterprise clients as a consultant. Many of these clients were large enough to build and maintain dozens of internal tools. Despite the constant requests from sales, support, and operations teams to add more features and fix the bugs in their internal tools, engineering teams struggled to find the bandwidth to work on the internal utilities.

I tried using a few platforms to build and maintain internal tools. Most of these tools were very expensive, and frequently, they didn't really fit the requirements. We needed modifications, and most utilities didn't support on-premise hosting.

As a Ruby developer, I primarily used ActiveAdmin and RailsAdmin to build internal tools. Both utilities are amazing, but making them work with more than one data source is difficult. I then realized there is a need in the market for a framework that could build user interfaces and connect to multiple data sources. I believe any tool built for developers should be open source. Most of the tools and frameworks that developers use daily result from people from all over the world collaborating in public.

The first commit

Building something like ToolJet needed a full-time commitment. Selling one of my side projects gave me a runway of 5-6 months, and I immediately started working on an idea I'd had in mind for at least two years.

The first commit (rails new) of ToolJet was on April 1, 2021.

Wait! I said the codebase is 100% JavaScript. Continue reading to discover why.

Building and pitching investors

I sat in front of my screens for most of April and May, coding and pitching to investors for a pre-seed round.

My work also included creating the drag-and-drop application builder, documenting everything (including instructions for setting ToolJet up on popular platforms), creating a website, creating posters and blog posts for the launch, and more. The process went well without any major challenges. At this point, the frontend of ToolJet was built using React, with the backend using Ruby on Rails.

While the coding was going well, investor pitches weren't going great. I sent around 40 cold emails to venture capital firms and "angel investors" focused on early-stage funding. While most of them ignored the email, some shared their reasons for rejection, and some scheduled a call.

Most of the calls were the same; I couldn't convince them of an open source business model.

The launch

June 7th was the day of the launch. First, we launched on ProductHunt. Six hours passed, and there were only 70 new signups. But we were trending as the #1 product of the day (and ended up as the #3 product of the week). For posterity, here's the original post.

I also posted on HackerNews around 6 PM, and within an hour, the post was #1. I was very happy that many visitors signed up and starred the repository. Many of these visitors and users reported bugs in the application and documentation. Within eight hours of posting on HN, more than 1,000 GitHub users starred ToolJet's GitHub repository, and there were hundreds of signups for ToolJet cloud. The trend continued for three days, and the repo had 2.4k stars.

Image by:

GitHub StarTrack for ToolJet. (Navaneeth PK, CC BY-SA 4.0)

Getting funding

The traction on GitHub was enough to get noticed by the venture capital (VC) world. The days following the launch were packed with calls. We had other options, but did not seriously consider them, including:

  • Bootstrapping: During the early stages of the product, it was hard to find paying customers, and I did not have enough savings to fund the project until that happened.
  • Building as a side project: While this strategy works great for smaller projects, I didn't feel it would work for ToolJet because we needed to create dozens of integrations and UI widgets before the platform could become useful for customers. As a side project, it might take months or years to achieve that.

I knew it could take months to build the platform I wanted if ToolJet became just a side project. I wanted to accelerate growth by expanding the team, and VC funding was the obvious choice, given the traction.

The good news is that we raised $1.55 million in funding within two weeks of the HN launch.

Stack matters in open source

Soon after the launch, we found that many people wanted to contribute to ToolJet, but they were mostly JavaScript developers. We also realized that for a framework like ToolJet that in the future should have hundreds of data source connectors, only a plugin-based architecture made sense. We decided to migrate from Ruby to TypeScript in August 2021. Even though this took about a month and significant effort, this was one of the best decisions we've made for the project. Today, we have an extensible plugin-based architecture powered by our plugin development kit. We have contributions from over 200 developers. We've written extensively about this migration here and here.

Launching v1.0

Many users have been using ToolJet in production environments since August, and the platform did not show any stability or scalability issues. We were waiting to wrap up the developer platform feature before we called it v1.0. The ToolJet developer platform allows any JavaScript developer to build and publish plugins for ToolJet. Developers are now able to make connectors for ToolJet. Creating a ToolJet connector can take just 30 minutes, including integration tests.

Building a growing community

Image by:

ToolJet Star History (Navaneeth PK, CC BY-SA 4.0)

We didn't spend money on marketing. Most of our efforts in spreading the news about ToolJet have been writing about our learnings and being active in developer communities. We have a team of three members who take care of community queries.

The business model

ToolJet won't be a sustainable business without a commercial product to pay the bills. We've built an enterprise edition of ToolJet, for which customers must pay. There's no limit on usage for the free community edition, and additional features in the enterprise edition are relevant only to large teams. We have very large companies as paying customers right now, but we haven't started monetizing ToolJet aggressively. We have enough money left in the bank to build an even better ToolJet, so our focus currently is on product improvement.

What's next?

We frequently release better versions of ToolJet with the help of constant feedback and contributions from the open source community. Many major improvements and dozens of connectors and UI components are in progress. We're moving faster than ever towards our initial goal of being the open framework that can connect to hundreds of data sources and build even the most complicated user interfaces!

Here's how the open source project, ToolJet, achieved 13,000 stars and 200 contributors in a year's time.

Image by:

Greg Rakozy via Unsplash

Community management

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How to contribute to Hacktoberfest 2022

Wed, 10/19/2022 - 15:00
How to contribute to Hacktoberfest 2022 Benny Ifeanyi … Wed, 10/19/2022 - 03:00

Hacktoberfest is a month-long celebration run by DigitalOcean to celebrate and give back to open source projects and software. The initiative is open to everyone, and the goal is to encourage everyone in our global community to contribute to open source. In this article, I'll answer frequently asked questions about how to participate. I'll also discuss how to contribute to both code and non-code issues.

Hacktoberfest started in 2013 with 700 participants, and since then, the initiative has grown. In 2021, DigitalOcean recorded over 141,000 participants with over 294,451 accepted pull requests.

Why should I participate in Hacktoberfest?

Everyone relies on open source projects today. This initiative is a way of giving back, thanking the maintainers and contributors of these projects, and celebrating these projects.

Besides, contributing to open source projects comes with many benefits, from real-world exposure and community recognition to learning how to collaborate while networking, upskilling, getting a tree planted in your name, and getting a Hacktoberfest t-shirt.

Yes, the first 40,000 participants (maintainers and contributors) who get at least four pull requests accepted before the deadline get a tree planted in their name or the Hacktoberfest 2022 t-shirt.

How do I sign up for Hacktoberfest?

Everyone is welcome regardless of experience or skill, whether it's your first or ninth time. To participate, head over to Hacktoberfest.com and start hacking. ("Hacking" in this context refers to hacking at code or any given task, and not breaking into somebody's computer.)

You can register anytime between September 26 and October 31.

It's a free event, and there's a lot of freedom in how you participate. Of course, that also means there's responsibility. Like any community, Hacktoberfest has rules. You can get banned if you disobey these rules, for example by contributing spammy pull requests or disrupting pull requests made by others. To learn more about the rules, check out the official website.

How can I contribute to Hacktoberfest?

Open source isn't just for developers and people who code. It's for everyone! Hacktoberfest has recently started accepting no-code and low-code contributions, so everyone is included.

You can contribute in various ways, including code contributions as well as low-code and no-code work such as writing, design, and translation.

However, you should know that Hacktoberfest prioritizes quality over quantity.

How do I get started in Hacktoberfest?

I wrote a post a year back on how you can contribute to open source projects. It was the first time I thought of "open source" as anything more than just a buzzword. Since then, I've contributed to NumPy and docToolchain docs.

All you need to get started in Hacktoberfest is a GitHub or GitLab account, a little knowledge of Git, the desire to contribute, and a repository looking for contributors.

Though Hacktoberfest accepts every form of contribution, each contribution must be made through a pull request to a public, unarchived repository and merged by the repository maintainer. This approach makes it easy for Hacktoberfest to track contributions. To do that, you must learn to use Git.
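In practice, the contribution flow looks roughly like this; the fork URL, file, and branch name below are placeholders for your own work:

$ git clone https://github.com/YOUR-USERNAME/YOUR-FORK.git
$ cd YOUR-FORK
$ git checkout -b fix-readme-typo
$ git add README.md
$ git commit -m "Fix typo in README"
$ git push origin fix-readme-typo

After pushing the branch, you open a pull request from your fork on GitHub or GitLab and wait for the maintainer to review and merge it.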

How can I learn Git?

GitHub and GitLab are public code hosting services that use Git, an open source version control system that allows multiple people to contribute to a project simultaneously. You may find these articles helpful:

Find open source projects participating in Hacktoberfest

Some projects and maintainers get listed during the onboarding and participate in Hacktoberfest. These repositories get tagged with the "Hacktoberfest" label so contributors can easily find them.

To contribute, you need to find these labeled repositories and make some Hacktoberfest contributions.

Use GitHub/GitLab topics to find Hacktoberfest projects

Topics are a great place to get started:

Just search for "Hacktoberfest." You can filter the search results by language if you want to contribute to a project using a specific programming language.

Image by:

(Iheagwara Ifeany, CC BY-SA 4.0)

Try a search using GitHub search syntax. For example, the query label:hacktoberfest is:issue is:open no:assignee on GitHub gives you a list of open issues labeled with "Hacktoberfest" that have not been assigned to anyone for resolution.
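You can stack additional qualifiers onto the same query. For example (the language value here is only an illustration), this narrows the results to unassigned open issues in repositories written in Python:

label:hacktoberfest is:issue is:open no:assignee language:python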

Image by:

(Iheagwara Ifeany, CC BY-SA 4.0)

Ruth Ikegah made a video a couple of days ago about using the GitHub search syntax.

Find non-code issues at Hacktoberfest

Try GitHub syntax by using is:design or is:documentation in your search. The result is a list of repositories labeled "Hacktoberfest" with open documentation or design issues that have not been assigned.

Contribute to Hacktoberfest as a technical writer
  • Looking for projects in need of a blog post? Use label:hacktoberfest is:issue is:open no:assignee is:blog
  • Would you rather write or translate documentation? Use label:hacktoberfest is:issue is:open no:assignee is:documentation
Contribute to Hacktoberfest as a designer
  • For UI issues, use label:hacktoberfest is:issue is:open no:assignee is:UI
  • For design issues, use label:hacktoberfest is:issue is:open no:assignee is:design
Start hacking at Hacktoberfest

Hacktoberfest is a great way to give back to the open source community. It's your chance to contribute and get involved. Be respectful when contributing, don't make spammy pull requests, and start hacking!

Participating in Hacktoberfest is a great way to get involved with the open source community wherever you are on your tech journey.

Image by:

WOCinTech Chat. Modified by Opensource.com. CC BY-SA 4.0

Community management Programming Git

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Exploring innovative Open Organization charts

Tue, 10/18/2022 - 15:00
Exploring innovative Open Organization charts Ron McFarland Tue, 10/18/2022 - 03:00

The ability to react quickly and adapt to changing situations is critical in today's business and work environment. In the past, offering efficient, standardized systems was the way to reduce costs and provide more to the public. In today's rapidly changing world, that's not enough. Collaborating, deciding, and executing quickly on a project requires that the traditional organization chart change to strengthen adaptability, transparency, collaboration, inclusivity, and project community—all five Open Organization Principles. Today, there are too many interdependencies to stick to the traditional top-down organization chart.

I just read the book Team of Teams, by Stanley McChrystal, which discusses this concern, particularly in military combat situations. It is small, empowered, trusted, goal-oriented teams working together (and with other teams) that will be successful in the future. Their ability to interact with other teams makes a small group scalable within a large organization. McChrystal writes that adaptability, transparency, and cross-silo collaboration are key to their success. These are three of the Open Organization Principles. I think they're equally valid in the business environment, not just in military operations.

Speed in decision-making and how to address continual unpredictability

When do you make a decision yourself, and when do you take decisions to top management? McChrystal states, "a 70% chance of success today is better than 90% tomorrow when speed of action is critical." These days, the competitors, or enemies, are moving at that pace.

In my article "The 4 components of a great decision: What makes a good open decision?" I wrote that decision-making speed is very important. Well, that's more true than ever, and you can't decide quickly if you need approvals up and down a large vertical organization chart. Quick decisions must be made on the frontline, where the issues are. A horizontal organization that brings the people most directly involved into the decision-making process is required, and that is part of the strength McChrystal is talking about.

Image by:

(Ron McFarland, CC BY-SA 4.0)

These connections should have solid lines, and the vertical lines should be dotted, because communication should go up the line only when needed but flow horizontally minute by minute, in real time.

Information reversal

In another presentation, I talked about an upside-down organization chart, which I called the hierarchy of support. Compare this with a vertical organizational chart.

Image by:

(Ron McFarland, CC BY-SA 4.0)

A typical vertical organization chart has top management in the top box. The staff provides frontline information to superiors so that they can decide.

Then, lines connect downward to departments under the top box, and directives move downward. Task directives flow from those department managers to the staff under them.

In a rapidly changing, unpredictable environment, the superiors should provide surrounding information to the staff so frontline people can make decisions independently. Imagine turning that organization chart upside down, with top management at the bottom and the staff at the top.

Image by:

(Ron McFarland, CC BY-SA 4.0)

With today's information technology, the frontline staff is often better informed than their superiors. Therefore, managers' main job is to support the staff where needed, but the decisions should be made rapidly on the frontline.

McChrystal uses the expression, "Eyes on - Hands off." I think he is suggesting what I'm saying in a different way. He calls top managers giving directives "chess players" and supporting managers "gardeners."

McChrystal started a training company called Crosslead that trains individuals and companies on how this type of organization works. Their name implies the horizontal, frontline communication I mentioned in my upside-down organization chart.

The book mentions Open Organization Principles throughout:

  1. Adaptability, which he calls "resilience."
  2. Collaboration, which is horizontal within teams and between teams.
  3. Community, which combines inclusivity and transparency within teams.
Getting through the forest by knowing the working environment

Imagine your goal is to get through a forest to your home on the other side. Unfortunately, the forest is rapidly changing because of the weather and climate.

One person gives you a map of the best way to go through the forest, but that map was made in the past and might be outdated. It might be the best way to get home, but blockages along that route may force you to return to your starting location.

Another person has a current satellite image of the forest which shows every possible route and its present condition. Furthermore, he has guides spread throughout the forest who can communicate and advise you on the best route.

Wouldn't the second method be more reliable with a rapidly changing forest?

McChrystal's organization chart

It starts with a frontline team, a specific goal, and members' specific tasks. The members select a leader depending on the task at hand. Who is most experienced, informed, and qualified to lead them toward the given team goal?

It might well be that the official leader is the least qualified to make decisions, so the system is very slow at best and freezes at worst. Who will most people follow? That will determine the leader of any given task.

McChrystal writes about the "Perry Principle," under which top management could not give orders at sea because there was no communication system in Admiral Perry's day. McChrystal calls this a "principle" because empowerment was given to frontline staff only as a last resort and only when forced. He thinks this should be reversed: Top management should only make the decision themselves when the frontline people can't decide for one reason or another.

The team chart that McChrystal is proposing is on the right.

Image by:

Team of Teams, page 96.

An exponential growth in frontline connectedness speeds up the communication and action process in a way that the current hierarchical structure cannot handle. The command chart on the left is just too slow in a rapidly changing environment.

By the time the situation is reported, everything has changed and the reported information is obsolete. Therefore, a frontline leader, like a start-up entrepreneur, must have the authority, initiative, intuition, and creative thinking to make decisions and immediately act on them to achieve the best result. Speed determines success or failure.

Up until now, adaptability has mostly been characteristic of small interactive teams rather than large top-down hierarchies.

In this new environment, that frontline leader's superior must withhold decision-making on the one hand but relentlessly support the frontline on the other. This builds frontline decision-making competence, allowing teams to iterate and adjust in a fraction of the normal time.

Attention directed from efficiency to adaptability

McChrystal introduces the work of Frederick Winslow Taylor, who developed reductionist theory and the optimization and standardization of processes. This approach was considered the most efficient way to reduce costs and save energy. Taylor believed there was one ideal way for any process.

So, he researched processes, developed instruction sheets, and instructed the frontline staff just to follow directions. It was a hard and fast line between thinking (by the researcher) and action (by the frontline worker). This approach is fine for repeated, well-known, stable processes, such as factories with complicated but linear, predictable activities, but not for changing environments. Unfortunately, this concept took the initiative to improve away from the frontline operator, as all they had to do was act and not think.

When modification was required, the frontline worker froze, unqualified and unskilled at adapting.

McChrystal writes that his military combat environment is not predictable. It is a "complex system." This complexity has countless unpredictable interdependencies in the environment.

When one event takes place, it may have a massive impact or no impact at all. This results in great unpredictability, and we have to manage that unpredictability. In the past, communication was from a few to a few, with some connected impact. Now it is many to many, and no one knows who or what will be affected. It is totally unpredictable.

I believe this reductionist process is still important, but it can only go so far.

Therefore, those basic practice instruction sheets should come in the form of suggestions only and not orders to follow. McChrystal calls these situations complex systems. It's like opening and walking through a door only to learn of other doors to choose from.

Those other doors cannot be foreseen without walking through the previous door. After selecting one of those doors, you discover more doors to choose from. To be most effective, whenever you select a door, you let everyone in the system know which one you picked and ask for advice if available. This is where real-time transparency is vital. In this environment, planning is not helpful, but feedback is.

Being better equipped and more efficient is not enough in complex environments. Being agile and resilient becomes critical. When disturbances come, the system must continue to function and adjust. This is all-important in a world of continual situational change, and in such a world, planning for disruption is vital. It means "rolling with the punches," or even benefiting from them, by developing an immune system to disruption. When shocks come, options have already been planned, developed, and practiced, and can be applied as needed.

Simply working on one ideal process is not enough. If all the attention is on the execution of one procedure, other more helpful skills may suffer. The shift is away from predicting a single forecast and toward exploring all possibilities and preparing for them.

McChrystal asks us to contrast efficiency and effectiveness. He says, "Efficiency is doing things right. Effectiveness is doing the right thing." He thinks the latter is more important in a complex situation. To do that, people should be skilled in many rarely needed but still necessary tasks.

Collaboration over vertical communication walls

Furthermore, breaching vertical walls between divisions or teams increases the speed of action, particularly where cross-functional collaboration is vital to the speed of response.

According to McChrystal, both between and within teams, collective consciousness is developed over years of joint practice, trust building, cooperation, deep group and individual understanding, bonding, and service to a greater purpose.

The entire group can improvise in a coordinated way when necessary. Teamwork is a process of reevaluating everyone's move and intent, constant messaging, and real-time adjustment.

Barriers between teams

As you move down the traditional organization chart, motivation and contextual awareness become more limited and specific, and the distance from the overall organization's objectives grows. Members are tight within their team but separated from the other groups within the organization, and possibly from the entire organization's goals.

Image by:

Team of Teams, page 129

Real-time communication and connections between teams

In a complex, rapidly changing environment, the chart below is more appropriate, with a good deal of continual information flow and connection between teams.

Image by:

Team of Teams, page 129

Team members tackling complex environments must all grasp not just their team's purpose but the overarching goal of the entire organizational system. They must also consider how their activities impact other groups.

To be successful, team participation and team-to-team participation are vital, according to McChrystal. Jim Whitehurst's book makes the same point about letting and encouraging everyone, even the quiet people, to speak up in meetings.

I wrote about it in my first article, "When empowering employee decision-making, intent is everything," posted on April 19, 2016. This concept is true when trying to connect teams as well.

Teams working on a problem and collaborating in real time can perform tasks more concurrently rather than sequentially, saving a massive amount of valuable time.

Wrap up

This article presents several images of new organization chart concepts. Unofficially, to get things done, much horizontal communication has been going on for decades. The difference now is that updates are in minutes and not at weekly or monthly meetings.

I also discussed the importance of the speed of decision-making in today's working environment and the need for a new real-time communication flow system. I mentioned that at least three critical Open Organization Principles, namely adaptability, transparency, and collaboration, are vitally important to make communication flow and allow faster decision-making and execution. Furthermore, I presented that just having a highly efficient and low-cost system is not enough when faced with a rapidly changing, unpredictable working environment. An approach better able to adapt to change needs to be introduced and put into use, namely a new open organization chart.

In the second part of this article, I will discuss how this type of organization can work, including how to develop it and improve it. Also, I'll give examples of how it can work in various situations.

Flip the traditional organizational chart upside-down and get a glimpse of the future of work.

Image by:

Opensource.com

The Open Organization

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Open source DevOps tools in a platform future

Mon, 10/17/2022 - 15:00
Open source DevOps tools in a platform future Will Kelly Mon, 10/17/2022 - 03:00

The open source roots of DevOps tools are undeniable, even with a prediction that the global DevOps market will reach $17.8 billion by 2026. The changing world of work, security and compliance concerns, and venture capital firms are all pushing the market toward DevOps platforms, where development teams can access a complete end-to-end DevOps toolchain in the cloud.

The current state of open source DevOps tools

Let's get one thing straight: There's no way open source tools will disappear from the DevOps world. Right now, there's a balance between open source and vendor DevOps tools, with developers using what works for them. Indeed, there are plenty of cases when a development team chooses an open source tool for their DevOps pipeline only to upgrade later to a commercial version.

3 examples of open source DevOps tools

Here are some examples of open source DevOps tools with a commercial business built around them.

Git

Git, the source code management tool, is probably one of the main foundations for DevOps toolchains, serving as the source code repository.

The two best-known commercial offerings built on Git are GitLab and GitHub. GitLab accepts contributions to its open source project. GitHub is also embarking on an effort to become a DevOps platform, with GitHub Copilot, an AI pair programmer, launching to mixed reviews and criticism from some open source groups.

Jenkins

An open source automation server, Jenkins is prized for its easy installation, configuration, and extensibility.

CloudBees offers JenkinsX, an open-source solution that provides automated continuous integration and continuous delivery (CI/CD) and automated testing tools for cloud-native applications on Kubernetes. They also provide commercial support for JenkinsX, including:

  • Access to CloudBees technical expertise
  • 24x7 technical support
  • Access to CloudBees documentation and online knowledge base
Kubernetes

The growth of Kubernetes is undeniable as more organizations seek an enterprise-grade container orchestration solution. Despite criticisms about its complexity, Kubernetes adoption continues to grow.

There's an entire burgeoning industry around Kubernetes, and with good reason. According to Allied Market Research, the global container and Kubernetes security market was valued at $714 million in 2020 and is projected to reach $8,242 million by 2030.

DevOps toolchains today

There are still plenty of build-your-own (BYO) CI/CD toolchains in play across industries, and the open source projects powering DevOps functions are still prospering.

BYO toolchains are integration-ready and very extensible, which has always been a strength for organizations continuing to iterate on their DevOps practices. The lack of a standard bill of materials might prove troublesome in enterprises seeking standardization for business, IT, and security reasons.

While the advent of DevOps platforms isn't going unnoticed, many organizations migrated their CI/CD toolchains to the public cloud well before the pandemic. The security of the toolchain itself has long been a rising concern, and public cloud infrastructure provides Identity Access Management (IAM) and other security features to control access.

DevOps platforms: Friend or foe?

A DevOps platform is an end-to-end solution that places all functions of the CI/CD toolchain into the cloud. Examples of DevOps platforms include GitLab and Harness. GitHub is also making moves to become a DevOps platform in its own right.

Advantages (even if only in the eyes of enterprise buyers)

DevOps platforms are attractive to enterprise buyers who are already comfortable with the consumption-based and subscription-based pricing of the SaaS and cloud industries. Concerns about maintenance, security, compliance, and developer productivity are certainly at the top of mind for technology leaders in this remote and hybrid work world. Standardizing on a DevOps platform becomes an appealing story to these people.

Disadvantages

Age-old concerns about vendor lock-in come to mind when depending on a vendor for a DevOps toolchain. Development teams won't get quite the same extensibility they had when they built and maintained their own toolchains from scratch, much less the freedom to bring in new tools to improve their workflows.

There are also potential economic disadvantages with DevOps platform providers. Think what might happen to an overvalued DevOps tools startup that doesn't meet its investors' lofty financial goals. Likewise, there could be smaller startup vendors that may not receive their next round of funding and fade away into irrelevance.

While the advent of a DevOps platform makes sense in many ways, it does work against the open source ethos that has helped build the DevOps tools we use today.

DevOps tools: An inflection point

Security and compliance concerns for DevOps toolchains continue to mount as working models change. It's only natural.

The changing world of work

How we work affects DevOps teams just like the rest of the enterprise. Remote and hybrid DevOps teams require secure toolchains. Collaboration and reporting requirements across pipelines are also changing, driven by asynchronous work and by executives demanding a return to the office.

Software supply chain security market

The software supply chain security market draws much attention after high-profile attacks and the federal government response. No organization has yet blamed open source for a software supply chain attack, but we're going to see an extension of DevOps/DevSecOps practices and tools to combat this threat. When it's all said and done, though, DevOps/DevSecOps tools and practices will outlast some startups that pivoted to the trend.

Final thoughts

It's far from game over for OSS projects in the DevOps space, but DevOps stakeholders have a right to start asking questions about the toolchains of the future. OSS DevOps projects, in turn, need to consider their own future, especially in light of growing security and compliance concerns that directly impact pipelines.

There's a future of coopetition where the DevOps platform providers donate time, money, and resources to the open source tools that serve as a foundation for their platforms. An interesting example of a potential future is OpsVerse, which offers a DevOps platform with open source tools they manage for their customers.

Then again, there's also a future where the open source DevOps tools projects continue to prosper and innovate as more enterprise-built toolchains migrate to the cloud in more significant numbers.

[ Kickstart an organizational culture change. Read the first article in a series, DevSecOps: 5 tips for seeding a culture transformation ]

While the commercial DevOps tools market looks to platforms, it's time for open source DevOps tools to redefine their future.

Image by:

Opensource.com

DevOps Kubernetes CI/CD

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Why you should consider Rexx for scripting

Mon, 10/17/2022 - 15:00
Why you should consider Rexx for scripting Howard Fosdick Mon, 10/17/2022 - 03:00

How do you design a programming language to be powerful yet still easy to use? Rexx offers one example. This article describes how Rexx reconciles these two seemingly contradictory goals. 

History of Rexx programming language

Several decades ago, computers were shifting from batch to interactive processing. Developers required a scripting or "glue" language to tie systems together. The tool needed to do everything from supporting application development to issuing operating system commands to functioning as a macro language.

Mike Cowlishaw, IBM Fellow, created a solution in a language he named Rexx. It is widely considered the first general-purpose scripting language.

Rexx was so easy to use and powerful that it quickly permeated all of IBM's software. Today, Rexx is the bundled scripting language on all of IBM's commercial operating systems (z/OS, z/VM, z/VSE, and IBM i). It's no surprise that in the 1990s, IBM bundled Rexx with PC-DOS and then OS/2. Rexx popped up in Windows in the XP Resource Kit (before Microsoft decided to lock in customers with its proprietary scripting languages, VBScript and PowerShell). Rexx also emerged as the scripting language for the popular Amiga PC.

Open source Rexx

With Rexx spreading across platforms, standardization was needed. The American National Standards Institute (ANSI) stepped forward in 1996.

That opened the floodgates. Open source Rexx interpreters started appearing. Today, more than a half dozen interpreters run on every imaginable platform and operating system, along with many open source tools.

Two Rexx variants deserve mention. Open Object Rexx is a compatible superset of procedural or "classic" Rexx. ooRexx is message-based and provides all the classes, objects, and methods one could hope for. For example, it supports multiple inheritance and mixin classes.

Paralleling the rise in Java's popularity, Mike Cowlishaw invented NetRexx. NetRexx is a Rexx variant that fully integrates with everything Java (including its object model) and runs on the Java virtual machine.

ooRexx went open source in 2004; NetRexx in 2011. Today the Rexx Language Association enhances and supports both products. The RexxLA also supports Regina, the most popular classic Rexx interpreter, and BSF4ooRexx, a tool that fully integrates ooRexx with Java. Everything Rexx is open source.

Layered design

So, back to the initial conundrum. How does a programming language combine power with ease of use?

One part of the solution is a layered architecture. Operators and a minimal set of instructions form the core of the classic Rexx language:

Image by:

(Howard Fosdick, CC BY-SA 4.0)

Surrounding the core are the language's 70-odd built-in functions:

  • Arithmetic
  • Comparison
  • Conversion
  • Formatting
  • String manipulation
  • Miscellaneous

Additional power is added in the form of external function libraries. You can invoke external functions from within Rexx programs as if they were built in. Simply make them accessible by proper reference at the top of your script.
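As a sketch of what that reference can look like in classic Rexx, here's how a script might register and call a function from the RexxUtil library. The library and function names below follow common convention, but the exact names depend on your interpreter, so treat them as assumptions and check your documentation:

/* Register an external function, then call it as if it were built in */
call RxFuncAdd 'SysFileTree', 'rexxutil', 'SysFileTree'
call SysFileTree '*.txt', 'found.', 'F'    /* collect matching files into the FOUND. stem */
say 'Found' found.0 'text file(s)'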

Function libraries are available for everything: GUIs, databases, web services, OS services, system commands, graphics, access methods, advanced math, display control, and more. The result is a highly-capable open source ecosystem.

Finally, recall that Open Object Rexx is a superset of classic Rexx. So you could use procedural Rexx and then transition your skills and code to object programming by moving to ooRexx. In a sense, ooRexx is yet another Rexx extension, this time into object-oriented programming.

Rexx is a human-oriented language

Rexx glues all its instructions, functions, and external libraries together in a consistent, dead-simple syntax. It doesn't rely on special characters, arcane syntax, or reserved words. It's case-insensitive and free-form.

This approach shifts the burden of programming from programmer to machine to the greatest degree possible. The result is a comparatively easy language to learn, code, remember, and maintain. Rexx is intended as a human-oriented language.

Rexx implements the principle of least astonishment, the idea that systems should work in ways that people assume or expect. For example, Rexx's default decimal arithmetic—with precision you control—means you aren't surprised by rounding errors.
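Here's a minimal illustration of that decimal behavior, runnable in any classic Rexx interpreter:

/* Rexx arithmetic is decimal, with precision under your control */
say 0.1 + 0.2        /* displays 0.3, with no binary floating-point surprise */
numeric digits 30    /* raise precision to 30 significant digits             */
say 1 / 3            /* displays 0.333333333333333333333333333333            */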

Another example: All variables contain strings. If the strings represent valid numbers, one can perform arithmetic operations with them. This simple concept of dynamic typing makes all data visible and simplifies tracing and debugging.
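For example, a short sketch of that string-and-number duality (the variable name is arbitrary):

/* Every value is a string; numeric-looking strings work in arithmetic */
answer = '41'
answer = answer + 1            /* answer is now the string '42' */
say 'The answer is' answer     /* displays: The answer is 42    */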

Rexx capitalizes on the advantages of interpreters to simplify program development. Tracing facilities allow developers to direct and witness program execution in various ways. For example, one can single-step through code, inspect variable values, change them during execution, and more.
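For instance, the TRACE instruction turns on this kind of tracing. A minimal sketch, using the ?R option, which traces results and pauses so you can inspect or change variables before continuing:

/* Interactively trace results: the interpreter pauses after traced clauses */
trace ?r
total = 0
do i = 1 to 3
   total = total + i
end
say 'total =' total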

Rexx also raises common error conditions that the programmer can easily trap. This feature makes for more standardized, reliable code.

Arrays

Rexx's approach to arrays (or tables) is a good example of how it combines simplicity with power.

Like all Rexx variables, arrays don't have to be declared in advance. They automatically expand up to the size of available memory. This feature relieves programmers of the burden of memory management.

To form an array, a so-called compound variable stitches together a stem variable with one or more subscripts, as in these examples:

my_array.1
my_table.i.j
my_list.index_value
my_list.string_value
my_tree.branch_one
my_tree.branch_one.branch_two

Subscripts can represent numeric values, as you may be accustomed to in standard table processing.

Alternatively, they can contain strings. String subscripts allow you to build associative arrays using the same simple syntax as common tables. Some refer to associative arrays as key-value pairs or content addressable memory. Allowing array contents to be accessed by arbitrary strings rather than simply numeric values opens up an entirely new world of algorithmic solutions.
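Here's a small sketch of an associative array in action (the variable names and sample text are invented for illustration):

/* Count word occurrences using string subscripts */
text = 'the quick brown fox jumps over the lazy dog the end'
count. = 0                        /* default every element of the stem to 0 */
do i = 1 to words(text)
   w = word(text, i)
   count.w = count.w + 1
end
w = 'the'
say w 'appears' count.w 'times'   /* displays: the appears 3 times */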

With this flexible but consistent syntax, you can build almost any data structure: Lists, two- or three- or n-dimensional tables, key-value pairs, balanced trees, unbalanced trees, dense tables, sparse tables, records, rows, and more.

The beauty is in simplicity. It's all based on the notion of compound variables.

Wrap up

In the future, I'll walk through some Rexx program examples. One real-world example will show how a short script using associative arrays reduced the runtime of a legacy program from several hours down to less than a minute.

You can join the Rexx Language Association for free. For free Rexx downloads, tools, tutorials, and more, visit RexxInfo.org.

Rexx is arguably the first general-purpose scripting language. It's powerful yet easy to use.

Image by:

Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0

Programming

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

What’s new in GNOME 43?

Sun, 10/16/2022 - 15:00
What’s new in GNOME 43? Jim Hall Sun, 10/16/2022 - 03:00

I love the GNOME desktop, and I use it as my daily Linux desktop environment. I find with GNOME, I can focus on the stuff I need to get done, but I still have flexibility to make the desktop look and act the way I want.

The GNOME Project recently released GNOME 43, the latest version of the GNOME desktop. I met with GNOME developer Emmanuele Bassi to ask a few questions about this latest release:

Jim Hall (Jim): GNOME has lots of great desktop features. What are some of the new features in GNOME 43?

Emmanuele Bassi (Emmanuele): GNOME 43 has a complete redesign of the system status menu in the Shell. The new design is meant to give quick and easy access to various settings: network connections and VPNs; audio input and output sources and volumes; toggling between light and dark styles. It also has a shortcut for taking a screenshot or starting a screen recording.

GNOME core applications have also been ported to the new major version of the GNOME toolkit, GTK4. GTK4 is more efficient when it comes to its rendering pipeline, which leads to smoother transitions and animations. Additionally, GNOME applications use libadwaita, which provides new UI elements and adaptive layouts that can seamlessly scale between desktop and mobile form factors.

The GNOME file manager, Nautilus, is one of the applications that has been ported over to GTK4 and libadwaita, and it has benefitted from the new features in the core platform; it’s now faster, and it adapts its UI when the window is resized.

The system settings can now show device security information, including manufacturing errors and hardware misconfiguration, as well as possible security issues like device tampering. Lots of work is planned for future releases, as device security is an area of growing concern.

Jim: What do you love most about GNOME 43?

Emmanuele: The most important feature of GNOME, one that I constantly take advantage of and that I always miss when I have to deal with other operating systems is how much the OS does not get in the way of what I’m doing. Everything is designed to let me concentrate on my job, without interruptions. I don’t have bells and whistles constantly on my screen, competing for attention. Everything is neatly tucked away, ready to be used only when I need to.

Jim: Many folks are familiar with GNOME today, but may not be familiar with its history. How did GNOME get started?

Emmanuele: GNOME started in 1997, 25 years ago, as a project for using existing free and open source components to create a desktop environment for everyone that would be respectful of users’ and developers’ freedom. At the time there were only commercial desktops for Unix, or desktops that were based on non-free components. Being able to take the entire desktop, learn from it, and redistribute it has always been a powerful motivator for contributors—even commercial ones.

Over the past 25 years, GNOME contributors have worked not just on making the desktop, but creating a platform capable of developing and distributing applications.


Jim: Open source projects keep going because of a strong community. What keeps the GNOME community strong?

Emmanuele: I don’t pretend to speak for everyone in the project, but for myself I think the main component is the respect of every voice within the community of contributors, which comes from the shared vision of creating an entirely free and open platform. We all know where we want to go, and we are all working towards the same goal. Sometimes, we may end up pulling in different directions, which is why donating to entities like the GNOME Foundation, which sponsor gatherings and conferences, is crucial: they allow a more comprehensive communication between all the involved parties, and at the end we get better results for it.

GNOME also takes very seriously respectful communication between members of the community; we have a strong code of conduct, which is enforced within the community itself and covers all venues of communication, including in person events.

Jim: GNOME established the Human Interface Guidelines (HIG) to unify the GNOME design and GNOME app interfaces. How did the HIG come about?

Emmanuele: The Human Interface Guidelines (HIG) came into being after Sun did a usability study on GNOME 1, one of the very first usability studies for a free software project. The findings from that study led to the creation of a standardized document that projects under the GNOME umbrella would have to follow, which is how we ended up with GNOME 2, back in 2002.

The HIG was a rallying point and a symbol, a way to demonstrate that the entire project cared about usability and accessibility, and it provided the tools to both desktop and application developers to create a consistent user experience.

Over the years, the HIG moved away from being a complete checklist of pixels of padding and grids of components, and instead it now provides design principles, UI patterns, conventions, and resources for contributors and application developers. The HIG now has its own implementation library, called libadwaita, which application developers can use when targeting GNOME, and immediately benefit from a deeper integration within the platform without having to re-implement the various styles and patterns manually.

Thanks to Emmanuele Bassi for answering this interview. You can find GNOME at https://www.gnome.org/

Read the release announcement for GNOME 43 at https://release.gnome.org/43/

Learn about what’s new in GNOME 43 for developers at https://release.gnome.org/43/developers/

I got a glimpse into the popular Linux desktop's latest version by checking in with GNOME developer, Emmanuele Bassi.

Image by: Gunnar Wortmann via Pixabay. Modified by Opensource.com. CC BY-SA 4.0.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Can Kubernetes help solve automation challenges?

Fri, 10/14/2022 - 15:00
Can Kubernetes help solve automation challenges? Rom Adams Fri, 10/14/2022 - 03:00

I started my automation journey when I adopted Gentoo Linux as my primary operating system in 2002. Twenty years later, automation is not yet a done deal. When I meet with customers and partners, they share automation wins within teams, but they also describe the challenges to achieving similar success at an organizational level.

Most IT organizations have the ability to provision a virtual machine end to end, reducing what used to be a four-week lead time to just five minutes. That level of automation is itself a complex workflow, requiring networking (IP address management, DNS, proxy, networking zones, and so on), identity access management, hypervisor, storage, backup, updating the operating system, applying the latest configuration files, monitoring, security and hardening, and compliance benchmarking. Wow!

It's not easy to address the business need for high velocity, scaling, and on-demand automation. For instance, consider the classic webshop or an online government service to file tax returns. The workload has well-defined peaks that need to be absorbed.

A common approach to handling such a load is an oversized server farm, kept on standby by a specialized team of IT professionals who monitor the seasonal influx of customers or citizens. But what everybody really wants is a just-in-time deployment of the entire stack: infrastructure running workloads within a hybrid cloud scenario, following a "build-consume-trash" model to optimize costs while benefiting from infinite elasticity.

In other words, everybody wants the utopian "cloud experience."

Can the cloud really deliver?

All is not lost, thanks mainly to the way Kubernetes has been designed. The exponential adoption of Kubernetes fuels innovation, displacing standard legacy practices for managing platforms and applications. Kubernetes requires the use of Everything-as-Code (EaC) to define the desired state of all resources, from simple compute nodes to TLS certificates. Kubernetes compels the use of three major design constructs:

  • A standard interface to reduce integration friction between internal and external components
  • An API-first and API-only approach to standardize the CRUD (Create, Read, Update, Delete) operations of all its components
  • Use of YAML as a common language to define all desired states of these components in a simple and readable way

These three key components are essentially the same requirements for choosing an automation platform, at least if you want to ease adoption by cross-functional teams. This also blurs the separation of duties between teams, helping to improve collaboration across silos, which is a good thing!
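To make the third construct concrete, here is a minimal sketch of a desired state written as a Kubernetes Deployment manifest. The application name, image, and port are placeholders invented for this example; the point is that the file declares what should exist (three identical replicas of a service), not the steps needed to create it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webshop                  # placeholder name
spec:
  replicas: 3                    # desired state: three identical pods
  selector:
    matchLabels:
      app: webshop
  template:
    metadata:
      labels:
        app: webshop
    spec:
      containers:
        - name: webshop
          image: registry.example.com/webshop:1.2.3   # placeholder image
          ports:
            - containerPort: 8080

Kubernetes continuously reconciles the cluster toward this declared state, which is exactly the property you want from an automation platform.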

As a matter of fact, customers and partners adopting Kubernetes are ramping up to a state of hyper-automation. Kubernetes organically drives teams to adopt multiple DevOps foundations and practices—like EaC, version control with Git, peer reviews, documentation as code—and encourages cross-functional collaboration. These practices help mature a team's automation skills, and they help a team get a good start in GitOps and CI/CD pipelines dealing with both application lifecycle and infrastructure.

Making automation a reality

You read that right! The entire stack for complex systems like a webshop or government reporting can be defined in clear, understandable, universal terms that can be executed on any on-prem or cloud provider. An autoscaler with custom metrics can be defined to trigger a just-in-time deployment of your desired stack to address the influx of customers or citizens during seasonal peaks. When metrics are back to normal, and cloud compute resources don't have a reason to exist anymore, you trash them and return to regular operations, with a set of core assets on-prem taking over the business until the next surge.
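As a rough sketch of what such a just-in-time scale-out can look like, the hypothetical webshop Deployment from the earlier example could be paired with an autoscaler. This sketch uses the standard autoscaling/v2 API and, for brevity, a plain CPU metric rather than the custom metrics mentioned above:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webshop
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webshop            # the placeholder Deployment sketched earlier
  minReplicas: 3             # baseline capacity for regular operations
  maxReplicas: 50            # headroom for the seasonal peak
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

When the surge is over, the replica count drops back to the baseline and the extra resources are released.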

The chicken and the egg paradox

Considering Kubernetes and cloud-native patterns, automation is a must. But it raises an important question: Can an organization adopt Kubernetes before addressing the automation strategy?

It might seem that starting with Kubernetes could inspire better automation, but that's not a foregone conclusion. A tool is not an answer to the problem of skills, practices, and culture. However, a well-designed platform can be a catalyst for learning, change, and cross-functional collaboration within an IT organization.

Get started with Kubernetes

Even if you feel you missed the automation train, don't be afraid to start with Kubernetes on an easy, uncomplicated stack. Embrace the simplicity of this fantastic orchestrator and iterate with more complex needs once you've mastered the initial steps.

Automation at the organization level has been an elusive goal, but Kubernetes might be able to change all that.

Image by: opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

What you need to know about compiling code

Thu, 10/13/2022 - 15:00
What you need to know about compiling code Alan Smithee Thu, 10/13/2022 - 03:00

Source code must be compiled in order to run, and in open source software everyone has access to source code. Whether you've written code yourself and you want to compile and run it, or whether you've downloaded somebody's project to try it out, it's useful to know how to process source code through a compiler, and also what exactly a compiler does with all that code.

Build a better mousetrap

We don't usually think of a mousetrap as a computer, but believe it or not, it does share some similarities with the CPU running the device you're reading this article on. The classic (non-cat) mousetrap has two states: it's either set or released. You might consider that on (the kill bar is set and stores potential energy) and off (the kill bar has been triggered). In a sense, a mousetrap is a computer that calculates the presence of a mouse. You might imagine this code, in an imaginary language, describing the process:

if mousetrap == 0 then
    There's a mouse!
else
    There's no mouse yet.
end

In other words, you can derive mouse data based on the state of a mousetrap. The mousetrap isn't foolproof, of course. There could be a mouse next to the mousetrap, and the mousetrap would still be registered as on because the mouse has not yet triggered the trap. So the program could use a few enhancements, but that's pretty typical.

Switches

A mousetrap is ultimately a switch. You probably use a switch to turn on the lights in your house. A lot of information is stored in these mechanisms. For instance, people often assume that you're at home when the lights are on.

You could program actions based on the activity of lights on in your neighborhood. If all lights are out, then turn down your loud music because people have probably gone to bed.

A CPU uses the same logic, multiplied by several orders of magnitude and shrunk to a microscopic level. When a CPU receives an electrical signal at a specific register, then some other register can be tripped, and then another, and so on. If those registers are made to be meaningful, then there's communication happening. Maybe a chip somewhere on the same motherboard becomes active, or an LED lights up, or a pixel on a screen changes color.


What comes around goes around. If you really want to detect a rodent in more places than the one spot you happen to have a mousetrap set, you could program an application to do just that. With a webcam and some rudimentary image recognition software, you could establish a baseline of what an empty kitchen looks like and then scan for changes. When a mouse enters the kitchen, there's a shift in the pixel values where there was previously no mouse. Log the data, or better yet trigger a drone that focuses in on the mouse, captures it, and moves it outside. You've built a better mousetrap through the magic of on and off signals.

Compilers

A code compiler translates human-readable source code into machine language that speaks directly to the CPU. It's a complex process because CPUs are legitimately complex (even more complex than a mousetrap), but also because the process is more flexible than it strictly "needs" to be. Not all compilers are flexible, though. Some compilers have exactly one target and accept code files only in a specific layout, so for them the process is relatively straightforward.

Luckily, modern general-purpose compilers aren't that simple. They allow you to write code in a variety of languages, let you link libraries in different ways, and can target several different architectures. The GNU Compiler Collection (GCC) has over 50 lines of options in its --help output, the LLVM clang compiler has over 1,000 lines in its --help output, and the GCC manual contains over 100,000 words.

You have lots of options when you compile code.

Of course, most people don't need to know all the possible options. There are sections in the GCC man page I've never read, because they're for Objective-C or Fortran or chip architectures I've never even heard of. But I value the ability to compile code for several different architectures, for 64-bit and 32-bit, and to run open source software on computers the rest of the industry has left behind.

The compilation lifecycle

Just as importantly, there's real power to understanding the different stages of compiling code. Here's the lifecycle of a simple C program:

  1. C source with macros (.c) is preprocessed with cpp to render an .i file.

  2. C source code with expanded macros (.i) is translated with gcc to render an .s file.

  3. A text file in Assembly language (.s) is assembled with as into an .o file.

  4. Binary object code with instructions for the CPU, and with offsets not tied to memory areas relative to other object files and libraries (*.o) is linked with ld to produce an executable.

  5. The final binary file either has all required objects within it, or it's set to load linked dynamic libraries (*.so files).

And here's a simple demonstration you can try (with some adjustment for library paths):

$ cat << EOF >> hello.c
#include <stdio.h>
int main(void)
{
    printf("hello world\n");
    return 0;
}
EOF
$ cpp hello.c > hello.i
$ gcc -S hello.i
$ as -o hello.o hello.s
$ ld -static -o hello \
-L/usr/lib64/gcc/x86_64-slackware-linux/5.5.0/ \
/usr/lib64/crt1.o /usr/lib64/crti.o hello.o \
/usr/lib64/crtn.o  --start-group -lc -lgcc \
-lgcc_eh --end-group
$ ./hello
hello world

Attainable knowledge

Computers have become amazingly powerful, and pleasantly user-friendly. Don't let that fool you into believing either of the two possible extremes: computers aren't as simple as mousetraps and light switches, but they also aren't beyond comprehension. You can learn about compiling code, about how to link, and compile for a different architecture. Once you know that, you can debug your code better. You can understand the code you download. You may even fix a bug or two. Or, in theory, you could build a better mousetrap. Or a CPU out of mousetraps. It's up to you.

Download our new eBook: An open source developer's guide to building applications

Use this handy mousetrap analogy to understand compiling code. Then download our new eBook, An open source developer's guide to building applications.

Image by: WOCinTech Chat. Modified by Opensource.com. CC BY-SA 4.0

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Asynchronous programming in Rust

Thu, 10/13/2022 - 15:00
Asynchronous programming in Rust Stephan Avenwedde Thu, 10/13/2022 - 03:00

Asynchronous programming is incredibly useful but difficult to learn, and you can't avoid it if you want to create a fast and reactive application. Applications with a lot of file or network I/O, or with a GUI that should always stay responsive, benefit tremendously from async programming: tasks can be executed in the background while the user continues to provide input. Async programming is possible in many languages, each with its own style and syntax, and Rust is no exception. In Rust, this feature is called async-await.

While async-await has been an integral part of Rust since version 1.39.0, most applications still depend on community crates for the runtime. Aside from a larger binary, async-await in Rust comes at zero cost. This article gives you an insight into asynchronous programming in Rust.

Under the hood

To get a basic understanding of async-await in Rust, you literally start in the middle.

The center of async-await is the future trait, which declares the method poll (I cover this in more detail below). If a value can be computed asynchronously, the related type should implement the future trait. The poll method is called repeatedly until the final value is available.

At this point, you could repeatedly call the poll method from your synchronous application manually in order to get the final value. However, since I'm talking about asynchronous programming, you can hand over this task to another component: the runtime. So before you can make use of the async syntax, a runtime must be present. I use the runtime from the tokio community crate in the following examples.

A handy way of making the tokio runtime available is to use the #[tokio::main] macro on your main function:

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    println!("Start!");
    sleep(Duration::from_secs(1)).await;
    println!("End after 1 second");
}

When the runtime is available, you can await futures. Awaiting means that execution pauses at that point until the future has completed. Awaiting causes the runtime to invoke the poll method repeatedly, which drives the future to completion.

In the above example, tokio's sleep function returns a future that finishes when the specified duration has passed. When you await this future, its poll method is called repeatedly until the future completes. Furthermore, the main() function itself also returns a future because of the async keyword before the fn.

So if you see a function marked with async:

async fn foo() -> usize { /**/ }

Then it is just syntactic sugar for:

fn foo() -> impl Future<Output = usize> { async { /**/ } }

Pinning and boxing

To remove some of the shrouds and clouds of async-await in Rust, you must understand pinning and boxing.

If you are dealing with async-await, you will quickly stumble over the terms boxing and pinning. Since I find the available explanations on the subject rather difficult to understand, I have set myself the goal of explaining the issue more simply.

Sometimes it is necessary to have objects that are guaranteed not to be moved in memory. This comes into effect when you have a self-referential type:

use std::marker::PhantomPinned;

struct MustBePinned {
    a: i16,
    b: *const i16,       // points to `a` in the same instance
    _pin: PhantomPinned, // opts the type out of the automatic Unpin implementation
}

If member b is a pointer to member a of the same instance, then b becomes invalid when the instance is moved: the location of a has changed, but b still points to the previous location. You can find a more comprehensive example of a self-referential type in the Rust Async book. All you need to know for now is that an instance of MustBePinned must not be moved in memory. Types like MustBePinned do not implement the Unpin trait (here the PhantomPinned marker field opts it out), which would otherwise allow it to be moved within memory safely. In other words, MustBePinned is !Unpin.

Back to the future: By default, a future is also !Unpin; thus, it should not be moved in memory. So how do you handle those types? You pin and box them.

The Pin type wraps pointer types, guaranteeing that the values behind the pointer won't be moved. The Pin type ensures this by not providing a mutable reference of the wrapped type. The type will be pinned for the lifetime of the object. If you accidentally pin a type that implements Unpin (which is safe to move), it won't have any effect.

In practice: If you want to return a future (!Unpin) from a function, you must box it. Boxing allocates the value on the heap instead of the stack and thus ensures that it can outlive the current function without being moved. In particular, if you want to hand over a future, you can only hand over a pointer to it, as the future must be of type Pin<Box<dyn Future>>.
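Here is a small sketch of what that looks like in practice. The function name and return value are invented for illustration; the important parts are the Pin<Box<dyn Future>> return type and the Box::pin call:

use std::future::Future;
use std::pin::Pin;

// Hypothetical helper: returns a future as a pinned, boxed trait object.
// The async block produces a !Unpin future, so it is allocated on the heap
// and pinned with Box::pin before it is handed over.
fn delayed_answer() -> Pin<Box<dyn Future<Output = u32>>> {
    Box::pin(async {
        // ... some asynchronous work would happen here ...
        42u32
    })
}

#[tokio::main]
async fn main() {
    // Pin<Box<dyn Future>> itself implements Future, so it can be awaited.
    let answer = delayed_answer().await;
    println!("answer = {}", answer);
}

The BoxFuture type from the futures crate, mentioned below, is essentially a type alias for such a pinned, boxed future (with an additional Send bound).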

Using async-await, you will certainly stumble upon this boxing and pinning syntax. To wrap this topic up, you just have to remember this:

  • Rust does not know whether a type can be safely moved.
  • Types that shouldn't be moved must be wrapped inside Pin.
  • Most types are Unpin. They implement the Unpin trait and can be freely moved within memory.
  • If a type is wrapped inside Pin and the wrapped type is !Unpin, it is not possible to get a mutable reference out of it.
  • Futures created by the async keyword are !Unpin and thus must be pinned.
Future trait

In the future trait, everything comes together:

pub trait Future {
    type Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

Here is a simple example of how to implement the future trait:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct MyCounterFuture {
        cnt : u32,
        cnt_final : u32
}

impl MyCounterFuture {
        pub fn new(final_value : u32) -> Self {
                Self {
                        cnt : 0,
                        cnt_final : final_value
                }
        }
}
 
impl Future for MyCounterFuture {
        type Output = u32;

        fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
                self.cnt += 1;
                if self.cnt >= self.cnt_final {
                        println!("Counting finished");
                        return Poll::Ready(self.cnt_final);
                }

                cx.waker().wake_by_ref();
                Poll::Pending
        }
}

#[tokio::main]
async fn main(){
        let my_counter = MyCounterFuture::new(42);

        let final_value = my_counter.await;
        println!("Final value: {}", final_value);
}

In this example, the future is initialized with the value up to which it should count, stored in cnt_final. Each time the poll method is invoked, the internal value cnt is incremented by one. As long as cnt is less than cnt_final, the future signals the runtime's waker that it is ready to be polled again and returns Poll::Pending to indicate that it has not completed yet. Once cnt reaches cnt_final, the poll method returns Poll::Ready, signaling that the future has completed and providing the final value.


This is just a simple example, and of course, there are other things to take care of. If you are considering creating your own futures, I highly suggest reading the Async in depth chapter of the tokio crate's documentation.

Wrap up

Before I wrap things up, here is some additional information that I consider useful:

  • Create a new pinned and boxed value using Box::pin.
  • The futures crate provides the BoxFuture type, which lets you use a future as the return type of a function.
  • The async_trait crate lets you define async functions in traits (which the language does not yet support directly).
  • The pin-utils crate provides macros to pin values.
  • tokio's try_join! macro awaits multiple futures that return a Result and fails as soon as one of them returns an error (see the sketch after this list).
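Here is a small, self-contained sketch of that last point. The two helper functions and their delays are invented for illustration; both futures run concurrently, and try_join! returns early with an error if either of them fails:

use tokio::time::{sleep, Duration};

// Two hypothetical fallible async tasks.
async fn load_config() -> Result<String, String> {
    sleep(Duration::from_millis(100)).await;
    Ok("config loaded".to_string())
}

async fn connect_db() -> Result<String, String> {
    sleep(Duration::from_millis(200)).await;
    Ok("database connected".to_string())
}

#[tokio::main]
async fn main() -> Result<(), String> {
    // Await both futures concurrently; fail fast if one returns Err.
    let (cfg, db) = tokio::try_join!(load_config(), connect_db())?;
    println!("{} / {}", cfg, db);
    Ok(())
}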

Once the first hurdles have been overcome, async programming in Rust is straightforward. You don't even have to implement the Future trait on your own types if you can move the code that should run concurrently into an async function. Both single-threaded and multi-threaded runtimes are available in Rust, so you can benefit from async programming even in embedded environments.

Take a look at how async-await works in Rust.

Image by: Opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Deploy applications using Foreman ACD

Wed, 10/12/2022 - 15:00
Deploy applications using Foreman ACD Maximilian Kolb Wed, 10/12/2022 - 03:00

When you manage your IT infrastructure using Foreman and Katello, the smallest unit to operate on is generally a host. You can provision hosts, deliver versioned content, and set configurations. Using Foreman ACD, you can use your Foreman instance to deploy applications consisting of multiple services spanning multiple hosts. This blog post briefly introduces the Foreman ACD plugin and explains how it can help you deploy a Prometheus and Grafana stack. If you want to know more about deploying an ELK stack consisting of an Elasticsearch cluster and Kibana, look at Deploying an ELK Cluster with Application Centric Deployment.

Introduction to Foreman and Katello

You can use Foreman and Katello to manage your IT infrastructure. Using Foreman generally starts with managing content. You can import content from upstream repositories, version and filter packages, mix repositories, and make it consumable for hosts. Next, you can provision hosts based on synchronized content. Using plugins, you can deploy to the cloud and on-premises solutions. The third step is to use configuration management tools, such as Ansible, to configure hosts. Configuration includes installing packages, creating users, specifying network settings, and more.


Altogether, the traditional way focuses on single hosts or groups of similar hosts. Most frequently, host details are shared using so-called host groups in Foreman. They contain provisioning and configuration information such as compute resources, Ansible roles, operating system, provisioning templates, parameters, and more. You can think of them as "blueprints" for new hosts. Deploying an additional host based on a host that you have already deployed using a host group is as easy as entering a valid hostname.

But what if you want to provide a more user-friendly way to deploy applications? What if your application relies on several services requiring one or more hosts? Enter Foreman ACD.

Foreman ACD to the rescue

Traditional deployments focus on individual hosts, which are provisioned and configured based on host groups. Foreman ACD, short for Application Centric Deployment, is a Foreman plugin to deploy applications. It's developed and maintained by ATIX AG and is completely open source.

Image by: Maximilian Kolb, CC BY-SA 4.0

The screenshot above shows how to deploy a Prometheus and Grafana cluster based on an Ansible playbook and a previously created application definition. For end users, deploying their application is as easy as entering host names and selecting the number of services as part of their application. For more information on the Prometheus and Grafana example, look at Deploying a Prometheus and Grafana Cluster Using Application Centric Deployment in the orcharhino blog.

What are the differences between host and application-centric approaches?

Both the traditional host-centric and the application-centric way share some procedures. They both start by preparing Foreman with your infrastructure, importing content, and creating necessary entities such as operating systems. After everything is ready, deployment and configuration information are bundled in host groups.

Here are two different approaches.

Host-centric approach
  1. Integrate Foreman into your infrastructure
  2. Import content
  3. Set up host groups
  4. Create hosts based on host groups
  5. Configure hosts using your automation software of choice (such as Ansible)
  6. Use configuration management to install software packages and configure services such as firewalls
Application-centric approach
  1. Integrate Foreman into your infrastructure
  2. Import content
  3. Set up host groups
  4. Fetch an ACD template consisting of an Ansible playbook and an application definition
  5. Create and deploy application instances.

Foreman ACD automates application deployments consisting of multiple services using an Ansible playbook and an application definition, which connects services to host groups and optionally defines host parameters. It requires the foreman_acd and smart_proxy_acd plugins, which are open source software. Packages are available at yum.theforeman.org.

Advantages of using Foreman ACD

Foreman ACD helps you to deploy complete applications with the click of a button. Foreman provisions hosts and automatically configures them after deployment. Each service is started on the defined group of hosts.

In terms of self-service, ACD helps you split users' responsibilities: You can assign the Application Centric Deployment Manager role to users that import the Ansible playbook and define the application definitions. End users with the Application Centric Deployment User role only have permission to deploy predefined application definitions. Note that end users can still, if allowed, set variables such as user accounts, ports, or the number of hosts per service in a predefined range.

Foreman ACD ensures a seamless deployment experience by handling inter-host connectivity. You can deploy multiple hosts simultaneously, all within a self-service-capable interface. This feature allows users with less technical knowledge or access rights to scale their applications vertically and/or horizontally.

Wrap up

If you have already configured Foreman and Katello to provision hosts and already have host groups bundling deployment and configuration information, using the Foreman ACD plugin is the next step to leverage your existing setup. You can conveniently deploy complete applications without connecting hosts manually.

Foreman ACD and Smart Proxy ACD are open source plugins for Foreman developed and maintained by ATIX AG. You can find the documentation at docs.theforeman.org > Application Centric Deployment. There are also several open source ACD playbooks, such as the ACD playbook for Elasticsearch cluster and Kibana and ACD playbook for Prometheus and Grafana. If you have questions, feedback, or suggestions, please open a thread on community.theforeman.org.

Our next ACD playbook helps you deploy Kubernetes. Follow the blog to read the upcoming announcement at orcharhino.com/news.

This demo explains how Foreman ACD can be used to deploy a Prometheus and Grafana stack.

Image by: Opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
