Open-source News

GCC 12 Compiler Optimization Tuning With The AMD Ryzen Threadripper 3990X

Phoronix - Tue, 07/19/2022 - 21:00
Following the recent discussions about -O3'ing the Linux kernel and other compiler optimizations, a request came in to see some fresh GCC compiler optimization benchmarks with the recently released GCC 12. So here is a fresh look at various GCC optimization levels up through -Ofast as well as with link-time optimizations (LTO) and "-march=native" tuning on the new GCC 12 with the mature AMD Ryzen Threadripper 3990X platform.

Intel Revs Its Linear Address Masking Patches For Linux

Phoronix - Tue, 07/19/2022 - 18:25
Added to Intel's documentation in late 2020 and initial kernel patches out since early 2021, Intel has been slowly working on Linear Address Masking (LAM) support for the Linux kernel. Out this past week was finally the latest iteration of this work for leveraging untranslated address bits of 64-bit linear addresses to be used for storing arbitrary software metadata...

Arm Working On Function Multi-Versioning For GCC

Phoronix - Tue, 07/19/2022 - 17:51
A feature supported by the GNU Compiler Collection (GCC) that sadly isn't used more often is function multi-versioning (FMV), which supports multiple versions of a function, with the version selected based on the target processor in use. GCC FMV on x86_64 allows different function implementations to be chosen depending on whether the CPU supports SSE4.2, AVX, or even a particular micro-architecture. Arm is finally working on GCC function multi-versioning support for AArch64...

Relaxed TLB Flushes Being Worked On For Linux As Another Performance Optimization

Phoronix - Tue, 07/19/2022 - 17:35
Nadav Amit has previously spearheaded work on reducing unnecessary TLB flushes, concurrent TLB flushes, and other low-level optimizations over the years. His latest work is on "relaxed" TLB flushes as another low-level performance improvement...

LightDM 1.32 Released With Various Fixes, Qt4 Support Finally Removed

Phoronix - Tue, 07/19/2022 - 17:17
Yesterday marked the release of LightDM 1.32 as the first official release of this display manager since 2019...

How to Install Wine 7.13 (Development Release) in Linux

Tecmint - Tue, 07/19/2022 - 15:00
The post How to Install Wine 7.13 (Development Release) in Linux first appeared on Tecmint: Linux Howtos, Tutorials & Guides .

Wine is a popular and powerful open source application for Linux that runs Windows-based applications and games on the Linux platform without any trouble. The WineHQ team recently announced a new development version.


Turn your Python script into a command-line application

opensource.com - Tue, 07/19/2022 - 15:00
Mark Meyer - Tue, 07/19/2022 - 03:00

I've written, used, and seen a lot of loose scripts in my career. They start with someone who needs to semi-automate some task. After a while, they grow. They can change hands many times in their lifetime. I've often wished those scripts felt more like proper command-line tools. But how hard is it really to bump the quality level from a one-off script to a proper tool? It turns out it's not that hard in Python.

Scaffolding

In this article, I start with a little Python snippet. I'll drop it into a PyScaffold-generated module and extend it with click to accept command-line arguments.

#!/usr/bin/python

from glob import glob
from os.path import join, basename
from shutil import move
from datetime import datetime
from os import link, unlink

LATEST = 'latest.txt'
ARCHIVE = '/Users/mark/archive'
INCOMING = '/Users/mark/incoming'
TPATTERN = '%Y-%m-%d'

def transmogrify_filename(fname):
    bname = basename(fname)
    ts = datetime.now().strftime(TPATTERN)
    return '-'.join([ts, bname])

def set_current_latest(file):
    latest = join(ARCHIVE, LATEST)
    try:
        unlink(latest)
    except FileNotFoundError:
        # the link may not exist yet
        pass
    link(file, latest)

def rotate_file(source):
    target = join(ARCHIVE, transmogrify_filename(source))
    move(source, target)
    set_current_latest(target)

def rotoscope():
    file_no = 0
    folder = join(INCOMING, '*.txt')
    print(f'Looking in {INCOMING}')
    for file in glob(folder):
        rotate_file(file)
        print(f'Rotated: {file}')
        file_no = file_no + 1
    print(f'Total files rotated: {file_no}')

if __name__ == '__main__':
    print('This is rotoscope 0.4.1. Bleep, bloop.')
    rotoscope()


All non-inline code samples in this article refer to a specific version of the code you can find at https://codeberg.org/ofosos/rotoscope. Every commit in that repo describes some meaningful step in the course of this how-to article.

This snippet does a few things:

  • Checks whether there are any text files in the path specified in INCOMING
  • If any exist, creates a new filename with the current timestamp and moves each file to ARCHIVE
  • Deletes the current ARCHIVE/latest.txt link and creates a new one pointing to the file just added

As an example, this is pretty small, but it gives you an idea of the process.

Create an application with pyscaffold

First, you need to install the PyScaffold, click, and tox Python modules.

$ python3 -m pip install pyscaffold click tox

After installing PyScaffold, change to the directory where the example rotoscope project resides, and then execute the following command:

$ putup rotoscope -p rotoscope \
--force --no-skeleton -n rotoscope \
-d 'Move some files around.' -l GLWT \
-u http://codeberg.org/ofosos/rotoscope \
--save-config --pre-commit --markdown

Pyscaffold overwrote my README.md, so restore it from Git:

$ git checkout README.md

Pyscaffold set up a complete sample project in the docs hierarchy, which I won't cover here, but feel free to explore it later. Besides that, Pyscaffold can also provide you with continuous integration (CI) templates for your project.

  • packaging: Your project is now PyPI enabled, so you can upload it to a repo and install it from there.
  • documentation: Your project now has a complete docs folder hierarchy, based on Sphinx and including a readthedocs.org builder.
  • testing: Your project can now be used with the tox test runner, and the tests folder contains all necessary boilerplate to run pytest-based tests.
  • dependency management: Both the packaging and test infrastructure need a way to manage dependencies. The setup.cfg file solves this and includes dependencies.
  • pre-commit hook: This includes the Python source formatter "black" and the "flake8" Python style checker.

Take a look into the tests folder and run the tox command in the project directory. It immediately outputs an error: the packaging infrastructure cannot find your package.

Now create a Git tag (for instance, v0.2) that the tool recognizes as an installable version. Before committing the changes, take a pass through the auto-generated setup.cfg and edit it to suit your use case. For this example, you might adapt the LICENSE and project descriptions. Add those changes to Git's staging area; I had to commit them with the pre-commit hook disabled, because otherwise I'd run into an error from flake8, the Python style checker, complaining about lousy style.

$ PRE_COMMIT_ALLOW_NO_CONFIG=1 git commit

It would also be nice to have an entry point into this script that users can call from the command line. Right now, you can only run it by finding the .py file and executing it manually. Fortunately, Python's packaging infrastructure has a nice "canned" way to make this an easy configuration change. Add the following to the options.entry_points section of your setup.cfg:

console_scripts =
    roto = rotoscope.rotoscope:rotoscope

This change creates a shell command called roto, which you can use to call the rotoscope script. Once you install rotoscope with pip, you can use the roto command.
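For reference, in setuptools' declarative config that snippet lives under an [options.entry_points] section header, so the relevant part of setup.cfg ends up looking like this:

```ini
[options.entry_points]
console_scripts =
    roto = rotoscope.rotoscope:rotoscope
```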

That's that. You have all the packaging, testing, and documentation setup for free from Pyscaffold. You also got a pre-commit hook to keep you (mostly) honest.

CLI tooling

Right now, there are values hardcoded into the script that would be more convenient as command arguments. The INCOMING constant, for instance, would be better as a command-line parameter.

First, import the click library. Annotate the rotoscope() method with the command annotation provided by Click, and add an argument that Click passes to the rotoscope function. Click provides a set of validators, so add a path validator to the argument. Click also conveniently uses the function's docstring as part of the command-line documentation. So you end up with the following method signature:

@click.command()
@click.argument('incoming', type=click.Path(exists=True))
def rotoscope(incoming):
    """
    Rotoscope 0.4 - Bleep, blooop.
    Simple sample that moves files.
    """

The main section calls rotoscope(), which is now a Click command. It doesn't need to pass any parameters.

Options can get filled in automatically by environment variables, too. For instance, change the ARCHIVE constant to an option:

@click.option('archive', '--archive', default='/Users/mark/archive', envvar='ROTO_ARCHIVE', type=click.Path())

The same path validator applies again. This time, let Click fill in the environment variable, defaulting to the old constant's value if nothing's provided by the environment.
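The precedence Click applies here is: value given on the command line, then the environment variable, then the default. A rough stdlib-only sketch of that lookup (the function name is made up for illustration):

```python
import os

def resolve_archive(cli_value=None,
                    envvar='ROTO_ARCHIVE',
                    default='/Users/mark/archive'):
    # Same precedence Click uses: command line > environment > default
    if cli_value is not None:
        return cli_value
    return os.environ.get(envvar, default)

print(resolve_archive(cli_value='/tmp/elsewhere'))  # CLI value wins
```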

Click can do many more things. It has colored console output, prompts, and subcommands that allow you to build complex CLI tools. Browsing through the Click documentation reveals more of its power.

Now add some tests to the mix.

Testing

Click has some advice on running end-to-end tests using the CLI runner. You can use this to implement a complete test (in the sample project, the tests are in the tests folder.)

The test sits in a method of a testing class. Most of the conventions follow what I'd use in any other Python project very closely, but there are a few specifics because rotoscope uses click. In the test method, I create a CliRunner. The test uses this to run the command in an isolated file system. Then the test creates incoming and archive directories and a dummy incoming/test.txt file within the isolated file system. Then it invokes the CliRunner just like you'd invoke a command-line application. After the run completes, the test examines the isolated filesystem and verifies that incoming is empty, and that archive contains two files (the latest link and the archived file.)

from os import listdir, mkdir
from click.testing import CliRunner
from rotoscope.rotoscope import rotoscope

class TestRotoscope:
    def test_roto_good(self, tmp_path):
        runner = CliRunner()

        with runner.isolated_filesystem(temp_dir=tmp_path) as td:
            mkdir("incoming")
            mkdir("archive")
            with open("incoming/test.txt", "w") as f:
                f.write("hello")

            result = runner.invoke(rotoscope, ["incoming", "--archive", "archive"])
            assert result.exit_code == 0

            print(td)
            incoming_f = listdir("incoming")
            archive_f = listdir("archive")
            assert len(incoming_f) == 0
            assert len(archive_f) == 2

To execute these tests, run tox in the project's root directory.

While implementing the tests, I found a bug in my code. When I did the Click conversion, rotoscope just unlinked the latest file, whether or not it was present. The tests started with a fresh file system (not my home folder) and promptly failed. I can prevent this kind of bug by running in a nicely isolated and automated test environment, which avoids a lot of "it works on my machine" problems.
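The robust version of that unlink is small. Since Python 3.8, pathlib can even express it without the try/except; the path below is a throwaway temp file for demonstration:

```python
import tempfile
from pathlib import Path

latest = Path(tempfile.mkdtemp()) / 'latest.txt'

# Safe whether or not the link already exists (missing_ok needs Python 3.8+)
latest.unlink(missing_ok=True)   # first run: nothing to remove, no error
latest.write_text('archived-file-1')
latest.unlink(missing_ok=True)   # later runs: removes the stale link
print('no FileNotFoundError raised')
```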

Scaffolding and modules

This completes our tour of the things you can do with PyScaffold and click. There are many ways to level up a casual Python script, turning even your simple utilities into full-fledged CLI tools.

With PyScaffold and click in Python, you can level up even a simple utility into a full-fledged command-line interface tool.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Code your first React UI app

opensource.com - Tue, 07/19/2022 - 15:00
Jessica Cherry - Tue, 07/19/2022 - 03:00

Who wants to create their first UI app? I do, and if you're reading this article, I assume you do, too. In today's example, I'll use some JavaScript along with the Express API I demonstrated in my previous article. First, let me explain some of the tech you're about to use.

What is React?

React is a JavaScript library for building a user interface (UI). However, you need more than just the UI library for a functional UI. Here are the important components of the JavaScript web app you're about to create:

  • npx: This package is for executing npm packages.
  • axios: A promise-based HTTP client for the browser and Node.js. A promise represents a value that will be supplied later, such as the response from an API endpoint.
  • http-proxy-middleware: Configures proxy middleware with ease. A proxy is middleware that helps deal with messaging back and forth from the application endpoint to the requester.

Preconfiguration

If you haven't already, look at my previous article. You'll use that code as part of this React app. In this case, you'll add a service to use as part of the app. As part of this application, you have to use the npx package to create the new folder structure and application:

$ npx create-react-app count-ui
npx: installed 67 in 7.295s

Creating a new React app in /Users/cherrybomb/count-ui.

Installing packages. This might take a couple of minutes.
Installing react, react-dom, and react-scripts with cra-template...
[...]
Installing template dependencies using npm...
+ @testing-library/jest-dom@5.16.4
+ @testing-library/user-event@13.5.0
+ web-vitals@2.1.4
+ @testing-library/react@13.3.0
added 52 packages from 109 contributors in 9.858s
[...]
Success! Created count-ui at /Users/cherrybomb/count-ui
[...]
We suggest that you begin by typing:

  cd count-ui
  npm start

As you can see, the npx command has created a new template with a folder structure, an awesome README file, and a Git repository. Here's the structure:

$ cd count-ui/
/Users/cherrybomb/count-ui

$ ls -A -1
.git
.gitignore
README.md
node_modules
package-lock.json
package.json
public
src

This process also initialized the Git repo and set the branch to master, which is a pretty cool trick. Next, install the npm packages:

$ npm install axios http-proxy-middleware
[...]
npm WARN @apideck/better-ajv-errors@0.3.4 requires a peer of ajv@>=8 but none is installed. You must install peer dependencies yourself.
+ http-proxy-middleware@2.0.6
+ axios@0.27.2
added 2 packages from 2 contributors, updated 1 package and audited 1449 packages in 5.886s

Now that those are set up, add your services, and main.js file:

$ mkdir src/services
src/services

$ touch src/services/main.js

Preconfiguration is now complete, so you can get to work on the code.

Code a UI from start to finish

Now that you have everything preconfigured, you can put together the service for your application. Add the following code to the main.js file:

import axios from 'axios';
const baseURL = 'http://localhost:5001/api';
export const get = async () => await axios.get(`${baseURL}/`);
export const increment = async () => await axios.post(`${baseURL}/`);
export default {
    get,
    increment
}

This process creates a JavaScript file that interacts with the API you created in my previous article.

Set up the proxy

Next, you must set up a proxy middleware by creating a new file in the src directory.

$ touch src/setupProxy.js

Configure the proxy with this code in setupProxy.js:

const { createProxyMiddleware } = require('http-proxy-middleware');
module.exports = function(app) {
  app.use(
    '/api',
    createProxyMiddleware({
      target: 'http://localhost:5000',
      changeOrigin: true,
    })
  );
};

In this code, the app.use function specifies the service to use as /api when connecting to the existing API project. However, nothing defines api in the code. This is where a proxy comes in. With a proxy, you can define the api path at the proxy level to interact with your Express API. This middleware relays requests between the two applications: because the UI and API use the same host with different ports, they require a proxy to transfer internal traffic.


In your base src directory, you see that the original template created an App.js, and you must add main.js (in the services directory) to your imports in the App.js file. You also need to import React on the very first line, as it is external to the project:

import React from 'react'
import main from './services/main';

Add the rendering function

Now that you have your imports, you must add a render function. In the App() function of App.js, add the first section of definitions for react and count before the return section. This section gets the count from the API and puts it on the screen. In the return function, a button provides the ability to increment the count.

function App() {
  const [count, setCount] = React.useState(0);
  React.useEffect(() => {
    async function fetchCount() {
      const newCount = (await main.get()).data.count;
      setCount(newCount);
    }
    fetchCount();
  }, [setCount]);
  return (
    <div className="App">
      <header className="App-header">
        <h4>
          {count}
        </h4>
        <button onClick={async () => {
          setCount((await main.increment()).data.count);
        }}>
          Increment
        </button>
      </header>
    </div>
  );
}

Before starting the application, confirm your API is running from the Express app by running node ./src/index.js. Then start and test the React app with npm run start. You should see the output below.

$ npm run start
> count-ui@0.1.0 start /Users/cherrybomb/count-ui
> react-scripts start

[HPM] Proxy created: /  -> http://localhost:5000
(node:71729) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
(Use `node --trace-deprecation ...` to show where the warning was created)
(node:71729) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
Starting the development server...

Once everything is running, open your browser to localhost:3000 (the React development server's default port) to see that the front end has a nice, admittedly minimal, page with a button:

Image by: (Jessica Cherry, CC BY-SA 4.0)

What happens when you press the button? (Or, in my case, press the button several times.)

Image by: (Jessica Cherry, CC BY-SA 4.0)

The counter goes up!

Congratulations, you now have a React app that uses your new API.

Web apps and APIs

This exercise is a great way to learn how to make a back end and a front end work together. It's worth noting that if you're using two hosts, you don't need the proxy section of this article. Either way, JavaScript and React are a quick, templated way to get a front end up and running with minimal steps. Hopefully, you enjoyed this walk-through. Tell us your thoughts on learning how to code in JavaScript.

Learn to make back-end and front-end development work together with this JavaScript tutorial.

Image by: CC BY 3.0 US Mapbox Uncharted ERG


Event-driven architecture explained in a coloring book

opensource.com - Tue, 07/19/2022 - 15:00
Seth Kenlon - Tue, 07/19/2022 - 03:00

"Explain it to me like I'm five."

When you want someone to get to the point as efficiently and as clearly as possible, that's what you say. Following that logic, you might be compelled to ponder the most powerful tool the average, everyday 5-year-old wields: coloring books. What better way than a coloring book to transform a droll slideshow presentation into a fun and educational journey?

That's what artists Máirín Duffy and Madeline Peck thought, anyway, and it has turned out to be accurate. In the past, Máirín has helped produce five open source coloring books explaining advanced topics including SELinux, Containers, Ansible, and more. It's a fun and easy way to learn about emerging technology, and you can either color your lessons yourself or hand the job to a resident specialist (an actual 5-year-old) for project completion.

The latest coloring book in the series is all about event driven architecture (EDA). As with all the previous coloring books, this one's not only free to download, but it's also open source. You can download the sources and assemble it yourself, or learn from the files so you can build your own about topics important to you.

Event-driven architecture is no small topic, so I sat down with Máirín and Madeline to discover how and why they took on the challenge.

Q: Presumably, you don't spend your days developing KNative serverless applications and pipelines. How do you learn so much about such a complex topic?

Máirín Duffy: I wrote the script for the coloring book. I have a lot of experience with OS-level technology, and I have experience working in teams that deploy applications as a service, but I do not have as much experience working with running and managing Kubernetes directly. And the concept of "serverless" was one I only knew of in passing.

Our colleague Kamesh Sampath gave a presentation he called Knative and the Three Dwarves. That gave us the idea to relate our story to Snow White. In fact, we used material from Kamesh's talk as the basic scope of the technologies and technical scenarios we wanted to cover. All of the coloring books use an analogy of some form to help readers new to the technology relate to it through concepts they are likely to already understand or be familiar with.

For the EDA coloring book, we used the familiar fairy tale of Snow White and the Seven Dwarves and the analogy of running a bakery to explain what it means to be serverless, and what the specific Kubernetes serverless components Tekton, Knative Serving, and Knative Eventing are and what they do.

In preparing to write a script for the book, I watched Kamesh's presentation, wrote out the questions I had, and met with Kamesh. He is a very gifted teacher and was able to answer all of my questions and help me feel comfortable about the subject matter. I formed an informal technical review board for the book. We have access to a lot of amazingly smart technology experts through Fedora and Red Hat, and they were excited about having a book like this available, so we got quite a few volunteers.

I bounced ideas off of them. I spent a lot of time pestering Langdon White, and we narrowed in on the concept of Snow White running a bakery and scenarios demonstrating auto-scaling (scaling the production of different baked goodies up and down based on the holidays), self-healing based on events (ordering new eggs when the supply is low), shutting down an app that isn't being used and spinning it up on demand (the cupcake decorator scenario), and rolling back issues in production (the poisoned apple detector).

I wrote up an initial draft, and then the technical review board reviewed it and provided a ton of suggestions and tweaks. We did another round, and I finalized the script so that Madeline could start illustrating.

Madeline Peck: That's where I come in. I was lucky: I was presented with the finished version of the script, so the coloring book taught me what I needed to know. The great technical writers who helped give feedback on the script and visuals correlating were a great help with this admittedly complex topic.

Máirín Duffy: And as Madeline completed storyboards, and then the initial draft of the fully illustrated book, we had a couple more technical board reviews to make sure it still all made sense.

Q: That's a lot more work than I realized. So how long does it take to create a coloring book?

Madeline Peck: This one took a lot longer because it was the first coloring book I had worked on. Mo has been churning them out for some time now, and has a great grasp on all the open source programs like Inkscape and Scribus that we use, as well as the connections and knowledge for topics that can be expanded upon in a simple but informative manner. This book started when I was an intern, and it's taught me a lot about each step in the process, as well as all the ways open source matters for projects like these.

Q: What tools do you use when you draw?

Madeline Peck: When I draw digitally, I use variations of different ink pens. But on paper, traditionally I use a color erase red pencil for sketching, a Pigma Micron 01 pen for inking (because it's water proof), and occasionally I add color with watercolors from Mijello.

Q: I don't work with physical materials often, and I don't have a kid to do the coloring in for me, but I'm enjoying using this as a digital coloring book. I've imported pages into Krita and it's given me the opportunity to experiment with different brushes and color mixing techniques.

Madeline Peck: I think Krita is a great coloring application! There's a great variety of brushes and tools. I used Krita for all the primary sketching for the frames in the coloring book. If people don't know, when you import PNGs into programs like Krita, you can set the layer mode with the image to multiply instead of normal. Then you can add a layer below it, and it's just like coloring in below the lines without the white background.

Q: Is it harder to draw things without considering color and shading? Does it feel incomplete to you?

Madeline Peck: I don't think so! There's a lot of gorgeous art in the world where artists only rely on line work. The weight of the lines, the way they interact — it's just another technique. It doesn't feel incomplete because I know there's going to be lots of people who are going to share pages of the book colored in their own way, which is really exciting!


Q: Who's this really meant for? Can people actually learn about going serverless from a coloring book?

Máirín Duffy: Another good question. We started this whole "coloring books to explain technology" thing when Dan Walsh came into my cube at Red Hat Westford almost 10 years ago and asked if I could draw him some illustrations for his SELinux dogfood analogy. He had come up with this analogy having had to explain how SELinux concepts worked repeatedly. He also found it to be an effective analogy in many presentations.

That coloring book was super basic compared to the EDA coloring book, but the bones are the same — making complex technology concepts less intimidating and more approachable with simple analogies and narrative. We have gotten overwhelming feedback over a long period of time that these coloring books have been very helpful in teaching about the technology. I've had customers tell me that they've been able to use specific coloring books to help explain the technology to their managers, and that they are a really non-intimidating way to get a good initial understanding.

Madeline Peck: I agree. The coloring books are meant for a variety of readers, with a wide range of prior knowledge on the subject. They can be used for people who have friends and family who work on serverless applications, for those working on the actual teams, or people who work adjacent to those developers.

Máirín Duffy: They also make a great handout on a conference expo floor, at talks, and even virtually as PDFs. Even if EDA isn't your thing, you can pick it up and your kids can have fun coloring the characters. I really do hope people can read this book and better understand what serverless is and that it could spark an interest for them to look more in depth into serverless and EDA processes.

Get your copy

I love that there are free and open source coloring books that appeal to both kids needing something fun to color in, and the older crowd looking for clear and simple explanations of complex tech topics.

A lot of creativity goes into making these coloring books, but as with most open source endeavours, it inspires yet more creativity once it's in the hands of users.

Grab your copy of the Event-driven Architecture coloring book today! Download the PDF directly here

Event-driven architecture is no small topic. A coloring book is a perfect way to explain its complexity in a friendly manner. Download this coloring book about event-driven architecture



Máirín is a principal interaction designer at Red Hat. She is passionate about software freedom and free & open source tools, particularly in the creative domain: her favorite application is Inkscape (http://inkscape.org).

