Open-source News

FEX-Emu 2210 Eyes Emulating AVX On Arm, Various Fixes

Phoronix - Thu, 10/13/2022 - 20:10
FEX-Emu, the open-source project aiming to run x86/x86_64 games and other software speedily on Arm AArch64, including the likes of Steam, has issued its newest feature update. FEX-Emu 2210 is today's new release of this binary emulator and continues a nice trajectory toward running x86 64-bit binaries on modern Arm Linux systems...

Debian 14 Codenamed "Forky"

Phoronix - Thu, 10/13/2022 - 18:10
The upcoming Debian GNU/Linux 12 release is codenamed "Bookworm" and is expected to be released in 2023. Meanwhile, Debian 13 will be out around 2025 and was already announced under the "Trixie" codename. Now it's been announced that Debian 14, due around 2027, will be known as the "Forky" release...

RadeonSI Driver Lands Multi-Slice Video Encoding For AVC/HEVC

Phoronix - Thu, 10/13/2022 - 17:00
AMD has recently been on a streak of improving its open-source video acceleration capabilities for the RadeonSI Gallium3D driver...

LoongArch Picks Up New CPU Capabilities With Linux 6.1

Phoronix - Thu, 10/13/2022 - 16:00
While initial LoongArch CPU support was merged in Linux 5.19, it was still in an immature state, and since then missing features and functionality have continued to be ironed out. Linux 6.0 brought LoongArch PCI support and other changes, while Linux 6.1 brings additional features for this Chinese CPU architecture derived from MIPS64 and some elements of RISC-V...

Linux Directory Structure and Important Files Paths Explained

Tecmint - Thu, 10/13/2022 - 15:19

Brief: This article gives a breakdown of the Linux file system/directory structure, some of the critical files, their usability, and their locations. You have probably heard that everything is considered a file in Linux...

The post Linux Directory Structure and Important Files Paths Explained first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

Picolibc 1.7.9 Adds Support For More CPU Targets

Phoronix - Thu, 10/13/2022 - 15:00
Longtime open-source developer Keith Packard has announced the release of Picolibc 1.7.9, the newest version of his C library for embedded systems. Picolibc 1.7.9 adds support for several new CPU architectures and other enhancements for his miniature libc implementation...

What you need to know about compiling code

opensource.com - Thu, 10/13/2022 - 15:00
By Alan Smithee

Source code must be compiled in order to run, and in open source software everyone has access to source code. Whether you've written code yourself and you want to compile and run it, or whether you've downloaded somebody's project to try it out, it's useful to know how to process source code through a compiler, and also what exactly a compiler does with all that code.

Build a better mousetrap

We don't usually think of a mousetrap as a computer, but believe it or not, it does share some similarities with the CPU running the device you're reading this article on. The classic (non-cat) mousetrap has two states: it's either set or released. You might consider that on (the kill bar is set and stores potential energy) and off (the kill bar has been triggered). In a sense, a mousetrap is a computer that calculates the presence of a mouse. You might imagine this code, in an imaginary language, describing the process:

if mousetrap == 0 then
    There's a mouse!
else
    There's no mouse yet.
end

In other words, you can derive mouse data based on the state of a mousetrap. The mousetrap isn't foolproof, of course. There could be a mouse next to the mousetrap, and the mousetrap would still be registered as on because the mouse has not yet triggered the trap. So the program could use a few enhancements, but that's pretty typical.

Switches

A mousetrap is ultimately a switch. You probably use a switch to turn on the lights in your house. A lot of information is stored in these mechanisms. For instance, people often assume that you're at home when the lights are on.

You could program actions based on the activity of lights on in your neighborhood. If all lights are out, then turn down your loud music because people have probably gone to bed.

A CPU uses the same logic, multiplied by several orders of magnitude and shrunk to a microscopic level. When a CPU receives an electrical signal at a specific register, then some other register can be tripped, and then another, and so on. If those registers are made to be meaningful, then there's communication happening. Maybe a chip somewhere on the same motherboard becomes active, or an LED lights up, or a pixel on a screen changes color.


What comes around goes around. If you really want to detect a rodent in more places than the one spot you happen to have a mousetrap set, you could program an application to do just that. With a webcam and some rudimentary image recognition software, you could establish a baseline of what an empty kitchen looks like and then scan for changes. When a mouse enters the kitchen, there's a shift in the pixel values where there was previously no mouse. Log the data, or better yet trigger a drone that focuses in on the mouse, captures it, and moves it outside. You've built a better mousetrap through the magic of on and off signals.

Compilers

A code compiler translates human-readable code into a machine language that speaks directly to the CPU. It's a complex process because CPUs are legitimately complex (even more complex than a mousetrap), but also because the process is more flexible than it strictly "needs" to be. Not all compilers are this flexible, though. Some compilers have exactly one target, and they only accept code files in a specific layout, so for them the process is relatively straightforward.

Luckily, modern general-purpose compilers aren't simple. They allow you to write code in a variety of languages, and they let you link libraries in different ways, and they can target several different architectures. The GNU C Compiler (GCC) has over 50 lines of options in its --help output, and the LLVM clang compiler has over 1000 lines in its --help output. The GCC manual contains over 100,000 words.
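If you want a rough sense of that scale for yourself, you can count those lines and words directly. This quick check isn't part of the article's demonstration; it just assumes gcc, clang, and the GCC man page are installed:

$ gcc --help | wc -l
$ clang --help | wc -l
$ man gcc | wc -w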

You have lots of options when you compile code.

Of course, most people don't need to know all the possible options. There are sections in the GCC man page I've never read, because they're for Objective-C or Fortran or chip architectures I've never even heard of. But I value the ability to compile code for several different architectures, for 64-bit and 32-bit, and to run open source software on computers the rest of the industry has left behind.

The compilation lifecycle

Just as importantly, there's real power to understanding the different stages of compiling code. Here's the lifecycle of a simple C program:

  1. C source with macros (.c) is preprocessed with cpp to render an .i file.

  2. C source code with expanded macros (.i) is translated with gcc to render an .s file.

  3. A text file in Assembly language (.s) is assembled with as into an .o file.

  4. Binary object code with instructions for the CPU, and with offsets not tied to memory areas relative to other object files and libraries (*.o) is linked with ld to produce an executable.

  5. The final binary file either has all required objects within it, or it's set to load linked dynamic libraries (*.so files).

And here's a simple demonstration you can try (with some adjustment for library paths):

$ cat << EOF > hello.c
#include <stdio.h>

int main(void)
{
    printf("hello world\n");
    return 0;
}
EOF
$ cpp hello.c > hello.i
$ gcc -S hello.i
$ as -o hello.o hello.s
$ ld -static -o hello \
    -L/usr/lib64/gcc/x86_64-slackware-linux/5.5.0/ \
    /usr/lib64/crt1.o /usr/lib64/crti.o hello.o \
    /usr/lib64/crtn.o --start-group -lc -lgcc \
    -lgcc_eh --end-group
$ ./hello
hello world

Attainable knowledge

Computers have become amazingly powerful, and pleasantly user-friendly. Don't let that fool you into believing either of the two possible extremes: computers aren't as simple as mousetraps and light switches, but they also aren't beyond comprehension. You can learn about compiling code, about how to link it, and about compiling for a different architecture. Once you know that, you can debug your code better. You can understand the code you download. You may even fix a bug or two. Or, in theory, you could build a better mousetrap. Or a CPU out of mousetraps. It's up to you.



Asynchronous programming in Rust

opensource.com - Thu, 10/13/2022 - 15:00
By Stephan Avenwedde

Asynchronous programming: incredibly useful but difficult to learn. You can't avoid async programming if you want to create a fast and reactive application. Applications with a large amount of file or network I/O, or with a GUI that should always stay responsive, benefit tremendously from async programming. Tasks can be executed in the background while the user continues to provide input. Async programming is possible in many languages, each with its own style and syntax, and Rust is no exception. In Rust, this feature is called async-await.

While async-await has been an integral part of Rust since version 1.39.0, most applications depend on community crates for a runtime. In Rust, apart from a larger binary, async-await comes at zero cost. This article gives you an insight into asynchronous programming in Rust.

Under the hood

To get a basic understanding of async-await in Rust, you literally start in the middle.

The center of async-await is the future trait, which declares the method poll (I cover this in more detail below). If a value can be computed asynchronously, the related type should implement the future trait. The poll method is called repeatedly until the final value is available.

At this point, you could repeatedly call the poll method from your synchronous application manually in order to get the final value. However, since I'm talking about asynchronous programming, you can hand over this task to another component: the runtime. So before you can make use of the async syntax, a runtime must be present. I use the runtime from the tokio community crate in the following examples.

A handy way of making the tokio runtime available is to use the #[tokio::main] macro on your main function:

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    println!("Start!");
    sleep(Duration::from_secs(1)).await;
    println!("End after 1 second");
}

When the runtime is available, you can now await futures. Awaiting means that execution stops at that point until the future has completed. The .await call causes the runtime to invoke the poll method, which drives the future to completion.

In the above example, tokio's sleep function returns a future that finishes when the specified duration has passed. By awaiting this future, the related poll method is repeatedly called until the future completes. Furthermore, the main() function also returns a future because of the async keyword before the fn.

So if you see a function marked with async:

async fn foo() -> usize { /**/ }

Then it is just syntactic sugar for:

fn foo() -> impl Future<Output = usize> { async { /**/ } }

Pinning and boxing

To remove some of the shrouds and clouds of async-await in Rust, you must understand pinning and boxing.

If you are dealing with async-await, you will relatively quickly stumble over the terms boxing and pinning. Since I find that the available explanations on the subject are rather difficult to understand, I have set myself the goal of explaining the issue more simply.

Sometimes it is necessary to have objects that are guaranteed not to be moved in memory. This comes into effect when you have a self-referential type:

struct MustBePinned {
    a: i16,
    b: *const i16, // intended to point to `a` of the same instance
}

If member b is a reference (pointer) to member a of the same instance, then reference b becomes invalid when the instance is moved because the location of member a has changed but b still points to the previous location. You can find a more comprehensive example of a self-referential type in the Rust Async book. All you need to know now is that an instance of MustBePinned should not be moved in memory. Types like MustBePinned do not implement the Unpin trait, which would allow them to move within memory safely. In other words, MustBePinned is !Unpin.

Back to the future: By default, a future is also !Unpin; thus, it should not be moved in memory. So how do you handle those types? You pin and box them.

The Pin type wraps pointer types, guaranteeing that the values behind the pointer won't be moved. The Pin type ensures this by not providing a mutable reference of the wrapped type. The type will be pinned for the lifetime of the object. If you accidentally pin a type that implements Unpin (which is safe to move), it won't have any effect.

In practice: If you want to return a future (!Unpin) from a function, you must box it. Using Box causes the type to be allocated on the heap instead of the stack and thus ensures that it can outlive the current function without being moved. In particular, if you want to hand over a future, you can only hand over a pointer to it, as the future must be of type Pin<Box<dyn Future>>.
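Here is a minimal sketch of that idea (the function make_greeting and its body are invented for illustration, not taken from the article): the async block is boxed so that it lives on the heap, and Pin guarantees it won't be moved afterwards.

use std::future::Future;
use std::pin::Pin;

// Box::pin allocates the async block on the heap and pins it there,
// so the caller only ever holds a pointer and the future is never moved.
fn make_greeting(name: String) -> Pin<Box<dyn Future<Output = String>>> {
    Box::pin(async move { format!("hello, {}", name) })
}

The value returned by make_greeting can be stored, passed around, or awaited like any other future.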

Using async-await, you will certainly stumble upon this boxing and pinning syntax. To wrap this topic up, you just have to remember this:

  • Rust does not know whether a type can be safely moved.
  • Types that shouldn't be moved must be wrapped inside Pin.
  • Most types are Unpin types. They implement the trait Unpin and can be freely moved within memory.
  • If a type is wrapped inside Pin and the wrapped type is !Unpin, it is not possible to get a mutable reference out of it.
  • Futures created by the async keyword are !Unpin and thus must be pinned.
Future trait

In the future trait, everything comes together:

pub trait Future {
    type Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

Here is a simple example of how to implement the future trait:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct MyCounterFuture {
    cnt: u32,
    cnt_final: u32
}

impl MyCounterFuture {
    pub fn new(final_value: u32) -> Self {
        Self {
            cnt: 0,
            cnt_final: final_value
        }
    }
}

impl Future for MyCounterFuture {
    type Output = u32;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        self.cnt += 1;
        if self.cnt >= self.cnt_final {
            println!("Counting finished");
            return Poll::Ready(self.cnt_final);
        }

        cx.waker().wake_by_ref();
        Poll::Pending
    }
}

#[tokio::main]
async fn main() {
    let my_counter = MyCounterFuture::new(42);

    let final_value = my_counter.await;
    println!("Final value: {}", final_value);
}

In this manual implementation of the future trait, the future is initialized with a value up to which it shall count, stored in cnt_final. Each time the poll method is invoked, the internal value cnt is incremented by one. If cnt is less than cnt_final, the future signals the runtime's waker that it is ready to be polled again and returns Poll::Pending to indicate that it has not completed yet. Once cnt reaches cnt_final, the poll function returns Poll::Ready, signaling that the future has completed and providing the final value.


This is just a simple example, and of course, there are other things to take care of. If you consider creating your own futures, I highly suggest reading the chapter Async in depth in the documentation of the tokio crate.

Wrap up

Before I wrap things up, here is some additional information that I consider useful:

  • Create a new pinned and boxed type using Box::pin (see the sketch after this list).
  • The futures crate provides the type BoxFuture, which lets you use a future as the return type of a function.
  • The async_trait crate allows you to define async functions in traits (which is currently not allowed in plain Rust).
  • The pin-utils crate provides macros for pinning values.
  • tokio's try_join! macro (a)waits on multiple futures that return a Result.
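
Here is a minimal sketch tying the first two points together (the function compute_later and its body are made up for illustration, and it assumes the futures and tokio crates are listed in Cargo.toml). BoxFuture<'static, u32> is simply a shorthand for the pinned, boxed trait object discussed above:

use futures::future::BoxFuture;

// Box::pin creates the pinned, boxed future; BoxFuture names that type
// so it can be written as the return type of an ordinary function.
fn compute_later(x: u32) -> BoxFuture<'static, u32> {
    Box::pin(async move { x * 2 })
}

#[tokio::main]
async fn main() {
    println!("{}", compute_later(21).await); // prints 42
}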

Once the first hurdles have been overcome, async programming in Rust is straightforward. You don't even have to implement the future trait for your own types if you can move the code that should run concurrently into an async function. In Rust, single-threaded and multi-threaded runtimes are available, so you can benefit from async programming even in embedded environments.



VirtIO-GPU Venus Driver Now Exposes Vulkan 1.3

Phoronix - Thu, 10/13/2022 - 08:24
Mesa's Venus driver, created by Google and merged last year, provides Vulkan support for VirtIO-GPU. As of yesterday, the Venus driver has moved on to exposing Vulkan 1.3 capabilities...

Linux 6.1 Drops Old Driver For High Speed Serial / TTY Over IEEE-1394 Firewire

Phoronix - Thu, 10/13/2022 - 03:00
The staging changes for Linux 6.1 aren't particularly notable, but part of the code churn is lightening the kernel a bit by dropping the old "fwserial" driver that allowed for TTY support over IEEE-1394 Firewire connections...
