Open-source News

The First RISC-V Laptop Announced With Quad-Core CPU, 16GB RAM, Linux Support

Phoronix - Sat, 07/02/2022 - 01:55
RISC-V International has relayed word to us that DeepComputing and Xcalibyte in China have announced pre-orders for the first RISC-V laptop intended for developers. The "ROMA" development platform features a quad-core RISC-V processor, up to 16GB of RAM, up to 256GB of storage, and should work with most RISC-V Linux distributions...

How Microservices Work Together

The Linux Foundation - Fri, 07/01/2022 - 22:03

The article originally appeared on the Linux Foundation’s Training and Certification blog. The author is Marco Fioretti. If you are interested in learning more about microservices, consider some of our free training courses, including Introduction to Cloud Infrastructure Technologies, Building Microservice Platforms with TARS, and WebAssembly Actors: From Cloud to Edge.

Microservices allow software developers to design highly scalable, highly fault-tolerant internet-based applications. But how do the microservices of a platform actually communicate? How do they coordinate their activities or know who to work with in the first place? Here we present the main answers to these questions, and their most important features and drawbacks. Before digging into this topic, you may want to first read the earlier pieces in this series, Microservices: Definition and Main Applications, APIs in Microservices, and Introduction to Microservices Security.

Tight coupling, orchestration and choreography

When every microservice can and must talk directly with all its partner microservices, without intermediaries, we have what is called tight coupling. The result can be very efficient, but it makes every microservice more complex and harder to change or scale. Besides, if one of the microservices breaks, everything breaks.

The first way to overcome these drawbacks of tight coupling is to have one central controller of all, or at least some, of a platform's microservices, making them work synchronously, just like the conductor of an orchestra. In this orchestration – also called the request/response pattern – it is the conductor that issues requests, receives the answers and then decides what to do next: whether to send further requests to other microservices, or to pass the results of that work to external users or client applications.
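
To make the request/response idea concrete, here is a minimal Python sketch of a conductor calling two services in sequence; the service names and logic are purely illustrative stand-ins for what would normally be HTTP or RPC calls to separate processes.

    # Hypothetical "conductor": it calls each service synchronously and
    # decides the next step from the answers it receives.

    def inventory_service(order):
        # Stand-in for a remote microservice call.
        return {"in_stock": order["quantity"] <= 10}

    def payment_service(order):
        # Stand-in for a remote microservice call.
        return {"charged": True, "amount": order["quantity"] * 9.99}

    def orchestrator(order):
        stock = inventory_service(order)      # request/response, step 1
        if not stock["in_stock"]:
            return {"status": "rejected", "reason": "out of stock"}
        payment = payment_service(order)      # request/response, step 2
        return {"status": "confirmed", "receipt": payment}

    print(orchestrator({"item": "widget", "quantity": 3}))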

The complementary approach to orchestration is the decentralized architecture called choreography. This consists of multiple microservices that work independently, each with its own responsibilities, but like dancers in the same ballet. In choreography, coordination happens without central supervision, via messages flowing among the microservices according to common, predefined rules.

That exchange of messages, as well as the discovery of which microservices are available and how to talk with them, happens via event buses. These are software components with well-defined APIs to subscribe to events, unsubscribe from them, and publish them. Event buses can be implemented in several ways, exchanging messages using standards such as XML, SOAP or the Web Services Description Language (WSDL).

When a microservice emits a message on a bus, all the microservices that subscribed to the corresponding event see it and know if and how to answer it asynchronously, each on its own, in no particular order. In this event-driven architecture, all a developer must code into a microservice to make it interact with the rest of the platform is the subscription commands for the event buses on which it should generate events or wait for them.
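
To make the subscribe/publish contract concrete, here is a minimal, in-process Python sketch; a real platform would use a message broker or a managed event bus instead, and the event name and handlers below are purely illustrative.

    from collections import defaultdict

    class EventBus:
        # Minimal in-process stand-in for a real event bus.
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, event_name, handler):
            self._subscribers[event_name].append(handler)

        def publish(self, event_name, payload):
            # Every subscriber to this event sees it and reacts on its own
            # (asynchronously, in a real deployment).
            for handler in self._subscribers[event_name]:
                handler(payload)

    bus = EventBus()
    bus.subscribe("user.logged_in", lambda event: print("audit service records", event))
    bus.subscribe("user.logged_in", lambda event: print("mail service greets", event["user"]))
    bus.publish("user.logged_in", {"user": "alice"})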

Orchestration or Choreography? It depends

The two most popular coordination choices for microservices are choreography and orchestration, whose fundamental difference is where they place control: one distributes it among peer microservices that communicate asynchronously, the other concentrates it in one central conductor that keeps everybody else in line.

Which is better depends upon the characteristics, needs and real-world usage patterns of each platform, with maybe just two rules that apply in all cases. The first is that actual tight coupling should almost always be avoided, because it goes against the very idea of microservices. Loose coupling with asynchronous communication is a far better match for the fundamental advantages of microservices, namely independent deployment and maximum scalability. The real world, however, is a bit more complex, so let’s spend a few more words on the pros and cons of each approach.

As far as orchestration is concerned, its main disadvantage may be that centralized control often is, if not a synonym for, at least a shortcut to a single point of failure. A much more frequent disadvantage of orchestration is that, since the microservices and the conductor may sit on different servers or clouds, connected only through the public Internet, performance may suffer, more or less unpredictably, unless connectivity is really excellent. At another level, with orchestration virtually any addition of microservices, or change to their workflows, may require changes to many parts of the platform, not just the conductor. The same applies to failures: when an orchestrated microservice fails, there will generally be cascading effects, such as other microservices left waiting for orders only because the conductor is temporarily stuck waiting for answers from the failed one. On the plus side, exactly because the “chain of command” and communication paths are well defined and not particularly flexible, it will be relatively easy to find out what broke and where. For the very same reason, orchestration facilitates independent testing of distinct functions. Consequently, orchestration may be the way to go whenever the communication flows inside a microservice-based platform are well defined and relatively stable.

In many other cases, choreography may provide the best balance between independence of individual microservices, overall efficiency and simplicity of development.

With choreography, a service must only emit events, that is, notifications that something happened (e.g., a log-in request was received), and all its downstream microservices must only react to them, autonomously. Therefore, changing a microservice has no impact on the ones upstream. Even adding or removing microservices is simpler than it would be with orchestration. The flip side of this coin is that, at least if one goes for it without taking precautions, it creates more chances for things to go wrong, in more places, and in ways that are harder to predict, test or debug. Throwing messages out onto the Internet and counting on everything being fine, without any way to know whether all their recipients got them and were able to react in the right way, can make life very hard for system integrators.

Conclusion

Certain workflows are by their own nature highly synchronous and predictable. Others aren’t. This means that many real-world microservice platforms could, and probably should, mix both approaches to obtain the best combination of performance and resistance to faults or peak loads. This is because temporary peak loads – which may be best handled with choreography – may happen only in certain parts of a platform, while the faults with the most serious consequences, for which tighter orchestration could be safer, may happen only in others (e.g., purchases of single products by end customers, versus orders to buy the same products in bulk to restock the warehouse). For system architects, maybe the worst that can happen is to design an architecture that is either orchestration or choreography without being really conscious of which one it is (maybe because they are just porting a pre-existing, monolithic platform to microservices), thus getting nasty surprises when something goes wrong, or when new requirements turn out to be much harder than expected to design or test. Which leads to the second of the two general rules mentioned above: don’t even start to choose between orchestration and choreography for your microservices before having the best possible estimate of what their real-world loads and communication needs will be.

The post How Microservices Work Together appeared first on Linux Foundation.

HP Dev One With Ryzen 7 PRO 5850U Competes Well Against Intel's Core i7 1280P "Alder Lake P" On Linux

Phoronix - Fri, 07/01/2022 - 20:48
In my review last month of the HP Dev One laptop, powered by an AMD Ryzen 7 PRO 5850U and running Pop!_OS, I benchmarked it against various laptops I had locally with both AMD and Intel CPUs, including the very common Tiger Lake SoCs. At the time I didn't have any newer Alder Lake P laptops, but now with a Core i7 1280P laptop in hand, here is a look at how that AMD Cezanne Linux laptop competes with Intel's brand new Alder Lake P SoCs and their flagship Core i7 1280P.

DBA at QES - IT-Online

Google News - Fri, 07/01/2022 - 20:41

XWayland "Rootfull" Changes Merged For Running A Complete Desktop Environment

Phoronix - Fri, 07/01/2022 - 18:00
While XWayland is normally used just for running root-less single applications like games within an otherwise native Wayland desktop, new patches from Red Hat that have been merged into the X.Org Server enhance XWayland's existing "root-full" mode of operation for allowing entire desktop environments and window managers to nicely function within the context of XWayland...

New Activity Around Adapting ACO Compiler Back-End For RadeonSI

Phoronix - Fri, 07/01/2022 - 17:08
As part of the work on the Mesa Radeon Vulkan "RADV" driver, Valve engineers developed the "ACO" compiler back-end that is now used by default for RADV and has shown to deliver better performance at least for RADV than using AMD's official AMDGPU LLVM shader compiler back-end. There has long been talk about adding ACO support to RadeonSI while in recent weeks there has been new code activity on that front...

Rust For Linux, -O3'ing The Kernel & Other Highlights From June

Phoronix - Fri, 07/01/2022 - 16:57
During the past month there was a lot of exciting Linux kernel activity, the launch of the HP Dev One, never-ending open-source graphics driver advancements, and much more -- in addition to marking Phoronix turning 18 years old. Here is a look back at the June highlights...

Intel Releases libva 2.15 Video Acceleration Library

Phoronix - Fri, 07/01/2022 - 16:39
Intel on Friday released libva 2.15 as the newest update to the open-source Video Acceleration API (VA-API) library used on modern systems for GPU-accelerated video decoding...

How to Install Packages on RHEL 8 Locally Using DVD ISO

Tecmint - Fri, 07/01/2022 - 15:54

Often we want to have a local repository for our RHEL 8 system so we can install packages without internet access for extra safety, and using the RHEL 8 ISO is the easiest way to do that.

The post How to Install Packages on RHEL 8 Locally Using DVD ISO first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

Why I switched from Apple Music to Jellyfin and Raspberry Pi

opensource.com - Fri, 07/01/2022 - 15:00
By DJ Billings, Fri, 07/01/2022 - 03:00

One day earlier this year, I looked up a song in my Mac's music library that's been there since 2001. I received an error message, "This song is not currently available in your country or region." I thought this might be just a glitch on my iPhone, so I tried the desktop app. No go. I opened up my media drive, and there was the music file. To check if it played, I hit the spacebar, and it began to play immediately. Hrmph. I have the file, I thought. Why won't the Music app play it?

(Image by DJ Billings, CC BY-SA 4.0)

After some digging, I found other users with similar issues. To sum up, it seems that Apple decided that it owned some of my songs, even though I ripped this particular song to an MP3 from my own CD in the late 1990s.

To be clear, I'm not an Apple Music subscriber. I'm referring to the free "music" app that used to be called iTunes. I gave Apple Music a go when it first launched but quickly abandoned it. They decided to replace my previously owned songs with their DRM versions. In fact, I believe that's where my messed-up music troubles began. Since then, I've been bombarded with pushy Apple notifications trying to steer me back into becoming an Apple Music subscriber.

The sales notifications were annoying, but this suddenly unplayable song was unacceptable. I knew there had to be a better way to manage my music, one that put me in control of the music and movie files I already owned.

Searching for a new open source media solution

After this incident, I naturally took to social media to air my grievances. I also made a short list of needs I had for what I thought was the ideal solution:

  • It needs to be open source and run on Linux.
  • I want to run it on my own server, if possible.
  • It should be free (as in beer) if possible.
  • I want the ability to control how the media is organized.
  • I want to be able to watch my movies on my TV as well as listen to music.
  • It should work from home (WiFi) and over the internet.
  • It should be cross-platform accessible (Linux, Mac OS, Windows, Android, iOS).

A tall order, I know. I wasn't sure I'd get everything I wanted, but I thought aiming for the stars was better than settling for something quick and easy. A few people suggested Jellyfin, so I decided to check it out, but without much optimism considering the number of rabbit holes I'd already been down.

What I discovered was unbelievable. Jellyfin fulfilled every item on my list. Better still, I found that I could use it with my Raspberry Pi. I jumped onboard the Jellyfin train and haven't looked back.

Raspberry Pi and Jellyfin are the perfect combination

I will describe what I did, but this is not intended to be a complete tutorial. Believe me when I say that if I can do it, so can you.

Raspberry Pi 4

I used a Raspberry Pi 4 Model B with 4GB of RAM. The SD card is 128GB, which is more than I need. The Pi 4 has WiFi but it's connected to my router using ethernet, so there's less lag.

One of the things I love about the Raspberry Pi is the ability to swap out the entire OS and storage by slipping in a new SD card. You can switch back in a few seconds if the OS doesn't suit you.

Western Digital Elements 2 TB external SSD

Since all of my media won't fit on a 128GB SD card, an external drive was essential. I also like having my media on a drive separate from my OS. I previously used a 2TB external HD from Seagate that worked fine. I was trying to keep my budget low, but I also wanted an SSD, one with a small footprint this time. The Western Digital drive is tiny, fast, and perfect. To work with the Raspberry Pi, I had to format the drive as exFAT and add a package to help the Pi mount it.

Jellyfin

I can't say enough good things about Jellyfin. It ticks all the boxes for me. It's open source, 100% free, has no central server, data collection, or tracking. It also plays all of the music, movies, and TV shows I have on my drive.

There are clients for just about every platform, or you can listen or view in your web browser. Currently, I'm listening to my music on the app for Debian and Ubuntu and it works great.

(Image by DJ Billings, CC BY-SA 4.0)

Setting up Jellyfin

Many people, more brilliant than I, have created detailed instructions on Jellyfin's setup, so I would rather point to their work. Plus, Jellyfin has excellent documentation. But I'll lay out the basics, so you know what to expect if you want to do this yourself.

Command-line

First, you'll need to be confident using the terminal to write commands or be willing to learn. I encourage trying it because I've become highly skilled and confident in Bash just by doing this project.

File organization

It's a good idea to have your media files well-organized before you start. Changing things later is possible, but you'll have fewer issues with Jellyfin recognizing your files if they're categorized well.
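
For what it is worth, a layout along these lines, with one folder per movie, artist and show (all names below are purely illustrative), is the kind of structure Jellyfin tends to pick up cleanly:

    Media/
      Movies/
        The Big Movie (2010)/
          The Big Movie (2010).mkv
      Music/
        Artist Name/
          Album Name/
            01 - Track Title.mp3
      Shows/
        Show Name/
          Season 01/
            Show Name S01E01.mkv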

Jellyfin uses the MusicBrainz and AudioDb databases to recognize your files and I've found very few errors. Seeing the covers for movies and music populate after it finds your catalog is very satisfying. I've had to upload my artwork a few times, but it's an easy process. You can also replace the empty or generic category images with your own art.

Users

You can add users and adjust their level of control. For example, in my family, I'm the only one with the ability to delete music. There are also parental controls available.

Process and resources

Here's the general process and some of the resources I used to set up my Raspberry Pi media server using Jellyfin:

  1. Install the OS of your choice on your Pi.

  2. Install Jellyfin on your Pi.

  3. If you're using a big external drive for storage, format it so that it uses a file system usable by your Pi, but also convenient for you. I've found exFAT to be the easiest file system to use across all the major platforms.

  4. Configure the firewall on your Pi so that other computers can access the Jellyfin library (a quick reachability check is sketched just after this list).

  5. On your personal computer, install the Jellyfin Media Player client.
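
As a rough illustration of step 4, here is a small Python sketch (not part of Jellyfin itself) that you could run from another machine to confirm the server answers. The hostname is a hypothetical placeholder and 8096 is Jellyfin's usual default HTTP port, so adjust both to match your own setup.

    # Quick reachability check, run from another computer on the LAN.
    # HOST is a made-up name for the Pi; change it and PORT as needed.
    import urllib.request

    HOST = "raspberrypi.local"
    PORT = 8096  # Jellyfin's usual default HTTP port

    try:
        with urllib.request.urlopen(f"http://{HOST}:{PORT}/", timeout=5) as resp:
            print("Jellyfin answered with HTTP status", resp.status)
    except OSError as err:
        print("Could not reach Jellyfin:", err)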

Breaking away

Whenever someone finds an open source solution, an angel gets its wings. The irony is that I was pushed into finding a non-proprietary solution by one of the biggest closed source companies on the planet. What I love most about the system I've created is that I am in control of all aspects of it, good and bad.

Jellyfin fulfills everything on my media library wishlist, making it the ideal open source alternative to Apple Music and other proprietary software tools.

(Image by WOCinTech Chat, modified by Opensource.com, CC BY-SA 4.0)

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
