Open-source News

How to Download and Install RHEL 9 for Free

Tecmint - Wed, 06/01/2022 - 15:07

Red Hat Enterprise Linux 9 (RHEL 9), code-named Plow, is now generally available (GA). Red Hat made the announcement on the 18th of May 2022. The GA release takes over from the earlier Beta release.

The post How to Download and Install RHEL 9 for Free first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

RISC-V With Linux 5.19 Allows Running RV32 32-bit Binaries On RV64, Adds Svpbmt

Phoronix - Wed, 06/01/2022 - 15:00
On Tuesday the RISC-V architecture changes were merged into the in-development Linux 5.19 kernel with several new features in tow...

A visual guide to Kubernetes networking fundamentals

opensource.com - Wed, 06/01/2022 - 15:00
By Nived Velayudhan

Moving from physical networks using switches, routers, and ethernet cables to virtual networks using software-defined networks (SDN) and virtual interfaces involves a slight learning curve. Of course, the principles remain the same, but there are different specifications and best practices. Kubernetes has its own set of rules, and if you're dealing with containers and the cloud, it helps to understand how Kubernetes networking works.

The Kubernetes Network Model has a few general rules to keep in mind:

  1. Every Pod gets its own IP address: There should be no need to create links between Pods and no need to map container ports to host ports.
  2. NAT is not required: Pods on a node should be able to communicate with all Pods on all nodes without NAT.
  3. Agents get all-access passes: Agents on a node (system daemons, Kubelet) can communicate with all the Pods in that node.
  4. Shared namespaces: Containers within a Pod share a network namespace (IP and MAC address), so they can communicate with each other using the loopback address.
What Kubernetes networking solves

Kubernetes networking is designed to ensure that the different entity types within Kubernetes can communicate. The layout of a Kubernetes infrastructure has, by design, a lot of separation. Namespaces, containers, and Pods are meant to keep components distinct from one another, so a highly structured plan for communication is important.

Image: (Nived Velayudhan, CC BY-SA 4.0)

Container-to-container networking

Container-to-container networking happens through the Pod network namespace. Network namespaces allow you to have separate network interfaces and routing tables that are isolated from the rest of the system and operate independently. Every Pod has its own network namespace, and containers inside that Pod share the same IP address and ports. All communication between these containers happens through localhost, as they are all part of the same namespace. (Represented by the green line in the diagram.)
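
Because sibling containers share the Pod's network namespace, a process in one container can reach a server that another container in the same Pod bound to 127.0.0.1. The following is a minimal Python sketch of that idea; the port 8080 is an arbitrary, hypothetical choice, and in a real Pod each half would run in a separate container rather than in two threads.

```python
import socket
import threading

PORT = 8080  # hypothetical port; any free port in the Pod works

def serve_once(srv: socket.socket) -> None:
    """Container A: accept one connection on the shared loopback address."""
    conn, _ = srv.accept()
    with conn:
        print("container A received:", conn.recv(1024).decode())

def ping_sibling() -> None:
    """Container B of the same Pod: localhost reaches its sibling directly."""
    with socket.create_connection(("127.0.0.1", PORT)) as c:
        c.sendall(b"hello from a sibling container")

if __name__ == "__main__":
    # In a real Pod each half runs in a separate container image; they are
    # shown side by side here purely for illustration.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)                      # listening before the client connects
    t = threading.Thread(target=serve_once, args=(srv,))
    t.start()
    ping_sibling()
    t.join()
    srv.close()
```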

Pod-to-Pod networking

With Kubernetes, every node has a designated CIDR range of IPs for Pods. This ensures that every Pod receives a unique IP address that other Pods in the cluster can see, and because each node draws from its own range, Pod IP addresses never overlap as new Pods are created. Unlike container-to-container networking, Pod-to-Pod communication happens using real IPs, whether you deploy the Pod on the same node or a different node in the cluster.

The diagram shows that for Pods to communicate with each other, the traffic must flow between the Pod network namespace and the root network namespace. This is achieved by connecting the Pod namespace and the root namespace with a virtual Ethernet device, or veth pair (veth0 to Pod namespace 1 and veth1 to Pod namespace 2 in the diagram). A virtual network bridge connects these virtual interfaces, allowing traffic to flow between them using the Address Resolution Protocol (ARP). A rough sketch of this setup follows the numbered steps below.

When data is sent from Pod 1 to Pod 2, the flow of events is:

  1. Pod 1 traffic flows through eth0 to the Root network namespace's virtual interface veth0.
  2. Traffic then goes through veth0 to the virtual bridge, which is connected to veth1.
  3. Traffic goes through the virtual bridge to veth1.
  4. Finally, traffic reaches the eth0 interface of Pod 2 through veth1.
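
To make the veth-and-bridge layout concrete, here is a rough Python sketch that rebuilds it by hand with iproute2, which is essentially what a CNI plugin automates on every node. The namespace names (pod1, pod2), interface names (veth0/ceth0), bridge name (br0), and the 10.244.0.0/24 addresses are all hypothetical, the Pod-side interface is called ceth0 here where Kubernetes would present it as eth0, and the script needs root privileges on a Linux host.

```python
# Rough reconstruction of the Pod-to-Pod topology described above, driven
# from Python with iproute2. A real CNI plugin automates this; the names
# and the 10.244.0.0/24 range are hypothetical. Requires root on Linux.
import subprocess

def sh(*cmd: str) -> None:
    """Run one iproute2 command, failing loudly if it errors."""
    subprocess.run(cmd, check=True)

def make_pod_ns(ns: str, host_if: str, pod_if: str, pod_ip: str, bridge: str) -> None:
    sh("ip", "netns", "add", ns)                                   # the Pod's network namespace
    sh("ip", "link", "add", host_if, "type", "veth", "peer", "name", pod_if)
    sh("ip", "link", "set", pod_if, "netns", ns)                   # Pod end of the veth pair
    sh("ip", "link", "set", host_if, "master", bridge)             # host end plugs into the bridge
    sh("ip", "link", "set", host_if, "up")
    sh("ip", "netns", "exec", ns, "ip", "addr", "add", pod_ip, "dev", pod_if)
    sh("ip", "netns", "exec", ns, "ip", "link", "set", pod_if, "up")
    sh("ip", "netns", "exec", ns, "ip", "link", "set", "lo", "up")

if __name__ == "__main__":
    sh("ip", "link", "add", "br0", "type", "bridge")               # virtual bridge in the root namespace
    sh("ip", "link", "set", "br0", "up")
    make_pod_ns("pod1", "veth0", "ceth0", "10.244.0.2/24", "br0")
    make_pod_ns("pod2", "veth1", "ceth1", "10.244.0.3/24", "br0")
    # Pod 1 -> veth0 -> br0 -> veth1 -> Pod 2, with ARP resolved on the bridge
    sh("ip", "netns", "exec", "pod1", "ping", "-c", "1", "10.244.0.3")
```
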
Pod-to-Service networking

Pods are very dynamic. They may need to scale up or down based on demand. They may be created again in case of an application crash or a node failure. These events cause a Pod's IP address to change, which would make networking a challenge.

Image: (Nived Velayudhan, CC BY-SA 4.0)

Kubernetes solves this problem with the Service abstraction, which does the following:

  1. Assigns a static virtual IP address on the frontend, which connects to the backend Pods associated with the Service.
  2. Load-balances any traffic addressed to this virtual IP across the set of backend Pods.
  3. Keeps track of the Pod IP addresses, so that even when a Pod's IP address changes, clients have no trouble connecting, because they only connect to the static virtual IP address of the Service itself.

The in-cluster load balancing occurs in two ways:

  1. IPTABLES: In this mode, kube-proxy watches for changes in the API server. For each new Service, it installs iptables rules that capture traffic to the Service's clusterIP and port and redirect it to a backend Pod for the Service. The Pod is selected randomly. This mode is reliable and has a lower system overhead because Linux Netfilter handles traffic without the need to switch between userspace and kernel space.
  2. IPVS: IPVS is built on top of Netfilter and implements transport-layer load balancing. It uses the Netfilter hook function with a hash table as the underlying data structure and works in kernel space. This means that kube-proxy in IPVS mode redirects traffic with lower latency, higher throughput, and better performance than kube-proxy in iptables mode.

The diagram above shows the packet flow from Pod 1, through a Service, to Pod 3 on a different node (marked in red). A packet heading for the virtual bridge has to use the default route (eth0), because ARP running on the bridge doesn't understand the Service's virtual IP. The packet is then filtered by iptables, which uses the rules defined on the node by kube-proxy, and redirected to a backend Pod, which is why the diagram shows the path it does.
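
Both halves of that mapping, the stable virtual IP and the changing set of backend Pod IPs, are visible through the API. Here is a small sketch using the official kubernetes Python client; the Service name web and the namespace default are hypothetical placeholders.

```python
# Sketch with the official kubernetes Python client (pip install kubernetes):
# print a Service's stable clusterIP and the Pod IPs it balances across.
# The Service "web" in namespace "default" is a hypothetical example.
from kubernetes import client, config

config.load_kube_config()      # use config.load_incluster_config() from inside a Pod
v1 = client.CoreV1Api()

svc = v1.read_namespaced_service(name="web", namespace="default")
print("clusterIP (stable):", svc.spec.cluster_ip)
print("ports:", [(p.port, p.target_port) for p in svc.spec.ports])

# The Endpoints object lists the current backend Pod IPs; these change as
# Pods come and go, while the clusterIP above stays fixed.
ep = v1.read_namespaced_endpoints(name="web", namespace="default")
for subset in ep.subsets or []:
    for addr in subset.addresses or []:
        print("backend Pod IP:", addr.ip)
```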

Internet-to-Service networking

So far, I have discussed how traffic is routed within a cluster. There's another side to Kubernetes networking, though, and that's exposing an application to the external network.

Image: (Nived Velayudhan, CC BY-SA 4.0)

You can expose an application to an external network in two different ways.

  1. Egress: Use this when you want to route traffic from your Kubernetes Service out to the Internet. In this case, iptables performs the source NAT, so the traffic appears to be coming from the node and not the Pod.
  2. Ingress: This is incoming traffic from the external world to Services. An Ingress can also allow or block particular connections to Services based on a set of rules. Typically, there are two ingress solutions that function on different regions of the network stack: the service load balancer and the ingress controller. (A minimal Ingress sketch follows this list.)
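
As a rough sketch of the Ingress route, the snippet below declares an Ingress with the official kubernetes Python client. The host example.com, the path, and the backend Service web on port 80 are hypothetical, and the rule only has an effect if an ingress controller is already running in the cluster.

```python
# Hedged sketch: declare an Ingress rule that routes example.com/ to the
# hypothetical Service "web" on port 80. Requires an ingress controller.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="web-ingress"),
    spec=client.V1IngressSpec(
        rules=[client.V1IngressRule(
            host="example.com",
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/", path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="web",
                            port=client.V1ServiceBackendPort(number=80))))]))]))

net.create_namespaced_ingress(namespace="default", body=ingress)
```
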
Discovering Services

There are two ways Kubernetes discovers a Service (both are sketched in the snippet after this list):

  1. Environment Variables: The kubelet service running on the node where your Pod runs is responsible for setting up environment variables for each active service in the format {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT. You must create the Service before the client Pods come into existence. Otherwise, those client Pods won't have their environment variables populated.
  2. DNS: The DNS service is implemented as a Kubernetes service that maps to one or more DNS server Pods, which are scheduled just like any other Pod. Pods in the cluster are configured to use the DNS service, with a DNS search list that includes the Pod's own namespace and the cluster's default domain. A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new Services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, all Pods can automatically resolve Services by their DNS name. The Kubernetes DNS server is the only way to access ExternalName Services.
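
From inside a Pod, the two discovery mechanisms look roughly like this in Python. The Service name web and the namespace default are hypothetical, and the DNS lookup only succeeds inside a cluster with cluster DNS enabled.

```python
# Sketch of Service discovery as seen from a Pod; "web"/"default" are
# hypothetical, and the DNS name only resolves inside the cluster.
import os
import socket

# 1. Environment variables injected by the kubelet, following the
#    {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT convention. They exist
#    only for Services created before this Pod started.
host = os.environ.get("WEB_SERVICE_HOST")
port = os.environ.get("WEB_SERVICE_PORT")
print("via environment variables:", host, port)

# 2. Cluster DNS (for example CoreDNS): the fully qualified Service name
#    resolves to the Service's clusterIP from any namespace.
print("via DNS:", socket.gethostbyname("web.default.svc.cluster.local"))
```
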
ServiceTypes for publishing Services

Kubernetes Services provide you with a way of accessing a group of Pods, usually defined by using a label selector. This could be applications trying to access other applications within the cluster, or it could allow you to expose an application running in the cluster to the external world. Kubernetes ServiceTypes enable you to specify what kind of Service you want.

Image: (Ahmet Alp Balkan, CC BY-SA 4.0)

The different ServiceTypes are (a short example follows the list):

  1. ClusterIP: This is the default ServiceType. It makes the Service only reachable from within the cluster and allows applications within the cluster to communicate with each other. There is no external access.
  2. LoadBalancer: This ServiceType exposes the Services externally using the cloud provider's load balancer. Traffic from the external load balancer is directed to the backend Pods. The cloud provider decides how it is load-balanced.
  3. NodePort: This allows external traffic to access the Service by opening a specific port on all the nodes. Any traffic sent to this port is then forwarded to the Service.
  4. ExternalName: This type of Service maps a Service to a DNS name using the contents of the externalName field, returning a CNAME record with that value. No proxying of any kind is set up.
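
As a sketch of how a ServiceType is declared programmatically, the snippet below creates a ClusterIP Service with the official kubernetes Python client; the name web, the selector, and the ports are hypothetical, and changing type to NodePort or LoadBalancer changes only how the Service is exposed.

```python
# Sketch: a ClusterIP Service (the default ServiceType) selecting Pods
# labeled app=web and forwarding port 80 to container port 8080.
# All names, labels, and ports here are hypothetical.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",                      # or "NodePort" / "LoadBalancer"
        selector={"app": "web"},               # label selector for the backend Pods
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

v1.create_namespaced_service(namespace="default", body=service)
```
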
Networking software

Networking within Kubernetes isn't so different from networking in the physical world, as long as you understand the technologies used. Study up, remember networking basics, and you'll have no trouble enabling communication between containers, Pods, and Services.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Linux desktops: KDE vs GNOME

opensource.com - Wed, 06/01/2022 - 15:00
By Seth Kenlon

I'm an ardent KDE Plasma Desktop user, but at work I happily use GNOME. Without getting into the question of which desktop I'd take to a desert island (that happens to have a power outlet), I see the merits of both desktops, and I'd rather use either of them than non-open source desktop alternatives.

I've tried the proprietary alternatives, and believe me, they're not fun (it took one over a decade to get virtual workspaces, and the other still doesn't have a screenshot function built in). And for all the collaboration that the KDE and GNOME developers do these days at conferences like GUADEC, there's still a great philosophical divide between the two.

And you know what? That's a good thing.

Missing the tree for the forest

As a KDE user, I'm used to options. When I right-click on an object, whether it's a file, a widget, or even the empty space between widgets, I expect to see at least 10 options for what I'd like to do or how I'd like to configure the object. I like that because I like to configure my environment. I see that as the "power" part of being a "power user." I want to be able to adapt my environment to my whims to make it work better for me, even when the way I work is utterly unique and maybe not even sensible.

GNOME doesn't give the user dozens of options with every right-click. In fact, GNOME doesn't even give you that many options when you go to Settings. To get more configuration options, you have to download a tool called Tweaks, and for some options you must install extensions.

I'm not a GNOME developer, but I've set up a lot of Linux computers for friends and colleagues, and one thing I've noticed is that everybody has a unique perception of interface design. Some people, myself included, enjoy seeing a multitude of choices readily available at every turn.

Other people don't.

Here's what I see when I right-click on a file in the KDE Plasma Desktop:

Image: (Seth Kenlon, CC BY-SA 4.0)

Here's what I see when I right-click on a file in the GNOME desktop:

Image: (Seth Kenlon, CC BY-SA 4.0)

Including submenus, my Plasma Desktop has over 30 choices in a right-click. Of course, that's partly because I've configured it that way, and context matters, too. I have more options in a Git repository, for instance, than outside of one. By contrast, GNOME has 11 options in a right-click.

Bottom line: Some users aren't keen to mentally filter out 29 different options so they can see the one option they're looking for. Minimalism allows users to focus on essential and common actions. Having only the essential options can be comforting for new users, a mental relief for the experienced user, and efficient for all users.

Mistake vectors

As a Linux "power user," I fall prey to the old adage that I'm responsible for my own errors. It's the stuff of legend that Linux gives you access to "dangerous" commands and that, should you choose to use them, you're implicitly forgoing your right to complain about the results. For the record, I've never agreed with this sentiment, and I've written and promoted tools that help avoid mistakes in the terminal.

The problem is that mistakes are not planned. If you could plan your mistakes, you could choose not to make them. What actually happens is that mistakes occur when you haven't planned them, usually at the worst possible moment.

One way to reduce error is to reduce choice. When you have only two buttons to press, you can make only one mistake. It's also easier to identify what mistake you've made when there are fewer avenues to take. When you have five buttons, not only can you make four mistakes, but you also might not recall which button out of the five was the wrong one (and the other wrong one, and the other, and so on).

Bottom line: Fewer choices mean fewer mistakes for users.

Maintenance

If you've ever coded anything, this story might seem familiar to you. It's Friday evening, and you have an idea for a fun little improvement to your code. It seems like an easy feature to implement; you can practically see the code changes in your head. You have nothing better to do that evening, so you get to work. Three weeks later, you've implemented the feature, and all it took was a complete overhaul of your code.

This is not an uncommon developer story. It happens because code changes can have unanticipated ripple effects that you just don't foresee before making the change. In other words, code is expensive. The more code you write, the more you have to maintain. The less code you write, the fewer bugs you have to hunt.

The eye of the beholder

Most users customize their desktop with digital wallpaper. Beyond that, however, I expect most people use the desktop they've been given. So the desktop that GNOME and KDE developers provide is generally what people use, and in the end, not just beauty but also the best workflow is in the eye of the beholder.

I fall into a particular work style when I'm using KDE, and a different style of work when I use GNOME. After all, things are arranged in different locations (although I keep my KDE panel at the top of my screen partly to mimic GNOME's design), and the file managers and the layout of my virtual workspaces are different.

It's a luxury of open source to have arbitrary preferences for your tools. There's plenty to choose from, so you don't have to justify what you do or don't like about one desktop or another. If you try one and can't get used to it, you can always switch to the other.

Minimalism with Linux

I used to think that it made sense to use a tool with 100 options because you can just ignore the 95 that you don't need and focus on the five that you do. The more I use GNOME, however, the more I understand the advantages of minimalism. Reduced design helps some users focus on what matters, it helps others avoid confusion and mistakes due to a complex user interface (UI), and it helps developers maintain quality code. And some people just happen to prefer it.

There's a lesson here for users and developers alike, but it's not that one is better than the other. In fact, these principles apply to a lot more than just KDE and GNOME. User experience and developer experience are each important, and sometimes complexity is warranted while other times minimalism has the advantage.

Comparing two open source desktops side by side shows that both styles serve important purposes.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Linux 5.19 Adds Support For XSAVEC When Running As A VM Guest

Phoronix - Wed, 06/01/2022 - 12:00
Various hypervisors expose support for the XSAVEC instruction (XSAVE with compaction) as an efficiency optimization. However, the Linux kernel doesn't currently make use of XSAVEC as an alternative to XSAVES (supervisor mode), but that is now changing with Linux 5.19...

NVIDIA's Open-Source Kernel Driver, Graviton3 & Fedora 36 Made For An Exciting May

Phoronix - Wed, 06/01/2022 - 07:00
Word of NVIDIA working on an open-source kernel driver, with hopes of it eventually being mainlined and being of better quality than Nouveau, topped the Linux news for the past month. Plus the introduction of Amazon's new Graviton3 processors, the debut of Fedora 36 and SteamOS 3.2 among other distribution updates, and Linux 5.19 development getting underway all made for an interesting month of May...

LVFS Has Served More Than 52 Million Firmware Files To Linux Users

Phoronix - Wed, 06/01/2022 - 03:00
It was just March of last year that the Linux Vendor Firmware Service (LVFS) served up a total of 25 million firmware downloads to Linux users for updating their system firmware and peripheral devices supporting Fwupd. Just over one year later, it has successfully served more than 52 million downloads...

Intel Announces Rialto Bridge As Ponte Vecchio Successor, Talks Up Falcon Shores & DAOS

Phoronix - Wed, 06/01/2022 - 00:30
Intel is using ISC 2022 this week in Hamburg, Germany to provide an update on their Super Compute Group road-map and the efforts they are pursuing both in hardware and software for a sustainable, open HPC ecosystem.

NVIDIA 515.48.07 Linux Driver Released As Stable With Open Kernel Driver Option

Phoronix - Wed, 06/01/2022 - 00:00
Following the NVIDIA R515 Linux driver beta from earlier this month that was published alongside NVIDIA's open kernel driver announcement, today the NVIDIA 515.48.07 Linux driver has been released as the first R515 stable release...
