Open-source News

New Linux Patches To Expose AMD Ryzen AI NPU Power Metrics

Phoronix - Tue, 11/11/2025 - 19:22
New Linux kernel patches currently under review will allow AMD Ryzen AI NPU power metrics to be exposed under Linux. This is useful for gauging the utilization of the neural processing unit and for evaluating the actual power efficiency of leveraging the AMD Ryzen AI NPU...
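Once merged, such metrics would presumably surface through the kernel's standard sysfs/hwmon interfaces. Below is a minimal sketch of how a monitoring script might poll such a sensor; the sysfs path and attribute name are hypothetical, since the exact interface depends on how the patches land after review.

```python
#!/usr/bin/env python3
"""Minimal sketch of polling an NPU power sensor via a hwmon-style sysfs file.
The path below is hypothetical: the exact attribute names and location will
depend on how the patches land after review."""
import time

# Hypothetical hwmon attribute for the Ryzen AI NPU; adjust once the patches merge.
NPU_POWER_PATH = "/sys/class/hwmon/hwmon5/power1_input"

def read_power_watts(path: str) -> float:
    # hwmon power*_input attributes report microwatts by convention.
    with open(path) as f:
        return int(f.read().strip()) / 1_000_000

if __name__ == "__main__":
    for _ in range(5):
        print(f"NPU power draw: {read_power_watts(NPU_POWER_PATH):.2f} W")
        time.sleep(1)
```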

Intel Xe Linux Driver Working Toward UALink & High Speed Fabrics Support

Phoronix - Tue, 11/11/2025 - 19:08
The YouTube video recordings for the X.Org Developers' Conference 2025 that took place at the end of September in Austria are finally available. Among the many interesting XDC2025 presentations was Intel engineer Matthew Brost talking about the GPU Shared Virtual Memory (SVM) within Intel's modern Xe kernel graphics driver...

SDL3 Now Implements Render Batching For Direct3D, Metal & Vulkan

Phoronix - Tue, 11/11/2025 - 18:45
The SDL3 library that is popular with cross-platform games for abstracting various software/hardware features has implemented render batching for its built-in rendering API. This render batching is successfully wired up now for Direct3D 11/12, Apple Metal, and Vulkan APIs for more efficient graphics rendering...
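As a rough illustration of the idea (a conceptual Python sketch, not SDL's actual C implementation), render batching accumulates draw commands that share the same state and flushes them to the backend in one submission instead of issuing one call per primitive. The BatchedRenderer class and backend_draw callback are invented for illustration.

```python
"""Conceptual sketch of render batching: queue draw commands that share the
same texture/pipeline state and flush them in a single backend submission."""

class BatchedRenderer:
    def __init__(self, backend_draw):
        self.backend_draw = backend_draw  # stand-in for one D3D/Metal/Vulkan draw submission
        self.current_texture = None
        self.vertices = []

    def draw_quad(self, texture, quad_vertices):
        # A state (texture) change forces a flush; otherwise keep accumulating.
        if texture is not self.current_texture and self.vertices:
            self.flush()
        self.current_texture = texture
        self.vertices.extend(quad_vertices)

    def flush(self):
        if self.vertices:
            # One backend call covers all queued quads.
            self.backend_draw(self.current_texture, self.vertices)
            self.vertices = []

# Usage: 1000 sprites sharing one texture become a single backend submission.
calls = []
r = BatchedRenderer(lambda tex, verts: calls.append(len(verts)))
for i in range(1000):
    r.draw_quad("atlas.png", [(i, 0), (i, 1), (i + 1, 1), (i + 1, 0)])
r.flush()
print(f"backend submissions: {len(calls)}")  # -> 1
```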

AMD Posts New "amd_vpci" Accelerator Driver For Linux

Phoronix - Tue, 11/11/2025 - 09:47
While there is already AMDXDNA as one of the few currently mainline drivers in the accelerator "accel" subsystem for supporting AMD Ryzen AI NPUs, another AMD accel driver is on the way: amd_vpci. The new amd_vpci driver patches were posted today for review as AMD continues to expand its accelerator offerings in the Linux ecosystem...

The strategic shift: How Ford and Emirates NBD stopped paying the complexity tax for virtualization

Red Hat News - Tue, 11/11/2025 - 08:00
For most large-scale enterprises today, the hybrid cloud isn't a strategy; it's simply the reality. Most organizations are running in both worlds: they have modern, cloud-native applications in containers, and critical systems, many of them mission-critical, in virtual machines (VMs). The reality is that running two separate virtualization stacks creates silos, complexity, and unnecessary operational cost – what can be called the complexity tax. It slows down your operations and application teams, strains budgets, and ultimately makes it harder to deliver value to the business. We recently spoke with

Red Hat OpenShift 4.20: Expanded Oracle cloud infrastructure support

Red Hat News - Tue, 11/11/2025 - 08:00
Red Hat OpenShift 4.20 brings significant expansion of support across Oracle's diverse cloud infrastructure services. This enhancement delivers OpenShift's enterprise-grade container platform to additional Oracle cloud services, providing your organization with greater flexibility and choice in your deployment strategy. OpenShift 4.20 introduces support for five new Oracle cloud infrastructure services. General Availability: EU Sovereign Cloud, with full production support for organizations requiring data sovereignty and regulatory compliance within European borders. Technology Preview: Oracle US Governm

Red Hat OpenShift 4.20 accelerates virtualization and enterprise AI innovation

Red Hat News - Tue, 11/11/2025 - 08:00
Red Hat OpenShift 4.20 is now generally available. It's based on Kubernetes 1.33 and CRI-O 1.33 and, together with Red Hat OpenShift Platform Plus, this release underscores our commitment to provide a trusted, comprehensive, and consistent application platform. On OpenShift, AI workloads, containers, and virtualization seamlessly co-exist, enabling enterprises to innovate faster across the hybrid cloud, without compromising on security. Available in self-managed or fully managed cloud service editions, OpenShift offers an application platform with a complete set of integrated tools and services

A deeper look at post-quantum cryptography support in Red Hat OpenShift 4.20 control plane

Red Hat News - Tue, 11/11/2025 - 08:00
The age of quantum computing is on the horizon, and with its immense processing power comes a significant threat to the cryptographic foundations of our digital world. In this article, we'll explore the emerging support for post-quantum cryptography (PQC) in Red Hat OpenShift 4.20, focusing on how it enhances the core components of the Kubernetes control plane: the apiserver, kubelet, scheduler, and controller-manager. The notable exception is etcd, which is still built with an older version of Go. The quantum threat: Today's widely used public-key cryptosystems, such as RSA and elliptic curve cryptography (ECC), form the foundat
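As a hedged illustration of what this looks like in practice: recent Go releases enable the hybrid X25519MLKEM768 key exchange in crypto/tls, so one way to spot-check an endpoint is to offer only that group from the client and see what the handshake reports. The sketch below shells out to openssl s_client and assumes an OpenSSL build new enough (3.5+) to know the X25519MLKEM768 group name; the apiserver address is a placeholder.

```python
"""Hedged sketch: check whether a TLS endpoint (e.g. an OpenShift apiserver)
negotiates the hybrid post-quantum X25519MLKEM768 key exchange. Requires an
OpenSSL client new enough to know that group name (3.5+); the endpoint below
is a placeholder."""
import subprocess

ENDPOINT = "api.example-cluster.local:6443"  # placeholder apiserver address

def probe_pqc_group(endpoint: str) -> str:
    # Offer only the hybrid group; if the server cannot do PQC, the handshake fails.
    proc = subprocess.run(
        ["openssl", "s_client", "-connect", endpoint, "-groups", "X25519MLKEM768"],
        input="", capture_output=True, text=True, timeout=15,
    )
    for line in (proc.stdout + proc.stderr).splitlines():
        # Newer OpenSSL reports the negotiated TLS 1.3 group / server temp key here.
        if "group" in line.lower() or "temp key" in line.lower():
            return line.strip()
    return "handshake gave no key-exchange details (server may lack PQC support)"

if __name__ == "__main__":
    print(probe_pqc_group(ENDPOINT))
```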

KServe joins CNCF as an incubating project

Red Hat News - Tue, 11/11/2025 - 08:00
We are excited to share that KServe, the leading standardized AI inference platform on Kubernetes, has been accepted as an incubating project by the Cloud Native Computing Foundation (CNCF). This milestone validates KServe’s maturity, stability and role as the foundation for scalable, multi-framework model serving in production environments. By moving into the CNCF’s neutral governance, KServe’s development will be driven purely by community needs, accelerating its standardization for serving AI models on Kubernetes. For Red Hat this is a validation of our commitment to delivering open, re
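For context on what standardized model serving looks like from a client's perspective, here is a minimal hedged sketch of calling a deployed InferenceService over KServe's Open Inference Protocol (v2 REST API); the host, model name, and input tensor are placeholders for a model already running on the cluster.

```python
"""Minimal sketch of calling a KServe-deployed model over the Open Inference
Protocol (v2 REST API). Host, model name, and the input tensor are placeholders
for an InferenceService already running on the cluster."""
import json
import urllib.request

HOST = "http://sklearn-iris.default.example.com"  # placeholder ingress host
MODEL = "sklearn-iris"                             # placeholder model name

payload = {
    "inputs": [{
        "name": "input-0",
        "shape": [1, 4],
        "datatype": "FP32",
        "data": [6.8, 2.8, 4.8, 1.4],
    }]
}

req = urllib.request.Request(
    f"{HOST}/v2/models/{MODEL}/infer",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # The v2 protocol returns named output tensors.
    print(json.load(resp)["outputs"])
```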

Create efficient two-node edge infrastructure with Red Hat OpenShift and Portworx/Pure Storage

Red Hat News - Tue, 11/11/2025 - 08:00
The demand to extend applications to the edge has never been greater. From retail shops to industrial and manufacturing sites, there's a need to create, consume, and store data at the edge. Deploying applications at the edge comes with a set of physical constraints, but also with the need to deliver a truly cost-efficient and resilient architecture. When building applications at the edge, you must consider the needs of the individual site as well as the cost to deploy, manage, and maintain applications across multiple edge locations. The good news is that Red Hat OpenShift is evolving to meet t

Bringing intelligent, efficient routing to open source AI with vLLM Semantic Router

Red Hat News - Tue, 11/11/2025 - 08:00
The speed of innovation in large language models (LLMs) is astounding, but as enterprises move these models into production, the conversation shifts – it’s no longer just about raw scale; it’s about per-token efficiency and smart, targeted compute use. Simply put, not all prompts require the same level of reasoning. If a user has a simple request like "What is the capital of North Carolina?", the multi-step reasoning required for, say, a financial projection isn’t necessary. If organizations use heavyweight reasoning models for every request, the result is both costly and inefficie
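To make the routing idea concrete, below is a conceptual Python sketch (not the vLLM Semantic Router implementation itself): classify each incoming prompt and dispatch simple factual questions to a lightweight model, reserving a heavier reasoning model for multi-step tasks. The model names and the keyword-based classifier are stand-ins for the semantic classification a real router would perform.

```python
"""Conceptual sketch of semantic routing: send simple factual prompts to a
lightweight model and multi-step tasks to a heavier reasoning model. The model
names and the keyword heuristic are placeholders for a real semantic classifier."""

LIGHT_MODEL = "small-instruct"       # placeholder: cheap, fast model
REASONING_MODEL = "large-reasoning"  # placeholder: expensive multi-step model

REASONING_HINTS = ("forecast", "project", "prove", "derive", "plan", "compare", "analyze")

def classify(prompt: str) -> str:
    """Stand-in for an embedding/intent classifier: route by length and keyword hints."""
    text = prompt.lower()
    if len(text.split()) > 40 or any(hint in text for hint in REASONING_HINTS):
        return "complex"
    return "simple"

def route(prompt: str) -> str:
    return REASONING_MODEL if classify(prompt) == "complex" else LIGHT_MODEL

if __name__ == "__main__":
    print(route("What is the capital of North Carolina?"))          # -> small-instruct
    print(route("Forecast next quarter's revenue from this data."))  # -> large-reasoning
```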
