Friday Five — March 20, 2026
Explore the latest Red Hat + NVIDIA announcements from NVIDIA GTC
Explore the latest Red Hat + NVIDIA announcements shaping the future of scalable enterprise AI. Check out all of Red Hat's news from NVIDIA GTC in the newsroom. Learn more

The new AI stack: Choice, control, and production-ready innovation
Learn how Red Hat's open approach to AI, grounded in open source principles, offers choice, flexibility, and control for CIOs and CTOs. Read Ashesh Badani's thoughts on this new wave of technological change and receive complimentary access to a Forrester report. Learn more

theCUBE - KubeCon + Clou
Optimizing cluster observability: A strategic approach to selective log routing in Red Hat OpenShift
As Red Hat OpenShift clusters scale to support hundreds of microservices, the sheer volume of telemetry data can become overwhelming. Platform architects often face a difficult paradox: maintain the visibility required for security and compliance while also managing the rising storage costs and "noise" associated with high-volume infrastructure logs. In this article, I explore how to leverage the ClusterLogForwarder (CLF) API and Loki filters in Red Hat OpenShift to move from a "collect everything" model to a route-by-value strategy.
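As a rough illustration of the route-by-value idea, a ClusterLogForwarder resource can apply a drop filter before forwarding, so that noisy infrastructure namespaces never reach storage. This is a minimal sketch, assuming the observability.openshift.io/v1 API; the resource, service account, and output names are hypothetical, and the exact schema should be verified against the OpenShift logging documentation for your version:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: route-by-value            # hypothetical name
  namespace: openshift-logging
spec:
  serviceAccount:
    name: log-collector           # hypothetical service account
  filters:
    - name: drop-infra-noise
      type: drop
      drop:
        - test:
            # Drop high-volume logs from openshift-* namespaces
            - field: .kubernetes.namespace_name
              matches: "^openshift-.*"
  outputs:
    - name: central-loki          # hypothetical LokiStack output
      type: lokiStack
      lokiStack:
        target:
          name: logging-loki
          namespace: openshift-logging
  pipelines:
    - name: valuable-logs
      inputRefs:
        - application
        - infrastructure
      filterRefs:
        - drop-infra-noise
      outputRefs:
        - central-loki
```

The key design point is that filtering happens in the pipeline, at collection time, rather than after ingestion into Loki.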
Announcing Ansible Automation Platform 2.4 end of Maintenance Support
Red Hat Ansible Automation Platform 2.4 reaches its end of Maintenance Support on June 30, 2026. Originally launched in June 2023, Ansible Automation Platform 2.4 was a major milestone in the automation landscape, introducing Event-Driven Ansible to the platform for the first time.

Why Maintenance Support matters

When a version of Red Hat software reaches the end of Maintenance Support, it means Red Hat no longer actively develops it. For your organization, that means no new critical fixes: routine bug fixes and security patches (CVEs) are no longer backported, and support is limited.
The efficient enterprise: Scaling intelligence with Mixture of Experts
As organizations scale generative AI (gen AI) across business units, a familiar tension appears: bigger models can often deliver better results, but they also bring significantly more compute, cost, and operational complexity. This creates a production paradox: while enterprises want higher-quality reasoning, domain specialization, and agentic autonomy, they struggle to deploy monolithic trillion-parameter models that run continuously across clusters.

As a result, the industry is shifting strategies, moving from single, massive models toward more efficient architectures such as Mixture of Experts.
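To make the Mixture of Experts idea concrete, here is a deliberately tiny, illustrative sketch (not any specific product's implementation): a gating network scores a pool of experts, and each token is processed only by its top-k experts, so per-token compute grows with k rather than with the total expert count. All sizes and names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class ToyMoELayer:
    """Gate scores n_experts; each token runs only its top-k experts."""
    def __init__(self, d_model, n_experts, k=2):
        self.k = k
        self.gate = rng.normal(size=(d_model, n_experts)) * 0.02
        self.experts = [rng.normal(size=(d_model, d_model)) * 0.02
                        for _ in range(n_experts)]

    def __call__(self, x):                        # x: (tokens, d_model)
        scores = softmax(x @ self.gate)           # (tokens, n_experts)
        topk = np.argsort(scores, axis=-1)[:, -self.k:]
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            weights = scores[t, topk[t]]
            weights = weights / weights.sum()     # renormalize over top-k
            for idx, w in zip(topk[t], weights):
                out[t] += w * (x[t] @ self.experts[idx])
        return out

layer = ToyMoELayer(d_model=8, n_experts=4, k=2)
tokens = rng.normal(size=(3, 8))
out = layer(tokens)
print(out.shape)  # (3, 8)
```

With k fixed at 2, doubling n_experts grows the model's capacity without changing the work done per token, which is the efficiency argument the article makes.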
Introducing OpenShift Service Mesh 3.3 with post-quantum cryptography
Red Hat OpenShift Service Mesh 3.3 is now generally available with Red Hat OpenShift Container Platform and Red Hat OpenShift Platform Plus. Based on the Istio, Envoy, and Kiali projects, this release updates Istio to 1.28 and Kiali to 2.22, and is supported on OpenShift Container Platform 4.18 and above. While this release includes many updates, it also sets the stage for the next generation of service mesh features, including post-quantum cryptographic (PQC) encryption, AI enablement, and support for including external virtual machines (VMs) in the service mesh.
Generate a no-cost VMware migration-readiness report with the OpenShift migration advisor
The decision to migrate from a familiar virtualization provider to a modern platform is significant. For many organizations, the biggest hurdle isn't necessarily the migration itself, but the uncertainty that precedes it. You need to know exactly what the migration process looks like, what risk is involved, which workloads are ready, and more before you commit your resources.

The OpenShift migration advisor is a no-cost, self-service tool designed to evaluate the migration-readiness of your VMware workloads prior to commitment. By discovering your vCenter environment, the advisor generates a migration-readiness report.
Red Hat and NVIDIA collaborate for a more secure foundation for the agent-ready workforce
In just a few short years, AI technology has evolved from basic chat completions to autonomous, long-running agents. This poses a challenge for IT teams who need to enable their builders to innovate while also providing guardrails and controls to reduce enterprise risk. More than just chatbots or assistants, agents are now autonomous entities capable of operating over extended horizons, crafting their own sub-agents, and using professional tools to complete multi-step plans. But as agents leave the developer's laptop and start interacting with production data and external APIs, freedom without guardrails becomes a liability.
Operationalizing "Bring Your Own Agent" on Red Hat AI, the OpenClaw edition
The AI agent world is messy. Teams are reaching for LangChain, LlamaIndex, CrewAI, AutoGen, or building custom solutions from scratch. Good. That's how it should be during the creative phase. But once an agent leaves a developer's laptop and starts talking to production data, calling external application programming interfaces (APIs), or running on shared infrastructure, freedom without guardrails stops being a feature and starts being a liability.

We've watched the industry go through waves: model APIs (such as chat completions), then agentic APIs (such as assistants and, later, the OpenAI Responses API).
Building the hybrid AI factory of the future: Red Hat achieves AI Cloud Ready status for the NVIDIA Cloud Partner (NCP) program
Navigating the complexities of AI infrastructure shouldn’t be a barrier to innovation. Red Hat has completed the first phase of AI Cloud Ready status for the NVIDIA Cloud Partner (NCP) program to help address the increasing complexities of AI. NCPs build and operate GPU-accelerated AI platforms to deliver and support full-stack, AI-optimized offerings based on the NCP software reference guide. This reference architecture is a proven blueprint for the full stack, including GPU servers, networking, storage, and software, enabling NCPs to deliver AI capacity as reliable, consistent services.
Bringing Nemotron models to the Red Hat AI Factory with NVIDIA
Following the successful launch of the Red Hat AI Factory with NVIDIA, Red Hat is pleased to announce the latest update in our collaboration with NVIDIA: delivering Day 0 support for the NVIDIA Nemotron open model family on the Red Hat AI Factory with NVIDIA. With this effort, we are providing a fully optimized, open source pathway for enterprise-grade generative AI.

From infrastructure to intelligence: Accelerating mainstream enterprise AI adoption

The Red Hat AI Factory with NVIDIA was designed to provide a turnkey environment for developing and deploying AI at scale.
Accelerate enterprise software development with NVIDIA and Model-as-a-Service (MaaS) on Red Hat AI
Developing software as efficiently and swiftly as possible is a competitive necessity. The sooner you can get new products to market, the greater advantage you have with your customers. In recent years, AI coding has become a compelling way to help solve these challenges by handling tedious, repetitive tasks and speeding up debugging and testing. This frees up valuable time for higher-impact development work.

However, the rapid adoption of generative AI-powered coding has introduced new enterprise-level challenges as organizations scale their use of AI tools.
The new AI stack: Choice, control, and production-ready innovation
In the next decade, AI will redraw the map of technology ecosystems. As we traverse what Forrester is calling the "seventh wave" of major technological change, driven by generative and agentic AI, C-suite executives are facing a daunting transition. The difference between falling behind and harnessing this wave of change is your strategy for the AI computing stack.

At Red Hat, our mission remains centered on open source principles: collaboration, transparency, and choice. We believe that for AI to truly deliver on its promise of productivity and business value, it cannot remain proprietary.
Subscription watch: Managing your hybrid cloud estate
Managing a hybrid cloud environment spanning on-premises data centers, edge deployments, and multiple public clouds often results in subscription sprawl. Even in simpler environments, it can be challenging to maintain clear visibility into subscription use. Organizations frequently struggle to answer a basic question: “Exactly how much of our purchased Red Hat capacity are we actually using right now?”

Subscription watch is the solution to this complexity. It is a Software-as-a-Service (SaaS) tool integrated into the Red Hat Hybrid Cloud Console that provides a unified, aggregated view of your subscription usage.
Friday Five — March 13, 2026
vLLM Semantic Router: Signal-driven decision routing for mixture-of-modality models
As LLMs diversify across modalities, capabilities, and cost profiles, the problem of intelligent request routing (selecting the right model for each query at inference time) has become a critical systems challenge. Red Hat is collaborating in the upstream community to deliver vLLM Semantic Router, a signal-driven decision routing framework for Mixture-of-Modality (MoM) model deployments. Learn more

Techstrong.ai - Red Hat Extends AI Reach Deeper into the Enterprise
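In the same spirit as the semantic routing idea above, here is a deliberately tiny, self-contained sketch (not the vLLM Semantic Router itself): each candidate model is described by a profile vector built from example prompts, and a query is routed to the model whose profile it most resembles. The model names and profiles are invented.

```python
import math
from collections import Counter

def vec(text):
    """Bag-of-words vector, standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical model pool: each model is profiled by prompts it serves well.
MODEL_PROFILES = {
    "code-model": vec("write debug python function code compile error"),
    "chat-model": vec("hello chat help question explain summary"),
    "math-model": vec("solve integral equation proof theorem compute"),
}

def route(query):
    """Pick the model whose profile is most similar to the query."""
    q = vec(query)
    return max(MODEL_PROFILES, key=lambda m: cosine(q, MODEL_PROFILES[m]))

print(route("please debug this python function"))  # code-model
```

A production router would use learned embeddings and cost/latency signals alongside similarity, but the decision shape (score each candidate model per query, then dispatch) is the same.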
Enable intelligent insights with Red Hat Satellite MCP Server
Red Hat Satellite manages Red Hat Enterprise Linux (RHEL) systems at scale across the cloud and on-premises. Last year, a model context protocol (MCP) server for Red Hat Satellite was released as a Technology Preview feature to enable more intelligent and automated management of Satellite and RHEL systems through your favorite large language model (LLM).

LLMs make it possible to perform highly automated and sophisticated tasks. An LLM can enable automatic, unsupervised problem solving, simulating the acts of perception, learning, and reasoning. Tools such as MCP servers make it possible for LLMs to operate on external systems and data.
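To illustrate the tool-calling pattern that MCP standardizes (this is a plain-Python sketch, not the actual Satellite MCP server or the MCP SDK), a client maps a model-issued JSON tool call onto a registered tool and returns the result for the LLM to reason over. The tool names and data below are hypothetical.

```python
import json

# Hypothetical tool registry, standing in for the tools an MCP server
# (such as one for Red Hat Satellite) would expose to an LLM.
TOOLS = {
    "list_hosts": lambda args: ["web01.example.com", "db01.example.com"],
    "host_errata_count": lambda args: {"web01.example.com": 4}.get(args["host"], 0),
}

def handle_tool_call(message):
    """Dispatch a model-issued tool call (JSON) to the matching tool and
    return a JSON result the client would feed back into the LLM's context."""
    call = json.loads(message)
    result = TOOLS[call["tool"]](call.get("arguments", {}))
    return json.dumps({"tool": call["tool"], "result": result})

reply = handle_tool_call(
    '{"tool": "host_errata_count", "arguments": {"host": "web01.example.com"}}')
print(reply)  # {"tool": "host_errata_count", "result": 4}
```

MCP's contribution is making this contract (tool discovery, call format, result format) uniform, so any MCP-aware LLM client can drive any MCP server without bespoke glue code.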
Scaling Enterprise Federated AI with Flower and Open Cluster Management
Federated AI inverts the traditional machine learning paradigm. Instead of bringing data to the model, it brings the model to the data. Training happens locally on distributed nodes (e.g., hospitals, banks, and edge devices), and only model updates are shared with a central coordinator. The raw data never leaves its source. We will discuss this approach and how it enables collaborative AI while addressing privacy regulations (e.g., GDPR for EU data protection and HIPAA for US healthcare privacy) and the data sovereignty requirements critical for healthcare, finance, and cross-border deployments.
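The flow described above (local training on each node, sharing only model updates with a coordinator) can be sketched with a toy one-parameter model. This is purely illustrative federated averaging, not Flower or Open Cluster Management code.

```python
import random

def local_update(weights, data, lr=0.1):
    """One pass of local SGD on a node's private data
    (toy 1-D linear model y = w*x with squared loss)."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def fedavg(global_w, node_datasets, rounds=20):
    """Federated averaging: each node trains locally; only the updated
    weight (never the raw data) is sent back and averaged."""
    for _ in range(rounds):
        updates = [local_update(global_w, d) for d in node_datasets]
        global_w = sum(updates) / len(updates)
    return global_w

random.seed(1)
# Three "hospitals", each holding private samples of y = 3*x plus noise.
nodes = [[(x, 3 * x + random.gauss(0, 0.1)) for x in (0.5, 1.0, 1.5)]
         for _ in range(3)]
w = fedavg(0.0, nodes)
print(f"{w:.2f}")  # close to the true slope 3.0
```

The privacy property is visible in the code: `fedavg` only ever sees each node's returned weight, never the `(x, y)` samples, which is what makes the pattern compatible with data residency and sovereignty constraints.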
Safe data discovery with EDB's Data Governance Co-Pilot AI quickstart
When Red Hat revealed our AI quickstarts, EDB suggested a use case to balance the business need for data with the non-negotiable demand for governance. We often treat this as a zero-sum game, but what if the architecture itself could negotiate peace?

This Data Governance Co-Pilot AI quickstart, built on Red Hat OpenShift AI and the EDB Postgres AI (PGAI) platform, treats safe data discovery as a requirement. It provides a protected workspace where any data consumer can navigate complex schemas and extract insights with less risk of tripping compliance wires.
Red Hat Summit 2026 session catalog now available
Red Hat Summit 2026 arrives in Atlanta in two months! If you’re joining us May 11-14 at the Georgia World Congress Center, you can begin planning your week of keynotes, product roadmaps, lightning talks, power trainings, labs, breakout sessions, social events, and more using the session catalog and agenda builder. If you haven’t registered yet – don’t worry! There’s still time to submit your registration and confirm your spot at Red Hat Summit. The Red Hat Summit 2026 session catalog details hundreds of compelling sessions and labs focused on today’s leading tech topics, including AI.
Fedora 44 Beta now available
Today, the Fedora Project is excited to announce that the beta version of Fedora Linux 44, the latest version of the free and open source operating system, is now available. Learn more about the new and updated features of Fedora 44 Beta below, and don’t forget to make sure that your system is fully up to date before upgrading from a previous release.

What’s new in Fedora 44 Beta?

Installer and desktop improvements

Goodbye Anaconda-created default network profiles: This change affects how Anaconda populates network device profiles, so that only devices configured during installation receive default profiles.
AI quickstart: Protecting inference with F5 Distributed Cloud and Red Hat AI
Earlier this year, we launched the Red Hat AI quickstart catalog, a collection of ready-to-run blueprints designed to help organizations move from talking about AI to using large language models (LLMs) to solve real-world problems. This provides systems integrators and architects with example AI solutions that Red Hat engineering has tested and streamlined for easy deployment.

Once you've successfully rolled out an interactive solution on Red Hat AI, however, the next question is usually, "How do I protect this in the real world?"

To help answer this, we've expanded the AI quickstart catalog.
