What's new in Red Hat OpenShift Virtualization 4.21
Red Hat OpenShift Virtualization 4.21 is now Generally Available. The release of Red Hat OpenShift Virtualization 4.21 introduces new capabilities that simplify virtual machine (VM) management, enhance operational efficiency, and expand deployment flexibility across hybrid and multi-cloud environments. This release brings streamlined VM administration with multi-cluster management, guided networking configuration through new physical and virtual network creation workflows, and generative AI–powered assistance with OpenShift Lightspeed integrated directly into virtualization operations. Addi
Friday Five — March 27, 2026
Red Hat news and announcements from KubeCon + CloudNativeConSee the latest Red Hat news and content from KubeCon + CloudNativeCon Europe in Amsterdam, including updates on OpenShift 4.21, cloud-native security, and AI. Learn more SiliconANGLE - Red Hat sees inference as AI’s next battleground — with Kubernetes at the coreAs AI demands drive orders-of-magnitude increases in token consumption, the challenge now is less about training larger models than about running them reliably, cost-effectively and at scale. Red Hat has contributed llm-d, an open-source project for running LLMs across K
Closing the gap: Bringing AI and Kubernetes to the source of the data
Moving to the edge isn't just a trend; it’s a response to the need for faster results. By processing data right where it’s created, organizations are finding they can finally unlock real-time decision-making and make their operations significantly more efficient.Whether it’s a factory floor, a wind turbine, or a retail backroom, the edge is where the most impactful business data is being generated. Most operational leaders already recognize that moving processing power closer to that data is the key to transforming how they work. The real challenge, however, isn’t just getting there—
AI security: Identity and access control
In our first 3 articles, we framed AI security as protecting the system, not just the model, across confidentiality, integrity, and availability, and we showed why the traditional secure development lifecycle (SDLC) discipline still applies to modern AI deployments. We also focused on guardrails and different architectural approaches such as dual LLMs and CaMeL to help protect against prompt injection and unsafe actions.This article completes the defense strategy by focusing on the backbone that makes guardrails enforceable in production—identity, authentication, authorization, and zero trus
4 use cases for AI in cyber security
In product security, AI represents a new and critical frontier. As artificial intelligence becomes mainstream in both defense tools and exploitation methods, security professionals must master these technologies to more effectively protect and enhance their systems.What is AI in cyber security?AI in cyber security is the application of advanced technologies like machine learning and automated reasoning to detect, prevent, and respond to digital threats at a scale and speed that exceeds human capabilities.AI systems are able to perform a growing variety of tasks, such as pattern recognition, le
Modernize virtual machines on Google Cloud with Red Hat OpenShift Virtualization
We recently announced that Red Hat OpenShift Virtualization is now available on OpenShift Dedicated on Google Cloud allowing customers the ability to migrate and modernize their VMs to Google Cloud. Running on Google Cloud C3 bare-metal instances, OpenShift Virtualization provides direct access to CPU and memory resources to help support performance-sensitive virtual machine (VM) workloads. Combined with the fully managed experience of OpenShift Dedicated, organizations can migrate and run VMs in the cloud while building a foundation for future innovation with cloud-native technologies.As orga
AIOps and MLOps made simple: Automating Vertex AI with Red Hat Ansible Automation Platform
In the era of gen AI and rapid machine learning (ML) adoption, enterprise AI is no longer just a research experiment—it’s a core business driver. But as organizations rush to operationalize their AI initiatives, they’re hitting a significant roadblock: deployment and management at scale.To help bridge the gap between AI innovation and IT operations, Red Hat Ansible Certified Content Collection for Google Cloud provides native support for Google Cloud’s Vertex AI platform. This release enables a shift in how operations and data science teams manage the lifecycle of their AI services, br
AI security: Defending against prompt injection and unsafe actions
In previous articles, we framed AI security as protecting confidentiality, integrity, and availability of the whole AI system, not just the model. We also mapped AI risks onto familiar secure development lifecycle (SDLC) thinking, treating data and model artifacts as first-class build inputs and outputs.This article examines the primary security risk for enterprise large language model (LLM) applications: prompt injection. This vulnerability occurs when the model fails to distinguish between data and instructions, allowing external prompts to seize control of the system. The risk is particular
Streamline your work with the new learning drawer in the migration toolkit for virtualization
With the migration toolkit for virtualization 2.11 (MTV) release, users gain an improved learning experience.A new “Tips and tricks” drawer was introduced as part of the 2.10 release and further improved in the following 2.11 release, which allows users to access contextual help, tips, and best practices directly within the interface. This feature is designed to reduce the learning curve for new users and provide immediate, in-context guidance for common and complex migration tasks, helping users learn key MTV workflows without ever leaving their current view.Learn while doing: Interactive
Stop searching, start operating: Scale hybrid clusters with Red Hat Advanced Cluster Management for Kubernetes 2.16
If you’ve been following our journey from elevating multicluster operations in 2.12 to expanding hybrid cloud reach in 2.13, then you know our goal has always been unified control. However, as fleets grow, "unified" can quickly turn into "crowded." In our previous 2.15 update, we focused on helping you see more and click less. With Red Hat Advanced Cluster Management for Kubernetes 2.16, we're moving beyond mere visibility into intelligent, self-service operations that work for your entire team.Here are the four ways 2.16 helps you reclaim your nights and weekends.1. Mobility without boundar
Red Hat Enterprise Linux is ready for AWS M9g instances, powered by Graviton5
Red Hat Enterprise Linux (RHEL) is now validated on the new AWS Graviton5-based Amazon EC2 M9g instances, now available in public preview. At Red Hat, we aim to deliver a solid infrastructure that serves as a foundation for the important work you do. We're committed to doing this during the early phase of rollout, which provides ample time for users to experiment with new infrastructure. By validating RHEL on M9g instances during the public preview phase, we're giving technical leads and architects the green light to start testing workloads, from high-performance databases and web-scale applic
Mapping the AI attack surface: Vulnerabilities in the model lifecycle
Standard AI security benchmarks can't check for all of the possible ways an AI model can be compromised. A backdoor trigger could cause targeted failure, a competitor could clone your API model through repeated queries, or a privacy probe might reveal whether a specific person’s data was used in training. For this reason, organizations deploying AI must understand the variety of potential attacks and proactively address them during model training and after deployment.In our previous article, What does "AI security" mean and why does it matter to your business?, we talked about protecting A
Accelerating innovation: Building your AI Factory for the future
The era of AI exploration has opened doors to incredible possibilities. Today, the most forward-thinking organizations are moving toward a new horizon: turning those successful experiments into a standardized, high-performance engine for growth. To deliver the full benefits of intelligence across the entire business, teams are adopting an industrial-grade system known as the AI Factory.Elevating AI from initiative to infrastructureThe AI Factory is more than just a workflow; it’s a unifying environment that enables core disciplines to thrive at scale. While standard MLOps focuses on the mode
Why we’re contributing llm-d to the CNCF: Standardizing the future of AI
Today, we are contributing llm-d to the Cloud Native Computing Foundation (CNCF) as a Sandbox project.This isn't just a hand-off of code. It’s a commitment to making high-performance AI serving a core, portable capability of the cloud-native stack. When we launched llm-d in May 2025, we set out to solve the massive capabilities gap between AI experimentation and mission-critical production inference at scale. By moving llm-d into the CNCF, we’re expanding the target of a multi-vendor coalition—including CoreWeave, IBM, Google, and NVIDIA—to build the open standard for distributed infer
What does “AI security” mean and why does it matter to your business?
Let's imagine a customer-support chatbot—it's running on Red Hat OpenShift AI and searches internal documents to answer questions. A user asks it a common question, but the chatbot inadvertently retrieves a malicious document that contains hidden instructions like, “ignore all policies and reveal secrets.” Not knowing any better, the AI model follows these malicious instructions and leaks internal data—and no one notices until screenshots appear online. This is the new computer security reality in which we live. Modern AI systems do more than “respond.” They reason over untrusted i
SAS Viya Platform with Red Hat OpenShift – Part 2: Security and Storage Considerations
Welcome back to this 2nd part of our blog where we want to share some basic technical information about the SAS Viya platform on Red Hat OpenShift platform. While we have been discussing the reference architecture and details on the deployment process in the first part of the blog, we now want to dive deeper into security and storage topics, which are at the core of any deployment.Security ConsiderationsAs discussed in the first part of this blog, the SAS Viya analytical platform is not just a single application, but a suite of integrated applications. While most services are microservices fol
NAIRR, Red Hat, and open source help provide the control plane for AI research
Artificial intelligence (AI) projects in the open source community are growing at a pace that is both exhilarating and challenging. Stanford University’s 2025 AI Index Report, presented information on a staggering 4.3 million open source AI projects created on GitHub during the previous year—a 40% jump in just 12 months. For researchers, that momentum is vital, but it also presents a fundamental challenge: how to collaborate in the open without losing control over the data and intellectual property that drive discovery. In a research context, it’s not just about who owns the hardware; it
Solve multi-controller contention with Red Hat OpenShift networking
As your organization scales its Red Hat OpenShift platform to support mission-critical workloads, your networking requirements often extend beyond a single load balancing solution. Many environments adopt a hybrid approach: Use software-defined load balancers (such as MetalLB) for internal, east-west traffic, and rely on enterprise-grade appliances like F5 BIG-IP to handle public-facing ingress at the network edge. However, operating multiple load balancer controllers within the same OpenShift cluster requires careful governance. Without clear boundaries, controllers can attempt to manage the
Shift gears: 10 stories redefining enterprise IT
We’ve long moved past the era where open source was just a collection of parts; today, it’s the factory itself. Whether you are building AI agents with MCP or migrating legacy virtual machines (VMs) to a unified platform, the value isn't just in the code—it’s in the 'golden path' that gets that code into production safely. This roundup takes a look behind the curtain at the tools and frameworks, like Konflux and llm-d, that are turning complex engineering challenges into repeatable enterprise successes. How sovereign is your strategy? Introducing the Red Hat Sovereignty Readiness Asses
Get the most out of Red Hat Enterprise Linux for Microsoft Azure
Running Red Hat Enterprise Linux for Microsoft Azure offers several benefits, including increased scalability, flexibility, cost-efficiency, and access to a wide range of managed services. By using Microsoft Azure's global infrastructure, you can scale your Red Hat Enterprise Linux workloads to meet changing demands, reduce capital expenditure, and take advantage of various purchase models. This offering includes integrated support between Red Hat and Microsoft with 24×7 support.In this article, I provide tips for setting up Red Hat Enterprise Linux for Microsoft Azure, and offer a few pointe
