Most leaders I speak with are well past the hype cycle of AI. The question is no longer whether AI matters. The question is how to move from experimentation to production in a way that is security-focused, supportable, and repeatable across teams.

From where I sit—leading strategy and operations for AI Platform Core Components (AIPCC), an engineering function within Red Hat’s AI Engineering organization—that shift changes everything. The conversation moves from a tooling decision to an operating model decision. A strong AI platform is the foundation that helps teams ship AI-enabled capabilities.
Red Hat is proud to announce our strong results from the latest industry-standard MLPerf Inference v6.0 benchmark. Our submission includes four AI workloads (Whisper-Large-v3, GPT-OSS-120B, Qwen3-VL-235B-A22B, and Llama-2-70b) on NVIDIA (H200, B200, L40S) and AMD (MI350X) GPUs, running on Red Hat Enterprise Linux (RHEL) and Red Hat OpenShift AI with our open source inference stack: vLLM and llm-d. We achieved top scores across several configurations, including the highest offline throughput on B200 for GPT-OSS-120B, the leading H200 result on Whisper, and the top B200 submission on Qwen3-VL.
After KDBUS failed to make it into the mainline Linux kernel more than a decade ago as an in-kernel version of D-Bus, BUS1 was proposed as a clean-sheet design for in-kernel, capability-based inter-process communication (IPC). BUS1 never gained enough traction to reach the mainline kernel either, and many of the same developers went on to create Dbus-Broker as a more performant user-space D-Bus implementation. Now, in a surprising turn, a new version of BUS1 is being worked on for the Linux kernel...