The upcoming Linux 7.1 kernel cycle is set to retire UDP-Lite support. The UDP-Lite protocol allows partial checksums, whereby packets with potentially damaged/corrupted payloads are still delivered to the application. UDP-Lite has been supported since the Linux 2.6.20 days, but the kernel is now set to retire it given breakage that has persisted for years, and cleaning up the networking code can yield a performance advantage for non-UDP-Lite users...
A four-year-old optimization idea for the RADV driver was scratched off the TODO list last week ahead of next quarter's Mesa 26.1 release...
Last week yet more AMDGPU kernel graphics driver updates were submitted to DRM-Next ahead of the Linux 7.1 merge window happening in April...
In just a few short years, AI technology has evolved from basic chat completions to autonomous, long-running agents. This poses a challenge for IT teams who need to enable their builders to innovate while also providing guardrails and controls to reduce enterprise risk. More than just chatbots or assistants, agents are now autonomous entities capable of operating over extended horizons, crafting their own sub-agents, and using professional tools to complete multi-step plans. But as agents leave the developer's laptop and start interacting with production data and external APIs, freedom without guardrails becomes a liability...
The AI agent world is messy. Teams are reaching for LangChain, LlamaIndex, CrewAI, AutoGen, or building custom solutions from scratch. Good. That's how it should be during the creative phase. But once an agent leaves a developer's laptop and starts talking to production data, calling external application programming interfaces (APIs), or running on shared infrastructure, freedom without guardrails stops being a feature and starts being a liability. We've watched the industry go through waves: model APIs (such as chat completions), agentic APIs (such as assistants and later the OpenAI Responses API)...
Navigating the complexities of AI infrastructure shouldn’t be a barrier to innovation. Red Hat has completed the first phase of AI Cloud Ready status for the NVIDIA Cloud Partner (NCP) program to help address the increasing complexities of AI. NCPs build and operate GPU-accelerated AI platforms to deliver and support full-stack, AI-optimized offerings based on the NCP software reference guide. This reference architecture is a proven blueprint for the full stack, including GPU servers, networking, storage, and software, enabling NCPs to deliver AI capacity as reliable, consistent services...
Following the successful launch of the Red Hat AI Factory with NVIDIA, Red Hat is pleased to announce the latest update in our collaboration with NVIDIA – delivering Day 0 support for the NVIDIA Nemotron open model family on the Red Hat AI Factory with NVIDIA. With this effort, we are providing a fully optimized, open source pathway for enterprise-grade generative AI.

From infrastructure to intelligence: accelerating mainstream enterprise AI adoption

The Red Hat AI Factory with NVIDIA was designed to provide a turnkey environment for developing and deploying AI at scale. Today’s announcement...