Humanoid robots are having a moment. Every major tech conference features new demos—robots walking, grasping, responding to voice commands, and navigating crowded spaces. The hardware is impressive and the AI is advancing rapidly, but what happens after the demo?

The answer matters because humanoid robots are not just AI systems; they are meant to be long-lived, safety-critical machines that operate continuously in human environments. Unfortunately, the gap between a compelling demonstration and a reliable production deployment is where many robotics programs stall.

Red Hat and Intel a
Model Context Protocol (MCP) has moved fast, and thousands of MCP servers now exist across the ecosystem. What started as an open source project from Anthropic in late 2024 is now governed by the Agentic AI Foundation under the Linux Foundation, with over 140 member organizations. Red Hat joined the AAIF as a Gold Member earlier this year, supporting the foundation's work to advance open standards for agentic AI. At this year's MCP Dev Summit in New York, over 1,200 attendees gathered to discuss the protocol's evolution and how to run MCP in production at scale.

Thousands of MCP serve
From detection to analysis to remediation, AI is reshaping every layer of IT operations. It can find the problem, write the fix, and run it. But the same AI accelerating your team's capabilities is also accelerating environmental complexity: more signals, more telemetry, and more tools, all moving faster than before. The question enterprises are asking now isn't whether AI can act, but how to ensure its actions are governed, repeatable, and safe.

It starts with a simple distinction: knowing what to do and safely doing it are different problems. AI is great at providing the recommendation for th