It was an eventful past month with Valve announcing the new Steam Machine, plenty of new Linux kernel activity, continued growth in Rust programming language adoption by open-source projects, many fun hardware benchmarks, and more. There were 283 original news articles on Phoronix the past month about Linux/open-source software and hardware, plus another 18 featured Linux hardware reviews / multi-page benchmark articles. Here is a look back at the most popular content over the past month...
Last month, we launched Red Hat Ansible Automation Platform 2.6 and introduced several new features, including an automation dashboard, a self-service automation portal, and the Ansible Lightspeed intelligent assistant. We hosted a follow-up webinar, What's new with Ansible Automation Platform 2.6, during which we received some great questions from the audience about how to install, migrate, and upgrade to the latest version. To help you prepare for and navigate the Ansible Automation Platform 2.6 release, we've compiled the top questions and their answers.

Installations, upgrades, and migrations
Confidential computing is needed to protect sensitive data not only when it is stored or transmitted, but also while it is actively being processed in memory - traditionally the most vulnerable phase. In this article, I demonstrate how to implement a secure runtime environment using AWS Nitro Enclaves for applications on EC2 instances running Red Hat Enterprise Linux (RHEL) 9.6+.

To fully understand the concepts, use cases, and justifications for confidential computing, read our previous articles. The hardware used to provide secure communication and certification is based on the AWS Nitro architecture.
As organizations race to productionize large language model (LLM) workloads, two powerful open-source projects have emerged to tackle the complexity of inference at scale: vLLM and llm-d. Are llm-d and vLLM on the same track, or are they steering toward different finishing lines?

vLLM: The High-Performance Inference Engine

vLLM is an enterprise-grade, open-source inference engine for LLMs. Its performance edge comes from innovations like:

- PagedAttention, which enables efficient KV cache management
- Speculative decoding support
- Tensor parallelism (TP) and multi-model support
- Integration with Hugging Face
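To make the PagedAttention item above concrete, here is a minimal Python sketch of the block-based KV cache idea behind it: rather than reserving one contiguous memory region per sequence, KV entries live in fixed-size blocks mapped through a per-sequence block table, so memory is allocated on demand and freed immediately when a sequence finishes. All names here are hypothetical; this is a toy allocator, not vLLM's actual implementation.

```python
BLOCK_SIZE = 4  # tokens per physical block (real systems use larger sizes, e.g. 16)

class BlockKVCache:
    """Toy block-table allocator illustrating the PagedAttention memory model."""

    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))  # pool of physical block ids
        self.tables = {}   # seq_id -> list of physical block ids (the block table)
        self.lengths = {}  # seq_id -> number of tokens stored so far

    def append(self, seq_id):
        """Reserve KV space for one more token; allocate a new block only when
        the sequence's current block is full (or it has no blocks yet)."""
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted")
            self.tables.setdefault(seq_id, []).append(self.free_blocks.pop())
        self.lengths[seq_id] = n + 1

    def free(self, seq_id):
        """Return a finished sequence's blocks to the free pool."""
        self.free_blocks.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = BlockKVCache(num_blocks=8)
for _ in range(6):
    cache.append("seq-A")          # 6 tokens fit in 2 blocks of 4
print(len(cache.tables["seq-A"]))  # -> 2
```

Because blocks are allocated lazily and reclaimed on `free()`, many sequences of unpredictable length can share one pool with little internal fragmentation, which is the property that lets vLLM batch aggressively.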