Open-source News

GNU gettext Reaches Version 1.0 After 30+ Years In Development - Adds LLM Features

Phoronix - Thu, 01/29/2026 - 09:31
Sun Microsystems began developing gettext in the early 1990s, and the GNU Project began its own GNU gettext development in 1995 for this widely-used internationalization and localization system for multilingual software. While GNU gettext is used by countless open-source projects and has been adapted for many different programming languages, only an hour ago was GNU gettext 1.0 finally released...
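For readers unfamiliar with the project, here is a minimal sketch of a gettext-style lookup using Python's standard-library gettext module; the "myapp" domain and the locale directory are illustrative placeholders, not details from the 1.0 release.

    import gettext

    # Load the "myapp" message catalog from ./locale, preferring German.
    # fallback=True returns the original string when no compiled .mo
    # catalog is found, so this sketch runs even without translations.
    translation = gettext.translation(
        "myapp", localedir="locale", languages=["de"], fallback=True
    )
    _ = translation.gettext

    # Strings wrapped in _() are looked up in the catalog at runtime.
    print(_("Hello, world!"))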

How Banco do Brasil uses hyperautomation and platform engineering to drive efficiency

Red Hat News - Thu, 01/29/2026 - 08:00
At the recent OpenShift Commons gathering in Atlanta, we had the opportunity to hear from Gustavo Fiuza, IT leader, and Welton Felipe, DevOps engineer, about the remarkable digital transformation at Banco do Brasil. The second-largest bank in Latin America, Banco do Brasil operates at massive scale, serving 87 million customers and processing over 900 million business transactions daily. We learned how the bank evolved from a siloed community Kubernetes environment to a highly efficient, hybrid multicloud platform powered by Red Hat OpenShift. Scalability through capabilities and hyperautomation: A primary tak...

From if to how: A year of post-quantum reality

Red Hat News - Thu, 01/29/2026 - 08:00
For the last five years, post-quantum cryptography (PQC) has largely been discussed as a research topic. It was a question of if: if the standards are ratified, if the algorithms perform, if the threat is real. In 2025, Red Hat changed the conversation. We stopped asking “if” and started defining “how.” This past year, we moved PQC out of the laboratory and into the operating system (OS). It wasn’t just about upgrading libraries; it was about pushing the entire modern software supply chain. We found that while the foundation is ready, the ecosystem has a long way to go. Here is the story...

Context as architecture: A practical look at retrieval-augmented generation

Red Hat News - Thu, 01/29/2026 - 08:00
In a previous article, The strategic choice: Making sense of LLM customization, we explored AI prompting as the first step in adapting large language models (LLMs) to real-world use. Prompting changes how an AI model responds in terms of tone, structure, and conversational behavior without changing what the model knows. That strategy is effective until the model requires specific information it did not encounter during its initial training. At that point, the limitation is no longer conversational; it is architectural. Retrieval-augmented generation (RAG) helps address that limitation. Not by ma...
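To make the pattern concrete, here is a deliberately toy sketch of the RAG loop (an illustration, not Red Hat's implementation): retrieve the passages most relevant to a question, then splice them into the prompt so the model answers from supplied context rather than from its training data alone. The word-overlap scoring below stands in for a real embedding-based vector search.

    # Toy corpus standing in for an external knowledge base.
    DOCUMENTS = [
        "OpenShift is Red Hat's Kubernetes-based application platform.",
        "GNU gettext is a system for internationalizing software.",
        "Post-quantum cryptography resists attacks by quantum computers.",
    ]

    def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
        # Rank documents by naive word overlap with the question; a real
        # system would use embeddings and a vector index instead.
        q_words = set(question.lower().split())
        return sorted(
            docs,
            key=lambda d: len(q_words & set(d.lower().split())),
            reverse=True,
        )[:k]

    def build_prompt(question: str) -> str:
        # Splice the retrieved passages into the prompt as grounding context.
        context = "\n".join(f"- {d}" for d in retrieve(question, DOCUMENTS))
        return (
            "Answer using only the context below.\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}"
        )

    print(build_prompt("What is OpenShift?"))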
