Open-source News

GraalVM CE 22.1 Released With Performance Improvements, Apple Silicon Support

Phoronix - Tue, 04/26/2022 - 21:00
Oracle this morning published GraalVM Community Edition 22.1, a feature release of the high-performance Java/JDK distribution that also provides runtimes for JavaScript, Python, and other languages...

Linux 5.19 Looks Like It Will Be The Base Requirement For Intel Arc Graphics / Alchemist

Phoronix - Tue, 04/26/2022 - 19:13
While Intel launched the Arc A-Series Mobile Graphics at the end of Q1, no laptops with these GPUs are yet available, at least in major US markets. That makes the current Linux driver support level hard to assess, and Intel has offered no clear communication on the matter. Intel has been working on upstream DG2/Alchemist support for a while, but it now looks like the Linux 5.19 kernel due this summer will be the base version requirement for DG2/Alchemist-based Intel GPUs...

Concerns Raised Over The "New" NTFS Linux Driver That Merged Last Year

Phoronix - Tue, 04/26/2022 - 18:25
Back in 2020, file-system driver provider Paragon Software announced it wanted to upstream its NTFS driver into the Linux kernel. The driver was previously a proprietary, commercial offering, but given the state of NTFS these days, the company moved to upstream it with full read/write support and other features not found in the kernel's existing NTFS driver. After many rounds of review, the new driver was finally merged into Linux 5.15 last year. Sadly, less than one year later, concerns have been raised that the driver is already effectively orphaned and unmaintained...

Arm Scalable Matrix Extension Readied Ahead Of Linux 5.19

Phoronix - Tue, 04/26/2022 - 17:52
It looks like Linux 5.19 will have all the base preparations in place for Arm Scalable Matrix Extension (SME) support...

/dev/random + /dev/urandom Unification May Be Revisited In The Future, Blocker Addressed

Phoronix - Tue, 04/26/2022 - 17:30
Patches to make /dev/urandom and /dev/random behave exactly the same were originally attempted for Linux 5.18. They were dropped, though, due to insufficient randomness at boot on platforms such as 32-bit Arm, Motorola m68k, Microblaze, Xtensa, and others. A change then went in to opportunistically initialize /dev/random as a best-effort approach, which at least works nicely on x86/x86_64. The good news is that the original unification effort may be revisited in the future now that the original blocking issue has been addressed...

VMware Lands SVGAv3 In Mesa 22.2 For Their Virtual Graphics Device

Phoronix - Tue, 04/26/2022 - 17:09
VMware has merged support for SVGAv3 into Mesa 22.2. SVGAv3 is the latest revision of the company's virtual graphics device, which provides 3D acceleration to guest virtual machines under VMware's virtualization products...

How open source and cloud-native technologies are modernizing API strategy

opensource.com - Tue, 04/26/2022 - 15:00
By Javier Perez

I recently had the opportunity to speak at different events on the topic of API strategy for the latest open source software and cloud-native technologies, and these were good sessions that received positive feedback. In an unusual move for me, I put together the slides first and wrote the article afterward. The good news is that with this approach, I benefited from previous discussions and feedback before I started writing. What makes this topic unique is that it's covered not from the usual API strategy talking points, but rather from the perspective of the latest technologies and how the growth of open source software and cloud-native applications is shaping API strategy.

I'll start by discussing innovation. Virtually all of the latest software innovations are either open source or built on open source software. Augmented reality, virtual reality, autonomous cars, AI, machine learning (ML), deep learning (DL), blockchain, and more are technologies built with open source software that use and integrate with millions of APIs.

Software development today involves the creation and consumption of APIs. Everything is connected with APIs, and in some organizations there's even API sprawl: the unchecked creation of APIs without control or standardization.

Technology stacks and cloud-native applications

In modern software development, there is the concept of stacks. Developers and organizations have so many options that they can pick and choose a combination of technologies to create their own stack, then train or hire what are known as full-stack developers to work on it. A typical stack consists mostly of open source software: Linux, a programming language, databases, streaming technology, runtimes, and DevOps tooling, all using and integrating with APIs.

Building on technology stacks, there are cloud-native applications, which refer to container-based applications. Today, there are many cloud-native options across all technologies; the Cloud Native Computing Foundation (CNCF) landscape is a sample of the available cloud-native ecosystem.

When organizations move from applications in a handful of containers to applications in dozens or even hundreds of containers, they need help managing and orchestrating all that infrastructure. This is where Kubernetes comes into play. Kubernetes has become one of the most popular open source projects of our time and the de facto infrastructure for cloud-native applications. It has also led to a new and growing ecosystem of Kubernetes operators: most popular software now has its own operator to make it easier to create, configure, and manage in Kubernetes environments, and, of course, operators integrate with Kubernetes APIs. Many data technologies, in particular, now have Kubernetes operators that facilitate and automate running stateful applications on Kubernetes.
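
To make that integration concrete, here is a minimal sketch, not from the article, of talking to the Kubernetes API the way an operator does. It assumes the official kubernetes Python client and a configured cluster; the mydatabases custom resource is purely hypothetical.

```python
# Minimal sketch: calling the Kubernetes API the way an operator does.
# Assumes the official client (pip install kubernetes) and a cluster
# reachable via ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod

# Core API: list the pods an operator might be reconciling.
core = client.CoreV1Api()
for pod in core.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)

# Operators typically watch custom resources; "mydatabases" under the
# group "example.com" is a hypothetical CRD used only for illustration.
custom = client.CustomObjectsApi()
dbs = custom.list_namespaced_custom_object(
    group="example.com", version="v1", namespace="default", plural="mydatabases"
)
for db in dbs.get("items", []):
    print(db["metadata"]["name"])
```

A real operator wraps calls like these in a reconciliation loop that continuously drives the cluster toward the state declared in its custom resources.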

What is the API management layer?

A cloud-native environment also has its own stack: cloud infrastructure, operating system, container orchestration, containers, operators, application code, and APIs. All of this supports a software solution that integrates and exposes data to mobile devices, web applications, or other services, including IoT devices. Regardless of the combination of technologies, everything should be protected with API management platform functionality. The API management platform is the layer on top of cloud-native applications that must be protected as data and APIs are exposed outside organizations' networks.
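
As a toy illustration of the kind of gatekeeping this layer performs, here is a sketch of my own, assuming Flask and an invented X-API-Key scheme; real API management platforms add rate limiting, quotas, analytics, key issuance, and much more.

```python
# Toy sketch of API-management-style gatekeeping in front of an API.
# Assumes Flask (pip install flask); the key store and endpoint are
# hypothetical examples, not part of any real platform.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
VALID_KEYS = {"demo-key-123"}  # invented key store for illustration


@app.before_request
def require_api_key():
    # Reject any request that does not present a known API key.
    if request.headers.get("X-API-Key") not in VALID_KEYS:
        abort(401)


@app.route("/api/v1/orders")
def list_orders():
    # Application code runs only for authenticated callers.
    return jsonify([{"id": 1, "status": "shipped"}])


if __name__ == "__main__":
    app.run(port=8080)
```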

And, speaking of technology architectures, it's highly important that the API management platform have flexible deployment options. The strategy and design should always include portability: the ability to move and deploy on different architectures (e.g., PaaS, on-premises, hybrid cloud, public cloud, or multi-cloud).

[ Try API management for developers: Red Hat OpenShift API Management ]

3 API strategies to consider for cloud-native technologies

When designing an API strategy for the latest technologies, the many options can be summarized in three major areas. The first is a modernization strategy: breaking monolithic applications into services, going cloud-native, and, of course, integrating with mission-critical applications on mainframes. For this strategy, secured APIs are built and maintained. The second area is what is known as headless architecture: adding features and functionality to APIs first and then optionally providing that functionality through a user interface (see the sketch below). This is a granular architecture designed with microservices, or based entirely on APIs, to facilitate integration and automation. The third area is a focus on new technologies, from creating API ecosystems that attract customers and partners who contribute to and consume public APIs, to selecting technology stacks and integrating them with new technologies such as AI, serverless computing, and edge computing. Above all, every API strategy must include API management and a security mindset.
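
To picture the headless pattern, here is a minimal sketch of my own, with invented names: the business capability is plain code exposed through a JSON API first, and any user interface, whether web, mobile, or another service, is an optional consumer added later.

```python
# Minimal "headless" sketch: the capability ships as an API, not a UI.
# Assumes Flask; quote_price and the route are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)


def quote_price(quantity: int, unit_price: float = 9.99) -> dict:
    # The business capability itself: plain logic with no UI attached.
    return {"quantity": quantity, "total": round(quantity * unit_price, 2)}


@app.route("/api/v1/quote/<int:quantity>")
def quote(quantity: int):
    # The API is the product; web or mobile front ends come later, if at all.
    return jsonify(quote_price(quantity))


if __name__ == "__main__":
    app.run(port=8080)
```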

API management platforms should include the full lifecycle functionality for API design, testing, and security. Additional features, such as analytics, business intelligence, and an API portal, allow organizations to leverage DevOps and full lifecycle management for the development, testing, publishing, and consumption of APIs.

A couple of other examples of today's latest technologies, and how knowledge and use of them can be part of an API strategy, include the following. The first is DevOps integration: there is a variety of commercial and open source options for DevOps automation, with continuous integration and continuous delivery tooling as key pieces. The other very relevant space is data and AI technologies, a growing area with thousands of options for every stage of the AI development lifecycle, from data collection and organization to data analysis and the creation and training of ML and DL models. The final step in that lifecycle should include automated deployment and maintenance of those ML and DL models. All of these steps should be combined with full integration of the different technologies via APIs, and, for external integrations including data sources, with the important layer of an API management platform.

Open source and the API management layer

In summary, with all these new technologies, from open source stacks and DevOps tooling to AI, the common layer of protection and management is the API management layer. There should be a security-first API strategy driven by API management. It's important to remember that, in this day and age, APIs are everywhere, and modern technology stacks are integrated via APIs, with data technologies (databases and storage), DevOps, and AI leading the pack. Don't forget to design and manage APIs with security in mind. Whether the selected API strategy targets modernization, a headless architecture, or new technology, it must go hand in hand with your technology choices and vision for the future.

[ Take the free online course: Deploying containerized applications ]

5 agile mistakes I've made and how to solve them

opensource.com - Tue, 04/26/2022 - 15:00
By Kelsea Zhang

Agile used to have a stigma of being "only suitable for small teams and small project management." It is now a well-established discipline used by software development teams worldwide with great success. But does agile really deliver value? Well, that depends on how you use it.

My teams and I have used agile since I started in tech. It hasn't always been easy, and there's been a lot of learning along the way. The best way to learn is to make mistakes, so to help you in your own agile journey, here are five agile mistakes I've made.

1. Mistake: Agile only happens in development teams

Here's what happens when you restrict agile to just your development team. Your business team writes requirements for a project, and that goes to the development team, with a deadline. In this case, the development team isn't directly responsible for business goals.

There's very little communication between teams, let alone negotiation. No one questions the demands made by the business team, or whether there's a better way to meet the same business goal.

This can be discouraging to development teams, too. When developers are only responsible for filling in the code to make the machine work, they're disconnected from the business.

The final product becomes a monster, lacking reasonable abstraction and design.

Solution: Spread agile through your organization. Let everyone benefit from it in whatever way is appropriate for their department, but most importantly, let it unify everyone's goals.

2. Mistake: Automated testing is too much work to set up

The role of automated testing, especially Test Driven Development (TDD), is often undervalued by the IT industry. In my opinion, automated testing is the cornerstone of maintainable and high-quality software, and is even more important than production code.

However, most teams today either lack the ability to automate testing or have the ability but forgo it because of time constraints. Without the protection of automated testing, programmers can't continuously refactor bad code.

This is because no one can predict whether changing a few lines of code will cause new bugs. Without continuous refactoring, you increase your technical debt, which reduces your responsiveness to the demands of your business units.

Manual testing is slow, and it forces you to either sacrifice quality by testing just the changed parts (which can be difficult) or lengthen the regression testing time. If the testing time is too long, you have to batch changes to reduce the number of test runs.

Suddenly, you're not agile any more. You've converted to Waterfall.

Solution: The key to automated testing is to have developers write and run the tests, instead of hiring more testers to write scripts. Tests written by a separate testing team run slowly and are therefore slow to produce feedback for programmers.

What's needed to improve code quality is rapid feedback on the program. The earlier an automated test is written, and the faster it's run, the more conducive it is for programmers to get feedback in a timely manner.

The fastest way to write automated tests is TDD. Write tests before you write the production code. The fastest way to run automated tests is unit testing.
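
As a minimal sketch of that rhythm, here is an example of my own using only Python's standard unittest module and a hypothetical fizzbuzz function: in TDD, the test class is written first and fails, and the production function is then written just to make it pass.

```python
# TDD sketch: the tests come first, the production code follows.
# Standard library only; fizzbuzz is a hypothetical example, not
# code from the article.
import unittest


def fizzbuzz(n: int) -> str:
    # Production code, written only after (and because of) the tests below.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


class TestFizzbuzz(unittest.TestCase):
    # In TDD, this class exists before fizzbuzz and initially fails.
    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiple_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiple_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_other_number(self):
        self.assertEqual(fizzbuzz(7), "7")


if __name__ == "__main__":
    unittest.main()  # fast unit tests keep the feedback loop short
```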

3. Mistake: As long as it works, you can ignore code quality

People often say, "We're running out of time, just finish it."

They don't care about quality. Many people think that quality can be sacrificed for efficiency, so you end up writing low-quality code because you don't have time for anything else. But low-quality code doesn't deliver the efficiency you hoped for, either.

Unless your program is as simple as a few lines of code, low-quality code will hold you back as code complexity increases. Software is called "soft" because we expect it to be easy to change. Low-quality code becomes increasingly difficult to change because a small change can lead to thousands of new bugs.

Solution: The only way to improve code quality is to improve your skills. Most people can't write high-quality code in one sitting. That's why you need constant refactoring (and you must implement automated testing to support it)!

4. Mistake: Employees should specialize in just one thing

It feels natural to divide personnel into specialized teams. One employee might belong to the Android group, another to the iOS group, another to the backend group, and so on. The danger is that when teams change frequently, specialization is difficult to sustain.

Solution: Many agile practices are built around teams, such as team velocity, retrospective improvement, and handling staff turnover. Agile practices revolve around teams and around people. Help your team members diversify, learn new skills, and share knowledge.

5. Mistake: Writing requirements takes too much time

As the saying goes, "Garbage in, garbage out," and a formal software requirement is the "input" of software development. Good software cannot be produced without clear requirements.

In the tech industry, I have found that good product owners are more scarce than good programmers. After all, no matter how poorly a programmer writes code, it usually at least runs (or else it doesn't ship).

For most product managers, there is no standard to measure the efficacy of their product definitions and requirements. Here are a few of the issues I've seen over the years:

  • Some product owners are devoted to designing solutions while ignoring user value. This results in a bunch of costly but useless functions.

  • Some product managers can only tell big stories, and can't split requirements into small, manageable pieces, resulting in large delivery batches and reduced agility.

  • Some product owners have incomplete requirement analysis, resulting in bug after bug.

  • Sometimes product owners don't prioritize requirements, which leads to teams wasting a lot of time on low-value items.

Solution: Create clear, concise, and manageable requirements to help guide development.

Make mistakes

I've given you five tips on mistakes to avoid. Don't worry, though: there are still plenty of mistakes left to make! Take agile to your organization, and don't be afraid of enduring a few mistakes for the benefit of making your teams better.

Once you've taken the inevitable missteps, you'll know what to do differently the next time around. Agility is designed to survive mistakes. That's one of its strengths: it can adapt. So get started with agile, be ready to adapt, and make better software!

RADV Exploring "A Driver On The GPU" In Moving More Vulkan Tasks To The GPU

Phoronix - Tue, 04/26/2022 - 07:32
In order to fully support Direct3D indirect drawing, allowing more rendering tasks to be moved from the CPU to the GPU, the open-source RADV Radeon Vulkan driver is working on experimental code for effectively hosting "a driver on the GPU"...
