Open-source News

Intel Prepares Linux For ATS-M No-Display Server GPUs

Phoronix - Tue, 03/29/2022 - 18:08
Earlier this year, with the Intel Media Driver 22, there was enablement of "ATS-M" alongside references to "Arctic Sound Mainstream". Now the Linux kernel patches have arrived with the changes needed on that end for this DG2-based discrete GPU, and they sum up ATS-M as a display-less GPU for servers...

AMD Starts Working On New Sound Code For Upcoming Platforms With Linux 5.18

Phoronix - Tue, 03/29/2022 - 17:50
The sound subsystem updates were sent in last week for the ongoing Linux 5.18 merge window. There is a lot of new audio hardware enablement and other improvements to find with this sound pull for the new kernel...

Linux 5.18 Switches From Zero Length Arrays To Flexible Array Members

Phoronix - Tue, 03/29/2022 - 17:19
Back in 2020 the Linux kernel tried adding flexible array members to replace zero-length arrays, but the code was reverted shortly thereafter. For Linux 5.18, the tree-wide change of replacing zero-length arrays with C99 flexible array members was merged and appears to be all in good shape this time...

Virtual Kubernetes clusters: A new model for multitenancy

opensource.com - Tue, 03/29/2022 - 15:00
By Lukas Gentele

If you speak to people running Kubernetes in production, one of the complaints you'll often hear is how difficult multitenancy is. Organizations use two main models to share Kubernetes clusters with multiple tenants, but both present issues. The models are:

  • Namespace-based multitenancy
  • Cluster-based multitenancy

The first common multitenancy model is based on namespace isolation, where individual tenants (a team developing a microservice, for example) are limited to using one or more namespaces in the cluster. While this model can work for some teams, it has flaws. First, restricting team members to accessing resources only in namespaces means they can't administer global objects in the cluster, such as custom resource definitions (CRDs). This is a big problem for teams working with CRDs as part of their applications or in a dependency (for example, building on top of Kubeflow or Argo Pipelines).
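The CRD limitation follows from how Kubernetes scopes its RBAC objects: a namespace-scoped Role can only grant access to namespaced resources, so cluster-scoped objects like CRDs are out of reach for tenants who only hold Roles. A minimal sketch of that rule (the resource scopes listed here are illustrative, not a complete Kubernetes inventory):

```python
# Sketch: why namespace-scoped RBAC can't cover cluster-scoped
# objects such as CustomResourceDefinitions. Scope lists below are
# a small illustrative subset of real Kubernetes resources.

NAMESPACED = {"pods", "deployments", "services", "configmaps"}
CLUSTER_SCOPED = {"customresourcedefinitions", "clusterroles", "nodes"}

def grantable_via_namespaced_role(resource: str) -> bool:
    """A Role (namespace-scoped) can only grant access to namespaced
    resources; cluster-scoped objects need a ClusterRole, which
    tenants in this model don't receive."""
    if resource in CLUSTER_SCOPED:
        return False
    return resource in NAMESPACED

print(grantable_via_namespaced_role("deployments"))                # True
print(grantable_via_namespaced_role("customresourcedefinitions"))  # False
```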


Second, a much bigger long-term maintenance issue is the need to constantly add exceptions to the namespace isolation rules. For example, when using network policies to lock down individual namespaces, admins likely find that some teams eventually need to run multiple microservices that communicate with each other. The cluster administrators somehow need to add exceptions for these cases, track them, and manage all these special cases. Of course, the complexity grows as time passes and more teams start to onboard to Kubernetes.
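To make the bookkeeping problem concrete, here is a toy model of those exceptions: every cross-namespace dependency becomes one more allow-rule the admins must create and track. The rule shape is deliberately simplified; real NetworkPolicy objects are far more verbose.

```python
# Toy model of the exception-tracking burden described above: each
# pair of namespaces that must communicate adds one allow-rule for
# the cluster admins to maintain. (Not a real NetworkPolicy schema.)

def allow_rule(src: str, dst: str) -> dict:
    # Stand-in for a NetworkPolicy letting pods in `src` reach `dst`.
    return {"kind": "NetworkPolicy", "namespace": dst,
            "from_namespace": src}

# Hypothetical teams: every new dependency is another special case.
deps = [("orders", "payments"), ("orders", "inventory"),
        ("payments", "audit")]
rules = [allow_rule(s, d) for s, d in deps]
print(len(rules))  # 3 exceptions to track already
```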

The other standard multitenancy model, using isolation at the cluster level, is even more problematic. In this scenario, each team gets its own cluster, or possibly even multiple clusters (dev, test, UAT, staging, etc.). The immediate problem with using cluster isolation is ending up with many clusters to manage, which can be a massive headache. And all of those clusters need expensive cloud computing resources, even if no one is actively using them, such as at night or over the weekend. As Holly Cummins points out in her KubeCon 2021 keynote, this explosion of clusters has a dangerous impact on the environment.
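Some back-of-the-envelope arithmetic shows how quickly the idle-cluster cost adds up. The hourly rate and team counts below are invented for illustration; plug in your own numbers.

```python
# Hypothetical cost of per-team clusters that run around the clock
# versus the hours they are actually used. All figures are made up.
HOURLY_RATE = 0.50           # $/hour per cluster (hypothetical)
TEAMS, ENVS = 10, 3          # e.g. dev, test, staging per team

clusters = TEAMS * ENVS
always_on = clusters * HOURLY_RATE * 24 * 7       # full week
business_hours = clusters * HOURLY_RATE * 8 * 5   # 8h x 5d of real use

print(f"weekly spend: ${always_on:.2f}, "
      f"of which ~${always_on - business_hours:.2f} covers idle time")
```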

Until recently, cluster administrators had to choose between these two unsatisfying models, picking the one that better fits their use case and budget. However, there is a relatively new concept in Kubernetes called virtual clusters, which is a better fit for many use cases.

What are virtual clusters?

A virtual cluster is a shared Kubernetes cluster that appears to the tenant as a dedicated cluster. In 2020, our team at Loft Labs released vcluster, an open source implementation of virtual Kubernetes clusters.

With vcluster, engineers can provision virtual clusters on top of shared Kubernetes clusters. These virtual clusters run inside the underlying cluster's regular namespaces. So, an admin could spin up virtual clusters and hand them out to tenants, or—if an organization already uses namespace-based multitenancy, but users are restricted to a single namespace—tenant users could spin up these virtual clusters themselves inside their namespace.

This combines the best of both multitenancy approaches described above: Tenants are restricted to a single namespace with no exceptions needed because they have full control inside the virtual cluster but very restricted access outside the virtual cluster.

Like a cluster admin, the user has full control inside a virtual cluster. This allows them to do anything within the virtual cluster without impacting other tenants on the underlying shared host cluster. Behind the scenes, vcluster accomplishes this by running a Kubernetes API server and some other components in a pod within the namespace on the host cluster. The user sends requests to that virtual cluster API server inside their namespace instead of the underlying cluster's API server. The cluster state of the virtual cluster is also entirely separate from the underlying cluster. Resources like Deployments or Ingresses created inside the virtual cluster exist only in the virtual cluster's data store and are not persisted in the underlying cluster's etcd.
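The key trick is that objects which must actually run on the host (like pods) get synced down into the single host namespace under collision-free names. A simplified sketch of that name-translation idea follows; the exact convention is a vcluster implementation detail and may differ from this.

```python
# Simplified sketch of the syncer's renaming idea: a pod created in
# the virtual cluster is rewritten into one host namespace under a
# name that encodes its virtual origin. Illustrative scheme only.

def host_name(vcluster: str, v_namespace: str, v_name: str) -> str:
    return f"{v_name}-x-{v_namespace}-x-{vcluster}"

# Two tenants can both run `web` in their own `default` namespace
# without clashing on the shared host cluster:
a = host_name("team-a", "default", "web")
b = host_name("team-b", "default", "web")
print(a)  # web-x-default-x-team-a
print(b)  # web-x-default-x-team-b
```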

This architecture offers significant benefits over the namespace isolation and cluster isolation models:

  1. Since the user is an administrator in their virtual cluster, they can manage cluster-wide objects like CRDs, which overcomes that big limitation of namespace isolation.
  2. Since users communicate with their own API servers, their traffic is more isolated than in a normal shared cluster. This also provides federation, which can help with scaling API requests in high-traffic clusters.
  3. Virtual clusters are very fast to provision and tear down again, so users can benefit from using truly ephemeral environments and potentially spin up many of them if needed.

[ Learn what it takes to develop cloud-native applications using modern tools. Download the eBook Kubernetes-native microservices with Quarkus and MicroProfile. ] 

How to use virtual clusters

There are many use cases for virtual clusters, but here are a few that we've seen most vcluster users adopt.

Development environments

Provisioning and managing dev environments is currently the most popular use case for vcluster. Developers writing services that run in Kubernetes clusters need somewhere to run their applications while they're in development. While it's possible to use tools like Docker Compose to orchestrate containers for dev environments, developers who code against real Kubernetes clusters get an experience much closer to how their services run in production.

Another option for local development is using a tool like Minikube or Docker Desktop to provision Kubernetes clusters, but that has some downsides. Developers must own and maintain that local cluster stack, which is a burden and a huge time sink. Also, those local clusters may need a lot of computing power, which is difficult on local dev machines. We all know how hot laptops can get during development, and it may not be a good idea to add Kubernetes to the mix.

Running virtual clusters as dev environments in a shared dev cluster addresses those concerns. In addition, as mentioned above, vclusters are quick to provision and delete. Admins can remove a vcluster just by deleting the underlying host namespace with a single kubectl command, or by running the vcluster delete command provided with the command-line interface tool. The speed of infrastructure and tooling in dev workflows is critical because improving cycle times for developers can increase their productivity and happiness.

CI/CD pipelines

Continuous integration/continuous delivery (CI/CD) is another strong use case for virtual clusters. Typically, pipelines provision systems under test (SUTs) to run test suites against. Often, teams want those to be fresh systems with no accumulated cruft that may interfere with testing. Teams running long pipelines with many tests may be provisioning and destroying SUTs multiple times in a test run. If you've spent much time provisioning clusters, you have probably noticed that spinning up a Kubernetes cluster is often a time-consuming operation. Even in the most sophisticated public clouds, it can take more than 20 minutes.

Virtual clusters are fast and easy to provision with vcluster. When running the vcluster create command to provision a new virtual cluster, all that's involved behind the scenes is running a Helm chart and installing a few pods. It's an operation that usually takes just a few seconds. Anyone who runs long test suites knows that any time shaved off the process can make a huge difference in how quickly the QA team and engineers receive feedback.
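Rough arithmetic on the time saved per pipeline run illustrates why this matters. The stage count is hypothetical; the 20-minute and few-second figures come from the paragraphs above.

```python
# Hypothetical pipeline: each test stage gets a fresh system under
# test. Compare full-cluster provisioning (~20 min, per the cloud
# figure above) with vcluster provisioning (~10 s, "a few seconds").
PROVISION_FULL_CLUSTER_MIN = 20
PROVISION_VCLUSTER_MIN = 10 / 60
SUTS_PER_RUN = 5  # made-up number of fresh environments per run

full = PROVISION_FULL_CLUSTER_MIN * SUTS_PER_RUN
virtual = PROVISION_VCLUSTER_MIN * SUTS_PER_RUN
print(f"{full:.0f} min vs {virtual:.1f} min of provisioning per run")
```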

In addition, organizations could use vcluster's speed to improve any other processes where lots of clusters are provisioned, like creating environments for workshops or customer training.

Testing different Kubernetes versions

As mentioned earlier, vcluster runs a Kubernetes API server in the underlying host namespace. It uses the K3s (Lightweight Kubernetes) API server by default, but you can also use k0s, the Amazon EKS Distro, or the regular upstream Kubernetes API server. When you provision a vcluster, you can specify the version of Kubernetes to run it with, which opens up many possibilities. You could:

  • Run a newer Kubernetes version in the virtual cluster to get a look at how an app will behave against the newer API server.
  • Run multiple virtual clusters with different versions of Kubernetes to test an operator in a set of different Kubernetes distros and versions while developing or during end-to-end testing.
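Building such a version matrix could be scripted. The sketch below only assembles the command lines; the `--kubernetes-version` flag name and the naming scheme are assumptions here, so check `vcluster create --help` for the real options on your version.

```python
# Sketch: generate `vcluster create` invocations for an e2e test
# matrix across Kubernetes versions. Flag names are assumed, not
# taken from vcluster's documented CLI; verify before use.
VERSIONS = ["v1.22", "v1.23", "v1.24"]

def create_command(name: str, version: str) -> list[str]:
    suffix = version.replace(".", "-")
    return ["vcluster", "create", f"{name}-{suffix}",
            "--kubernetes-version", version]

matrix = [create_command("e2e", v) for v in VERSIONS]
print(len(matrix))  # 3
```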
Learn more

There may not be a perfect solution for Kubernetes multitenancy, but virtual clusters address many issues with current tenancy models. Vcluster's speed and ease of use make it a great candidate for many scenarios where you would prefer to use a shared cluster but also wish to give users the flexibility to administer their clusters. There are many use cases for vcluster beyond the ones described in this article.

To learn more, head to vcluster.com, or if you'd like to dive right into the code, download it from the GitHub repo. The Loft Labs team maintains vcluster, and we love getting ideas on it. We have added many features based on user feedback. Please feel free to open issues or PRs. If you'd like to chat with us first about your ideas or have any questions while exploring vcluster, we also have a vcluster channel on Slack.

Try vcluster, an open source implementation that tackles certain aspects of typical namespace- and cluster-based isolation models.


Kubernetes, Containers. This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

5 key insights for open source project sustainability in 2022

opensource.com - Tue, 03/29/2022 - 15:00
By Sean P. Goggins

Many technology firms are turning to open source tools to accelerate innovation and growth. As these firms work to influence open source projects, governance practices sometimes shift from coordination among a small group of developers and firms to management by large communities of contributors and organizations, often with competing priorities.

Sustainable projects require sustainable communities. Adapting to a larger, more competitive open source landscape requires organizations to invest in community building. This demands a view of source-code availability that's inextricably connected to the social engagements of contributors and organizations in open source projects. Many organizations now consider open source community engagement as both a social and a technical—or "sociotechnical"—investment.


The CHAOSS project seeks to improve the transparency and actionability of open source projects and community health. CHAOSS has identified and defined metrics that meaningfully assess open source community health.

Some CHAOSS metrics provide indicators of a wide range of social factors now essential for understanding the shape of sustainable open source communities. Social metrics require more sophisticated collection and interpretation strategies, such as machine learning, as well as techniques like surveys and the specification of proven practices for attracting and retaining new contributors. Analyzing the social dimension of open source projects focuses on understanding the dynamics of human relations and communities.
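One example of such a social signal is new-contributor retention: what fraction of first-time contributors in one quarter come back in the next. The computation below is a minimal sketch with made-up contributor names; CHAOSS defines many richer, carefully specified metrics.

```python
# Minimal sketch of one social-health signal: new-contributor
# retention across two quarters. Contributor names are invented.

def retention(first_quarter: set[str], next_quarter: set[str]) -> float:
    """Fraction of newcomers from one quarter who contribute again
    in the following quarter."""
    if not first_quarter:
        return 0.0
    return len(first_quarter & next_quarter) / len(first_quarter)

q1_newcomers = {"ana", "bo", "chen", "dee"}
q2_contributors = {"ana", "chen", "maintainer1"}
print(retention(q1_newcomers, q2_contributors))  # 0.5
```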

The insights into open source community health below result from approximately 36 interviews with corporate open source participants. The names of interview subjects quoted here have been withheld to protect their privacy.

Community building

Corporate open source participants must recognize that community building is central to sustaining open source projects. Answers to such questions as "Is the community welcoming?" and "Whose voices are heard?" are important considerations when building and joining communities:

One thing we definitely look for before we're going to engage [in an open source project] is what's the vibe here? And that's often a very social thing. Do they seem to enforce the code of conduct? Are there people in this community that seem abusive … in various ways?

Community activity

Most project contributors recognize the growing significance of community as part of successful open source projects. They want active communities that continue to advance the project's goals in ways that support all members and reflect the variety of opinions found in community work. They want members who are pleasant to see and work with regularly. Frequent and positive engagement has become so important that a lack of activity or a proliferation of toxic interactions are common reasons people leave a community. In fact, it can matter even more than any technical support the community provides:

I worked with the [project] board for a while, and it is such an active one. Unfortunately, there's just a lot of derogatory terms used. The thing about open source, it's all usually text-based, [and] that can be harmful. We'll pull people away from contributing if they don't feel comfortable. I've seen people leave different communities based on how things were handled. I think that the [worst] I've seen is basically just derogatory terms, not necessarily based on race, religion, or gender, just somebody angrily lashing out based on the code they don't want to see or do want to see.

Diversity and inclusion

Equally important is how the community addresses diversity, equity, and inclusion (DEI). Potential contributors ask themselves questions such as, "How will I be treated?" and "How will I treat others?" People understand attention to DEI (or lack thereof) as a critical part of project risk and sustainability. Failure of a community to center DEI in the sociotechnical work of a project influences contributors' decision to join or leave an open source community, regardless of what a community may provide technically:

The moment you see any bullying, like racism, bigotry, or anything that looks like excluding others from the conversation—if that gets called out and dealt with, I mean, it's not great that it happened, but either the person took the feedback and improved their behavior, or they were told to leave. [Either is a good outcome.] But if you see that stuff's happening and no one's doing anything about it, and everybody is just sitting on their hands and waiting for somebody else to do something about it, that's problematic. I don't have time for that. That would cause me to leave.

Community culture

The importance of nurturing a sense of community in open source connects closely to contributor expectations of a welcoming environment for people from diverse backgrounds. In response to these considerations, open source projects must constantly reflect on how to signal that the community is healthy, build trust among community members, and lower barriers to community participation in the interest of their own sustainability.

The way an open source community responds to issues is a very strong indicator as to their balance between [being] receptive to feedback and not receptive to direct contamination. I think that a successful open source project is a balance between the confidence that your code can be scrutinized by others and the humility that random people you don't know can improve your code. That balance between confidence and humility is reflected in the way people respond to issues. So that's what I look for.

Healthy competition

The pressures to align corporate interests with project interests can result in oversteering. A business trying to exert a degree of control that undermines a sense of community is harmful to a project and its maintainers. Creating a healthy community requires a balance of corporate control through paid contributions alongside thoughtful community building.

[One large open source company, for example,] has resources assigned to the project that are being driven by their own internal teams, and their priorities are very much not in our control. Collaborating in that situation is not as attractive as finding a way to do our own thing.

Building sustainability means building community

A technical open source asset's significance and quality depend on a mutually respectful social system, and that is a new reality for most corporate open source participants. Effective corporate engagement with open source projects demands attention to a set of paradoxes. To be competitive, a firm needs to contribute to a set of open source projects they don't control, and competitors are working side by side in these projects. To create a communal environment where a project becomes and remains sustainable, participants must set aside their competitive instincts and foster trust. A rising tide of both social and technical concerns floats all boats.

Open source software's role in creating value for technology firms continues to grow because sharing the costs of creating and sustaining core infrastructure is not only attractive but arguably a requirement for doing business. Sustaining these critical technology assets demands such a high number of talented contributors that forming and nurturing communities around open source software is vital.

Healthy open source communities engender a sense of purpose and belonging for their contributors, so that people continue to want to participate or join. Such communities are made of real and diverse people, with their own interests, concerns, and lives. Open source contributors—whether individual or corporate—must build real communities with participants who are interested in each other as well as the project. That requires a thoughtful, attentive, and often retrospective focus on how we build and manage our open source communities.

We would like to thank Red Hat for its generous support of this work.

[ Explore why companies are choosing open source: Red Hat's State of Enterprise Open Source Report ]

New research unveils how corporate open source participation has brought renewed attention to community health.


Community management

Matt Germonprez is the Mutual of Omaha Associate Professor of Information Systems in the College of Information Science & Technology at the University of Nebraska at Omaha. He uses qualitative field studies to research corporate engagement with open communities and the dynamics of design in these engagements. His lines of research have been funded by numerous organizations, including the National Science Foundation, the Alfred P. Sloan Foundation, and Mozilla. Matt is the co-founder of the Association for Information Systems SIGOPEN and the Linux Foundation Community Health Analytics OSS Project (CHAOSS). He has had work accepted at ISR, MISQ, JAIS, JIT, ISJ, I&O, CSCW, OpenSym, Group, HICSS, and ACM Interactions. Matt is an active open source community member, having presented design and development work at LinuxCon, the Open Source Summit North America, the Linux Foundation Open Compliance Summit, the Linux Foundation Collaboration Summit, and the Open Source Leadership Summit.


Elizabeth Barron is the Community Manager for CHAOSS and a longtime open source contributor and advocate with over 20 years of experience at companies like GitHub, Pivotal/VMware Tanzu, and SourceForge. She is also an author, public speaker, event organizer, and award-winning nature photographer. She lives in Cincinnati, Ohio.


Kevin Lumbard is a doctoral candidate at the University of Nebraska at Omaha and a CHAOSS project maintainer. His research explores corporate engagement with open source and the design of open source critical digital infrastructure.


Brian Proffitt is Manager, Community Insights within Red Hat's Open Source Program Office, focusing on content generation, community metrics, and special projects. Brian's experience with community management includes knowledge of community onboarding, community health, and business alignment. Prior to joining Red Hat in 2014, he was a technology journalist with a focus on Linux and open source, and the author of 22 consumer technology books.


Google Has A Problem With Linux Server Reboots Too Slow Due To Too Many NVMe Drives

Phoronix - Tue, 03/29/2022 - 12:00
Hyperscaler problems these days? Linux servers taking too long to reboot due to having too many NVMe drives. Some of Google's many-drive servers can take more than one minute for the Linux kernel to carry out its shutdown tasks, so thankfully Google is working on an improvement to address this, and the work may benefit other users too, albeit less notably...
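The shape of the problem is easy to model: if the kernel tears drives down one at a time, shutdown time grows linearly with drive count, whereas doing the same work concurrently is bounded by the slowest drive. The per-drive timing below is invented for illustration, not taken from Google's patches.

```python
# Toy model of the reboot bottleneck: serial vs concurrent shutdown
# of many NVMe drives. The per-drive latency is a made-up figure.
PER_DRIVE_SECONDS = 4.5
DRIVES = 16

serial = PER_DRIVE_SECONDS * DRIVES   # one at a time: 72 s, over a minute
parallel = PER_DRIVE_SECONDS          # all at once: bounded by slowest

print(f"serial: {serial}s, concurrent: {parallel}s")
```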

EROFS Read-Only Linux File-System Working Toward New Features

Phoronix - Tue, 03/29/2022 - 12:00
EROFS, as a reminder, is the read-only Linux file-system originally introduced four years ago that has gone on to see some use, particularly on Android devices. While there hasn't been much to report on EROFS recently, some new functionality is approaching for coming kernels...

Best Whiteboard Applications for Your Linux Systems

Tecmint - Tue, 03/29/2022 - 11:44
The post Best Whiteboard Applications for Your Linux Systems first appeared on Tecmint: Linux Howtos, Tutorials & Guides .

A whiteboard is a kind of console that you can attach to your desktop computer and use to write down ideas very quickly. Writing directly on the screen makes it seem more like modern


LLVM Begins Landing The Initial DirectX / HLSL Target Code

Phoronix - Tue, 03/29/2022 - 06:40
Earlier this month I wrote about Microsoft engineers wanting to add DirectX and HLSL support into the upstream LLVM/Clang compiler. As of this week the very early bits of code are beginning to land in LLVM 15.0 for this Microsoft graphics effort...

MLH Fellowship Opens Applications for this Summer’s Production Engineering Track

The Linux Foundation - Tue, 03/29/2022 - 04:58

For the second summer, Major League Hacking (MLH) is running the Production Engineering Track of the MLH Fellowship, powered by Meta. This 12-week educational program is 100% remote and uses an industry-leading curriculum from Linux Foundation Training & Certification. The program is hands-on, project-based, and teaches students how to become Production Engineers. The goal of the program is for all participants to land a job or internship in the Site Reliability Engineering space, and it will be open to 100 active college students who meet our admissions criteria.

This Summer’s program will start on May 31, 2022 and will end on August 19, 2022.

Applications are now open and will close on May 23, 2022!

Apply Now!

What is Production Engineering?

Production Engineering, also known as Site Reliability Engineering and DevOps, is one of the most in-demand skill sets that leading technology companies are hiring for. However, it is not widely available as a class offering in university settings.

At Meta, Production Engineers (PEs) are a hybrid between software and systems engineers and are core to engineering efforts that keep Meta platforms running and scaling. PEs work within Meta’s product and infrastructure teams to make sure products and services are reliable and scalable; this means, writing code and debugging hard problems in production systems across Meta services – like Instagram, WhatsApp, and Oculus – and backend services like Storage, Cache, and Network.

What is the Production Engineering Track of the MLH Fellowship?

Launched in the summer of 2020, the MLH Fellowship first focused on Open Source Software projects, pairing early career software engineers with projects and engineers from widely-used open source codebases (like AWS, GitHub, and Solana Labs). During the program, Fellows learned important concepts and software practices while contributing production-level code to their projects and showcasing those contributions in their portfolio. Through the Fellowship, 700 global alumni have learned Open Source skills and tools and increased their professional networks in the process.

The Production Engineering Track takes this proven fellowship model and expands on it. As part of the Production Engineering Track, fellows are put in groups of 10 (“Pods”), matched to dedicated mentors from Meta Engineering while they work through projects and curriculum, and receive guidance from Meta’s Talent Acquisition team, too. Successful program graduates will be invited to apply to full-time Meta internships.

What will admitted fellows learn in the Production Engineering Track?

Program participants will gain practical skills from educational content – adapted by the MLH Curriculum Team – licensed from the Linux Foundation’s “Essentials of System Administration” course. The program covers how to administer, configure, and upgrade Linux systems, along with the tools and concepts necessary to build and manage a production Linux infrastructure. The complete list of topics covered in the program includes:

  • Linux Fundamentals
  • Scripting
  • Databases
  • Services
  • Testing
  • Containers
  • CI/CD
  • Monitoring
  • Networking
  • Troubleshooting
  • Interview skills

By pairing this industry-leading curriculum with hands-on, project-based learning – and engineering mentors from Meta – fellows in the Production Engineering Track greatly build on their programming knowledge. Fellows will learn a broader array of technology skills, opening the door to new career options in SRE.

What are the important dates I should know about?

The program will be available to roughly 100 aspiring software engineers and will start on May 31, 2022 and end on August 19, 2022.

Applications are now open and will close on May 23, 2022!

Will I get paid as part of the program?

Each successful participant will earn an educational stipend adjusted for Purchasing Power Parity for the country they’re located in.

Who is eligible?

Eligible students are:

  • Rising sophomores or juniors enrolled in a 4-year degree-granting program
  • Based in the United States, Mexico, or Canada
  • Able to code in at least one language (preferably Python)
  • Able to dedicate at least 30 hours/week for the 12 weeks of the program

MLH invites and encourages people to apply who identify as women or non-binary. MLH also invites and encourages people to apply who identify as Black/African American or LatinX. In partnership with Meta, MLH is committed to building a more diverse and inclusive tech industry and providing learning opportunities to under-represented technologists.

Apply Now!

This article was originally posted at Major League Hacking.

The post MLH Fellowship Opens Applications for this Summer’s Production Engineering Track appeared first on Linux Foundation.
