The Linux Foundation

Decentralized innovation, built on trust.

Linux Foundation Announces an Intent to Form the OpenWallet Foundation

Tue, 09/13/2022 - 15:00
A Consortium of Companies and Nonprofit Organizations Collaborating to Create an Open Source Software Stack to Advance a Plurality of Interoperable Wallets

DUBLIN—September 13, 2022—The Linux Foundation, a global nonprofit organization enabling innovation through open source, today announced the intention to form the OpenWallet Foundation (OWF), a new collaborative effort to develop open source software to support interoperability for a wide range of wallet use cases. The initiative already benefits from strong support from leading companies across technology, the public sector, and industry vertical segments, as well as standardization organizations.

The mission of the OWF is to develop a secure, multi-purpose open source engine anyone can use to build interoperable wallets. The OWF aims to set best practices for digital wallet technology through collaboration on open source code for use as a starting point for anyone who strives to build interoperable, secure, and privacy-protecting wallets.

The OWF does not intend to publish a wallet itself, nor offer credentials or create any new standards. The community will focus on building an open source software engine that other organizations and companies can leverage to develop their own digital wallets. The wallets will support a wide variety of use cases from identity to payments to digital keys and aim to achieve feature parity with the best available wallets.

Daniel Goldscheider, who started the initiative, said, “With the OpenWallet Foundation we push for a plurality of wallets based on a common core. I couldn’t be happier with the support this initiative has received already and the home it found at the Linux Foundation.”

Linux Foundation Executive Director Jim Zemlin said, “We are convinced that digital wallets will play a critical role for digital societies. Open software is the key to interoperability and security. We are delighted to host the OpenWallet Foundation and excited for its potential.”

The OpenWallet Foundation will be featured in a keynote presentation at Open Source Summit Europe on 14 September 2022 at 9:00 AM IST (GMT+1) and a panel at 12:10 PM IST (GMT+1). To participate virtually or watch the sessions on demand, you can register here.

Pramod Varma, Chief Architect Aadhaar & India Stack, said, “Verifiable credentials are becoming an essential digital empowerment tool for billions of people and small entities. India has been at the forefront of it and is going all out to convert all physical certificates into digitally verifiable credentials via the very successful Digilocker system. I am very excited about the OWF effort to create an interoperable and open source credential wallet engine to supercharge the credentialing infrastructure globally.”

“Universal digital wallet infrastructure will create the ability to carry tokenized identity, money, and objects from place to place in the digital world. Massive business model change is coming, and the winning digital business will be the one that earns trust to directly access the real data in our wallets to create much better digital experiences,” said David Treat, Global Metaverse Continuum Business Group & Blockchain lead, Accenture. “We are excited to be part of the launch and development of an open-source basis for digital wallet infrastructure to help ensure consistency, interoperability, and portability with privacy, security, and inclusiveness at the core by design.”

Drummond Reed, Director of Trust Services at Avast, a brand of NortonLifeLock, said, “We’re on a mission to protect digital freedom for everyone. Digital freedom starts with the services used by the individual and the ability to reclaim their personal information and reestablish trust in digital exchanges. Great end point services start with the core of digital identity wallet technology. We are proud to be a founding supporter of the OpenWallet Foundation because collaboration, interoperability, and open ecosystems are essential to the trusted digital future that we envision.”

“The mobile wallet industry has seen significant advances in the last decade, changing the way people manage and spend their money, and the tasks that these wallets can perform have rapidly expanded. Mobile wallets are turning into digital IDs and a place to store documents, whereby the security requirements are further enhanced,” said Taka Kawasaki, Co-Founder of Authlete Inc. “As a member of the OpenID Foundation, we understand the importance of standards that ensure interoperability, and in the same way we are excited to work with the Linux Foundation to develop a robust implementation that ensures the highest levels of security.”

“Providing secure identity and validated credential services is key for enabling a high assurance health care service. The OpenWallet Foundation could play a key role in promoting the deployment of highly effective, secure digital health care systems that benefit the industry,” said Robert Samuel, Executive Director of Technology Research & Innovation, CVS Health.

“Daon provides the digital identity verification/proofing and authentication technology that enables digital trust at scale and on a global basis”, said Conor White, President – Americas at Daon, “Our experience with VeriFLY demonstrated the future importance of digital wallets for consumers and we look forward to supporting the OpenWallet Foundation.”

“We have been building and issuing wallets for decentralized identity applications for several years now. Momentum and interest in this area have grown tremendously, far beyond our own community. It is now more important than ever that a unified wallet core embracing open standards is created, with the ambition to become the global standard. The best industry players are pulling together under the OpenWallet Foundation. esatus AG is proud to be among them as an experience, expertise, and technology contributor,” said Dr. Andre Kudra, CIO, esatus AG.

Kaliya Young, Founder & Principal, Identity Woman in Business, said, “As our lives become more and more digital, it is critical to have strong and interoperable digital wallets that can properly safeguard our digital properties, whether it is our identities, data, or money. We are very excited to see the emergence of the OpenWallet Foundation, particularly its mission to bring key stakeholders together to create a core wallet engine (instead of another wallet) that can empower the actual wallet providers to build better products at lower cost. We look forward to supporting this initiative by leveraging our community resources and knowledge/expertise to develop a truly collaborative movement.”

Masa Mashita, Senior Vice President, Strategic Innovations, JCB Co., Ltd. said, “Wallets for identity management as well as payments will be a key function of the future user interface. The concept of OpenWallet will be beneficial for interoperability among multiple industries and jurisdictions.”

“Secure and open wallets will allow individuals the world over to store, combine and use their credentials in new ways – allowing them to seamlessly assert their identity, manage payments, access services, etc., and empower them with control of their data. This brings together many of our efforts in India around identity, payments, credentials, data empowerment, health, etc. in an open manner, and will empower billions of people around the world,” said Sanjay Jain, Chairman of the Technology Committee of MOSIP.

“The Open Identity Exchange (OIX) welcomes and supports the creation of the OpenWallet Foundation. The creation of open source components that will allow wallet providers to work to standards and trust framework policies in a consistent way is entirely complementary to our own work on open and interoperable Digital Identities. OIX’s Global Interoperability working group is already defining a ‘trust framework policy characteristics methodology,’ as part of our contribution to GAIN. This will allow any trust framework to systematically describe itself to an open wallet, so that a ‘smart wallet’ can seamlessly adapt to the rules of a new framework within which the user wants to assert credentials,” said Nick Mothershaw, Chief Identity Strategist, OIX.

“Okta’s vision is to enable anyone to safely use any technology”, says Randy Nasson, Director of Product Management at Okta. “Digital wallets are emerging as go-to applications for conducting financial transactions, providing identity and vital data, and storing medical information such as vaccination status. Wallets will expand to include other credentials, including professional and academic certifications, membership status, and more. Digital credentials, including their issuance, storage in wallets, and presentation, will impact the way humans authenticate and authorize themselves with digital systems in the coming decade. Okta is excited about the efforts of the OpenWallet Foundation and the Linux Foundation to provide standards-based, open wallet technology for developers and organizations around the world.”

“The OpenID Foundation welcomes the formation of the OpenWallet Foundation and its efforts to create an open-source implementation of open and interoperable technical standards, certification and best practices.” – Nat Sakimura, Chairman, OpenID Foundation.

 “We believe the future of online trust and privacy starts with a system for individuals to take control over their digital identity, and interoperability will create broad accessibility,” says Rakesh Thaker, Chief Development Officer at Ping Identity. “We intend to actively participate and contribute to creating common specifications for secure, robust credential wallets to empower people with control over when and with whom they share their personal data.”

“Wallet technologies that are open and interoperable are a key factor in enabling citizens to protect their privacy in the digital world. At polypoly – an initiative backed by the first pan-European cooperative for data – we absolutely believe that privacy is a human right! We are already working on open source wallets and are excited to collaborate with others and to contribute to the OpenWallet Foundation,” said Lars Eilebrecht, CISO, polypoly.

“Digital credentials and the wallets that manage them form the trust foundation of a digital society. With the future set to be characterised by a plurality of wallets and underlying standards, broad interoperability is key to delivering seamless digital interactions for citizens. Procivis is proud to support the efforts of the OpenWallet Foundation to build a secure, interoperable, and open wallet engine which enables every individual to retain sovereignty over their digital identities,” said Daniel Gasteiger, Chief Executive Officer, Procivis AG.

“It is essential to cross the boundaries between humans, enterprises, and systems to create value in a fully connected world. There is an urgent need for a truly portable, interoperable identity & credentialing backbone for all digital-first processes in government, business, peer-to-peer, smart city systems, and the Metaverse. The OpenWallet Foundation will establish high-quality wallet components that can be assembled into software solutions unlocking a new universe of next-level digitization, security, and compliance,” said Dr. Carsten Stöcker, CEO Spherity & Chairman of the Supervisory Board IDunion SCE.

“Transmute has long promoted open source standards as the foundation for building evolved solutions that challenge the status quo. Transmute believes any organization should be empowered to create a digital wallet that can securely manage identifiers, credentials, currencies, and payments while complying with regulatory requirements regarding trusted applications and devices. Transmute supports a future of technology that will reflect exactly what OpenWallet Foundation wants to achieve: one that breaks with convention to foster innovation in a secure, interoperable way, benefitting competitive companies, consumers, and developers alike,” said Orie Steele, Co-Founder and CTO of Transmute.

“The Trust Over IP (ToIP) Foundation is proud to support the momentum of an industry-wide open-source engine for digital wallets. We believe this can be a key building block in our mission to establish an open standard trust layer for the Internet. We look forward to our Design Principles and Reference Architecture benefitting this endeavor and collaborating closely with this new Linux Foundation project,” said Judith Fleenor, Director of Strategic Engagement, Trust Over IP Foundation.

For more information about the project and how to participate in this work, please visit:

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 3,000 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, PyTorch, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at


The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: Linux is a registered trademark of Linus Torvalds.

Media Contact:

Dan Whiting
for the Linux Foundation
+1 202-531-9091


Meta Transitions PyTorch to the Linux Foundation, Further Accelerating AI/ML Open Source Collaboration

Mon, 09/12/2022 - 21:25

PyTorch Foundation to foster an ecosystem of vendor-neutral projects alongside founding members AMD, AWS, Google Cloud, Meta, Microsoft Azure, and NVIDIA 

DUBLIN – September 12, 2022 –  The Linux Foundation, a global nonprofit organization enabling innovation through open source, today announced PyTorch is moving to the Linux Foundation from Meta, where it will live under the newly formed PyTorch Foundation. Since its release in 2016, over 2,400 contributors and 18,000 organizations have adopted the PyTorch machine learning framework for use in academic research and production environments. The Linux Foundation will work with project maintainers, its developer community, and initial founding members of PyTorch to support the ecosystem at its new home.

Projects like PyTorch—that have the potential to become a foundational platform for critical technology—benefit from a neutral home. As part of the Linux Foundation, PyTorch and its community will benefit from many programs and support infrastructure like training and certification programs, research, and local to global events. Working inside and alongside the Linux Foundation, PyTorch will have access to the LFX collaboration portal—enabling mentorships and helping the PyTorch community identify future leaders, find potential hires, and observe shared project dynamics. 

“Growth around AI/ML and Deep Learning has been nothing short of extraordinary—and the community embrace of PyTorch has led to it becoming one of the five fastest-growing open source software projects in the world,” said Jim Zemlin, executive director for the Linux Foundation. “Bringing PyTorch to the Linux Foundation where its global community will continue to thrive is a true honor. We are grateful to the team at Meta—where PyTorch was incubated and grown into a massive ecosystem—for trusting the Linux Foundation with this crucial effort.”

“Some AI news: we’re moving PyTorch, the open source AI framework led by Meta researchers, to become a project governed under the Linux Foundation. PyTorch has become one of the leading AI platforms with more than 150,000 projects on GitHub built on the framework. The new PyTorch Foundation board will include many of the AI leaders who’ve helped get the community where it is today, including Meta and our partners at AMD, Amazon, Google, Microsoft, and NVIDIA. I’m excited to keep building the PyTorch community and advancing AI research,” said Mark Zuckerberg, Founder & CEO, Meta.

The Linux Foundation has named Dr. Ibrahim Haddad, its Vice President of Strategic Programs, as the Executive Director of the PyTorch Foundation. The PyTorch Foundation will support a strong member ecosystem with a diverse governing board including founding members: AMD, Amazon Web Services (AWS), Google Cloud, Meta, Microsoft Azure and NVIDIA. The project will promote continued advancement of the PyTorch ecosystem through its thriving maintainer and contributor communities. The PyTorch Foundation will ensure the transparency and governance required of such critical open source projects, while also continuing to support its unprecedented growth.

Member Quotes


“Open software is critical to advancing HPC, AI and ML research, and we’re ready to bring our experience with open software platforms and innovation to the PyTorch Foundation,” said Brad McCredie, corporate vice president, Data Center and Accelerated Processing, AMD. “AMD Instinct accelerators and ROCm software power important HPC and ML sites around the world, from exascale supercomputers at research labs to major cloud deployments showcasing the convergence of HPC and AI/ML. Together with other foundation members, we will support the acceleration of science and research that can make a dramatic impact on the world.”

Amazon Web Services

“AWS is committed to democratizing data science and machine learning, and PyTorch is a foundational open source tool that furthers that goal,” said Brian Granger, senior principal technologist at AWS. “The creation of the PyTorch Foundation is a significant step forward for the PyTorch community. Working alongside The Linux Foundation and other foundation members, we will continue to help build and grow PyTorch to deliver more value to our customers and the PyTorch community at large.”

Google Cloud

“At Google Cloud we’re committed to meeting our customers where they are in their digital transformation journey, and that means ensuring they have the power of choice,” said Andrew Moore, vice president and general manager of Google Cloud AI and industry solutions. “We’re participating in the PyTorch Foundation to further demonstrate our commitment to choice in ML development. We look forward to working closely on its mission to drive adoption of AI tooling by building an ecosystem of open source projects with PyTorch, along with our continued investment in JAX and TensorFlow.”

Microsoft Azure

“We’re honored to participate in the PyTorch Foundation and partner with industry leaders to make open source innovation with PyTorch accessible to everyone,” Eric Boyd, CVP, AI Platform, Microsoft, said. “Over the years, Microsoft has invested heavily to create an optimized environment for our customers to create, train and deploy their PyTorch workloads on Azure. Microsoft products and services run on trust, and we’re committed to continuing to deliver innovation that fosters a healthy open source ecosystem that developers love to use. We look forward to helping the global AI community evolve, expand and thrive by providing technical direction based on our latest AI technologies and research.”


“PyTorch was developed from the beginning as an open source framework with first-class support on NVIDIA Accelerated Computing”, said Ian Buck, General Manager and Vice President of Accelerated Computing at NVIDIA. “NVIDIA is excited to be an originating member of the PyTorch Foundation to encourage community adoption and to ensure using PyTorch on the NVIDIA AI platform delivers excellent performance with the best experience possible.”

Additional Resources:

  • Visit to learn more about the project and the PyTorch Foundation
  • Read Jim Zemlin’s blog discussing the PyTorch transition
  • Read Meta AI’s blog about transitioning PyTorch to the Linux Foundation
  • Read this blog from Soumith Chintala, PyTorch Lead Maintainer and AI Researcher at Meta, about the future of the project
  • Join Soumith Chintala and Dr. Ibrahim Haddad for a fireside chat on Thursday, September 15, at 3pm GMT / 11am ET / 8am PT
  • Learn more about PyTorch training opportunities from the Linux Foundation
  • Follow PyTorch on Facebook, LinkedIn, Spotify, Twitter, and YouTube


Media Contact

Dan Whiting

for the Linux Foundation



Welcoming PyTorch to the Linux Foundation

Mon, 09/12/2022 - 21:25

Today we are more than thrilled to welcome PyTorch to the Linux Foundation. Honestly, it’s hard to capture how big a deal this is for us in a single post but I’ll try. 

TL;DR — PyTorch is one of the most important and successful machine learning software projects in the world today. We are excited to work with the project maintainers, contributors and community to transition PyTorch to a neutral home where it can continue to enjoy strong growth and rapid innovation. We are grateful to the team at Meta, where PyTorch was incubated and grew into a massive ecosystem, for trusting the Linux Foundation with this crucial effort. The journey will be epic.

The AI Imperative, Open Source and PyTorch

Artificial Intelligence, Machine Learning, and Deep Learning are critical to present and future technology innovation. Growth around AI and ML communities and the code they generate has been nothing short of extraordinary. AI/ML is also a truly “open source-first” ecosystem. The majority of popular AI and ML tools and frameworks are open source. The community clearly values transparency and the ethos of open source. Open source communities are playing and will play a leading role in development of the tools and solutions that make AI and ML possible — and make it better over time. 

For all of the above reasons, the Linux Foundation understands that fostering open source in AI and ML is a key priority. The Linux Foundation already hosts and works with many projects that are either contributing directly to foundational AI/ML projects (LF AI & Data) or contributing to their use cases and integrating with their platforms (e.g., LF Networking, AGL, Delta Lake, RISC-V, CNCF, Hyperledger). 

PyTorch extends and builds on these efforts. Obviously, PyTorch is one of the most important foundational platforms for development, testing and deployment of AI/ML and Deep Learning applications. If you need to build something in AI, if you need a library or a module, chances are there is something in PyTorch for that. If you peel back the cover of any AI application, there is a strong chance PyTorch is involved in some way. From improving the accuracy of disease diagnosis and heart attack prediction, to machine learning frameworks for self-driving cars, to image quality assessment tools for astronomers, PyTorch is there.

Originally incubated by Meta’s AI team, PyTorch has grown to include a massive community of contributors and users under Meta’s community-focused stewardship. The genius of PyTorch (and a credit to its maintainers) is that it is truly a foundational platform for so much AI/ML today, a real Swiss Army Knife. Just as developers built so much of the technology we know today atop Linux, the AI/ML community is building atop PyTorch – further enabling emerging technologies and evolving user needs. As of August 2022, PyTorch was one of the five fastest-growing open source software communities in the world alongside the Linux kernel and Kubernetes. From August 2021 through August 2022, PyTorch counted over 65,000 commits. Over 2,400 contributors participated in the effort, filing issues or PRs or writing documentation. These numbers place PyTorch among the most successful open source projects in history.

Neutrality as a Catalyst

Projects like PyTorch that have the potential to become a foundational platform for critical technology benefit from a neutral home. Neutrality and true community ownership are what has enabled Linux and Kubernetes to defy expectations by continuing to accelerate and grow faster even as they become more mature. Users, maintainers and the community begin to see them as part of a commons that they can rely on and trust, in perpetuity. By creating a neutral home, the PyTorch Foundation, we are collectively locking in a future of transparency, communal governance, and unprecedented scale for all.

As part of the Linux Foundation, PyTorch and its community will benefit from our many programs and support communities like training and certification programs (we already have one in the works), to community research (like our Project Journey Reports) and, of course, community events. Working inside and alongside the Linux Foundation, the PyTorch community also has access to our LFX collaboration portal, enabling mentorships and helping the PyTorch community identify future leaders, find potential hires, and observe shared community dynamics. 

PyTorch has gotten to its current state through sound maintainership and open source community management. We’re not going to change any of the good things about PyTorch. In fact, we can’t wait to learn from Meta and the PyTorch community to improve the experiences and outcomes of other projects in the Foundation. For those wanting more insight about our plans for the PyTorch Foundation, I invite you to join Soumith Chintala (co-creator of PyTorch) and Dr. Ibrahim Haddad (Executive Director of the PyTorch Foundation) for a live discussion on Thursday entitled, PyTorch: A Foundation for Open Source AI/ML.

We are grateful for Meta’s trust in “passing us the torch” (pun intended). Together with the community, we can build something (even more) insanely great and add to the global heritage of invaluable technology that underpins the present and the future of our lives. Welcome, PyTorch! We can’t wait to get started!


Open 3D Foundation Welcomes New Members OPPO and Heroic Labs as Community Optimizes Software to Embrace Mobile-First Gaming

Fri, 09/09/2022 - 22:00

Foundation growth driven by organizations seeing new use cases that require modular solutions to build the future of 3D technology

SAN FRANCISCO – September 7, 2022 – As gaming increasingly becomes a mobile-first experience, OPPO and Heroic Labs are joining as Premier and General members, respectively, of the Open 3D Foundation (O3DF). The two companies are working with the community to optimize the open-source Open 3D Engine project for mobile gaming.

OPPO is a global technology company focused on delivering consumer devices, notably mobile phones, and advocating for advancing cloud-native technologies. Heroic Labs is a creator of scalable, social infrastructure for cloud services and app server development. In joining O3DF, OPPO and Heroic Labs will collaborate with other O3DF members to accelerate standardization of 3D graphics development across a diversity of mobile platforms. 

This collaboration will happen inside a newly proposed O3DE (Open 3D Engine) Mobile Device Working Group, through which the O3DE community aims to build portable libraries and interfaces that can be used across a myriad of environments, freely available under the Apache 2.0/MIT license model. We invite all of those interested in shaping the development of 3D graphics standards for mobile devices to review and comment on this open proposal.

“We’re excited to welcome OPPO and Heroic to the community, and we look forward to their contributions in helping advance 3D graphics standards through the O3DE project,” said Royal O’Brien, general manager of Digital Media and Games at the Linux Foundation and executive director of O3DF. “These newest members personify the value of O3DE’s modular architecture, which makes it easier for developers to build 3D solutions that combine the technologies best suited to a diverse set of use cases. Mobile gaming is a great example of how that modular approach fosters extensibility and adaptability from our core technology.” 

“Today, 3D graphics technology has become an essential element of modern society, with application domains ranging from visual effects, gaming and medical imaging to next-generation content like the Metaverse,” said Hansen Hong, director of OPPO Software Technology Planning. “We are excited to join the Open 3D Foundation as a Premier member at the early stage of its development. Through our collaboration within the Foundation, we are eager to contribute to the Open 3D Engine with mobile platforms as our focus. Together with the Mobile Device Working Group, we will bring smoother and more user-friendly mobile development experiences to O3DE developers, while generating more efficient yet immersive and realistic rendering applications for mobile users.”

“At the heart of our mission is making game development easy for everyone,” said Mo Firouz, co-founder and chief operations officer at Heroic Labs. “This goal is accelerated by joining O3DF and actively participating in the establishment of 3D graphic development standards that will benefit every level of game creation. Creating this future in community with other O3DF members aligns with our overall commitment to accessibility through open source.”

A Burgeoning Community
Over 25 member companies have joined O3DF since its launch in July 2021. Newest members include OPPO and Heroic Labs, as well as Microsoft, LightSpeed Studios and Epic Games. Other Premier members include Adobe, Amazon Web Services (AWS), Huawei, Intel and Niantic. In May, O3DE announced its latest release, focused on performance, stability and usability enhancements. The O3D Engine community is very active, averaging up to 2 million line changes and 350-450 commits monthly from 60-100 authors across 41 repos.

Attend O3DCon

O3DF will host O3DCon October 17-19 in Austin, Texas. The event will convene a vibrant, diverse community focused on building an unencumbered, first-class, 3D engine poised to revolutionize real-time 3D development across a variety of applications—from game development, metaverse, digital twin and AI, to automotive, healthcare, robotics and more. Early bird pricing expires September 16.

About the Open 3D Engine

Open 3D Engine (O3DE) is the flagship project managed by the Open 3D Foundation (O3DF). The open-source project is a modular, cross-platform 3D engine built to power anything from AAA games to cinema-quality 3D worlds to high-fidelity simulations. The code is hosted on GitHub under the Apache 2.0 license. To learn more, get involved, and connect with the community, please visit the project website.

About the Open 3D Foundation

Established in July 2021, the mission of the Open 3D Foundation (O3DF) is to make an open-source, fully-featured, high-fidelity, real-time 3D engine for building games and simulations available to every industry. The Open 3D Foundation is home to the O3D Engine project. To learn more, please visit the O3DF website.

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us online.

The post Open 3D Foundation Welcomes New Members OPPO and Heroic Labs as Community Optimizes Software to Embrace Mobile-First Gaming appeared first on Linux Foundation.

35 Podcasts Recommended by People You Can Trust

Fri, 09/02/2022 - 23:00

Because of my position as Executive Producer and host of The Untold Stories of Open Source, I frequently get asked, “What podcasts do you listen to when you’re not producing your own?” Interesting question. However, my interest in my personal favorite, This American Life, is more about how they create their shows, how they use sound and music to supplement the narration, and, in general, how Ira Glass does what he does. Only podcast geeks would be interested in that, so I reached out to my friends in the tech industry to ask them what THEY listen to.

The most surprising thing I learned was how many people professed not to listen to podcasts. “I don’t listen to podcasts, but if I had to choose one…” kept popping up. The second was that people in the industry need a break and use podcasts to escape from the mayhem of their day. Jennifer says it best: “Since much of my role is getting developers on board with security actions, I gravitate toward more psychology-based podcasts – Adam Grant’s is amazing (it’s called WorkLife).”

Now that I think of it, same here. This American Life. Revisionist History. Radio Lab. The Moth. You get the picture. Escaping from the mayhem of the day.

Without further digression, here are the podcasts recommended by the people I trust, in no particular order. No favoritism.

The Haunted Hacker

Hosted by Mike Jones and Mike LeBlanc

Mike Jones and Mike LeBlanc built the H4unt3d Hacker podcast and group from a truly grassroots point of view: the idea was spawned over a glass of bourbon on the top of a mountain. The group consists of members from around the globe and from various walks of life, religions, and backgrounds, and is all-inclusive. They pride themselves on giving back and helping people understand the cybersecurity industry and navigate the various challenges one faces after deciding cybersecurity is where they belong.

“I think he strikes a great balance between newbie/expert, current events and all purpose security and it has a nice vibe” – Alan Shimel, CEO, Founder, TechStrong Group

Risky Biz Security Podcast

Hosted by Patrick Gray

Published weekly, the Risky Business podcast features news and in-depth commentary from security industry luminaries. Hosted by award-winning journalist Patrick Gray, Risky Business has become a must-listen digest for information security professionals. We are also known to publish blog posts from time to time.

“My single listen-every-week-when-it-comes out is not that revolutionary: the classic Risky Biz security podcast. As a defender, I learn from the offense perspective, and they also aren’t shy about touching on the policy side.” – Allan Friedman, Cybersecurity and Infrastructure Security Agency

Application Security Weekly

Hosted by Mike Shema, Matt Alderman, and John Kinsella

If you’re looking to understand DevOps, application security, or cloud security, then Application Security Weekly is your show! Mike, Matt, and John decrypt application development – exploring how to inject security into the organization’s Software Development Lifecycle (SDLC), learning the tools, techniques, and processes necessary to move at the speed of DevOps, and covering the latest application security news.

“Easily my favorite hosts and content. Professional production, big personality host, and deeply technical co-host. Combined with great topics and guests.” – Larry Maccherone, Dev[Sec]Ops Transformation Architect, Contrast Security

Azure DevOps Podcast

Hosted by Jeffrey Palermo

The Azure DevOps Podcast is a show for developers and DevOps professionals shipping software using Microsoft technologies. Each show brings you hard-hitting interviews with industry experts innovating better methods and sharing success stories. Listen in to learn how to increase quality, ship quickly, and operate well.

“I am pretty focused on Microsoft Azure these days so on my list is Azure DevOps” – Bob Aiello CM Best Practices Founder, CTO, and Principal Consultant

Chaos Community Broadcast

Hosted by Community of Chaos Engineering Practitioners

We are a community of chaos engineering practitioners. Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production.

“This is so good, it’s hardly even fair to compare it to other podcasts!” – Casey Rosenthal, CEO, Co-founder, Verica

The Daily Beans. News. With Swearing

Hosted by Allison Gill (A.G.)

The Daily Beans is a women-owned and operated progressive news podcast for your morning commute brought to you by the webby award-winning hosts of Mueller, She Wrote. Get your social justice and political news with just the right amount of snark.

“The Daily Beans covers political news without hype. The host is a lawyer and restricts her coverage to what can actually happen while other outlets are hyping every possibility under the sun including possibilities that get good ratings but will never happen. She mostly covers the former president’s criminal cases.” – Tom Limoncelli, Manager, Stack Overflow

Software Engineering Radio

Hosted by Community of Various Contributors

Software Engineering Radio is a podcast targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast. Now a weekly show, we talk to experts from throughout the software engineering world about the full range of topics that matter to professional developers. All SE Radio episodes feature original content; we don’t record conferences or talks given in other venues.

“The one that I love to keep tabs on is called Software Engineering Radio, published by the IEEE computer society. It is absolutely a haberdashery of new ideas, processes, lessons learned. It also ranges from very practical action oriented advice the whole way over to philosophical discussions that are necessary for us to drive innovation forward. Professionals from all different domains contribute. It’s not a platform for sales and marketing pitches!” – Tracy Bannon, Senior Principal/ Software Architect & DevOps Advisor, MITRE

Cybrary Podcast

Hosted by Various Contributors

Join thousands of other listeners to hear from the current leaders, experts, vendors, and instructors in the IT and Cybersecurity fields regarding DevSecOps, InfoSec, ransomware attacks, the diversity and retention of talent, and more. Gain the confidence, consistency, and courage to succeed at work and in life.

“Relaxed chat, full of good info, and they got right to the point. Would recommend.” – Wendy Nather, Head of Advisory CISOs, CISCO

Open Source Underdogs

Hosted by Michael Schwartz

Open Source Underdogs is the podcast for entrepreneurs about open source software. In each episode, we chat with a founder or leader to explore how they are building thriving businesses around open source software. Our goal is to demystify how entrepreneurs can stay true to their open source objectives while also building sustainable, profitable businesses that fuel innovation and ensure longevity.

“Mike Schwartz’s podcast is my favourite. Really good insights from founders.” – Amanda Brock, CEO, OpenUK

Ten Percent Happier

Hosted by Dan Harris

Ten Percent Happier publishes a variety of podcasts that offer relatable wisdom designed to help you meet the challenges and opportunities in your daily life.

“I listen to Ten Percent Happier as my go-to podcast. It helps me with mindfulness practice, provides a perspective on real-life situations, and makes me a kinder person. That is one of the most important traits we all need these days.” – Arun Gupta, Vice President and General Manager for Open Ecosystem, Intel

Making Sense

Hosted by Sam Harris

Sam Harris is the author of five New York Times best sellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up, and Islam and the Future of Tolerance (with Maajid Nawaz). The End of Faith won the 2005 PEN Award for Nonfiction. His writing and public lectures cover a wide range of topics—neuroscience, moral philosophy, religion, meditation practice, human violence, rationality—but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live.

“Sam dives deep on topics rooted in our culture, business, and minds. The conversations are very approachable and rational. With some episodes reaching an hour or more, Sam gives topics enough space to cover the necessary angles.” – Derek Weeks, CMO, The Linux Foundation

Darknet Diaries

Hosted by Jack Rhysider

Darknet Diaries produces audio stories specifically intended to capture, preserve, and explain the culture around hacking and cyber security in order to educate and entertain both technical and non-technical audiences.

This is a podcast about hackers, breaches, shadow government activity, hacktivism, cybercrime, and all the things that dwell on the hidden parts of the network.

“Darknet Diaries would be my recommendation. It provides insights into the world of hacking, data breaches, and cybercrime. And Jack Rhysider is a good storyteller.” – Edwin Kwan, Head of Application Security and Advisory, Tyro Payments

Under the Skin

Hosted by Russell Brand

Under the Skin asks: what’s beneath the surface – of the people we admire, of the ideas that define our times, of the history we are told? Speaking with guests from the world of academia, popular culture, and the arts, they’ll teach us to see the ulterior truth behind our constructed reality. And have a laugh.

“He interviews influential people from all different backgrounds and covers everything from academia to tech to culture to spiritual issues” – Ashleigh Auld, Global Director Partner Marketing, Linnwood

Cyberwire Daily

Hosted by Dave Bittner

The daily cybersecurity news and analysis industry leaders depend on. Published each weekday, the program also includes interviews with a wide spectrum of experts from industry, academia, and research organizations all over the world.

“I’d recommend the CyberWire Daily podcast; it has the most relevant InfoSec news items and stories industry pros care about.” – Ax Sharma, Security Researcher, Tech Reporter, Sonatype

7 Minute Security Podcast

Hosted by Brian Johnson

7 Minute Security is a weekly audio podcast (once in a while with video!) released on Wednesdays, covering topics such as penetration testing, blue teaming, and building a career in security.

In 2013 I took on a new adventure to focus 100% on information security. There’s a ton to learn, so I wanted to write it all down in a blog format and share with others. However, I’m a family man too, and didn’t want this project to offset the work/family balance.

So I thought a podcast might fill in the gaps for stuff I can’t – or don’t have time to – write out in full form. I always loved the idea of a podcast, but the good ones are usually in a longer format, and I knew I didn’t have time for that either. I was inspired by the format of the 10 Minute Podcast and figured if it can work for comedy, maybe it can work for information security!

Thus, the 7 Minute Security blog and its child podcast were born.

“7 Minute Security Podcast – because Brian makes the best jingles!” – Björn Kimminich, Product Group Lead Architecture Governance, Kuehne + Nagel (AG & Co.) KG

Continuous Delivery

Hosted by Dave Farley

Explores ideas that help to produce Better Software Faster: Continuous Delivery, DevOps, TDD and Software Engineering.

Hosted by Dave Farley – a software developer who has done pioneering work in DevOps, CD, CI, BDD, TDD, and software engineering. Dave has challenged conventional thinking and led teams to build world-class software.

Dave is co-author of the award-winning book “Continuous Delivery” and a popular conference speaker on software engineering. He built one of the world’s fastest financial exchanges, is a pioneer of BDD, an author of the Reactive Manifesto, and winner of the Duke award for open source software – the LMAX Disruptor.

“Dave Farley’s videos are a treasure trove of knowledge that took me and others years to uncover when we were starting out. His focus on engineering and business outcomes rather than processes and frameworks is a breath of fresh air. If you only have time for one source of information, use his.” – Bryan Finster, Value Stream Architect, Defense Unicorns


The Prof G Show

Hosted by Scott Galloway

A fast and fluid weekly thirty minute show where Scott tears into the taxonomy of the tech business with unfiltered, data-driven insights, bold predictions, and thoughtful advice.

“Very current very modern. Business and tech oriented. Talks about markets and economics and people and tech.” – Caroline Wong, Chief Strategy Officer, Cobalt

Open Source Security Podcast

Hosted by Josh Bressers and Kurt Seifried

Open Source Security is a collaboration by Josh Bressers and Kurt Seifried. We publish the Open Source Security Podcast and the Open Source Security Blog.

We have a security tabletop game that Josh created some time ago. Rather than play a boring security tabletop exercise, what if it had things like dice and fun? Take a look at the Dungeons and Data tabletop game.

“It has been something I’ve been listening to a lot lately with all of the focus on Software Supply Chain Security and Open Source Security. The hosts have very deep software and security backgrounds but keep the show light-hearted and engaging as well. ” – Chris Hughes, CISO, Co-Founder Aquia Inc

Pivot

Hosted by Kara Swisher and Professor Scott Galloway

Every Tuesday and Friday, tech journalist Kara Swisher and NYU Professor Scott Galloway offer sharp, unfiltered insights into the biggest stories in tech, business, and politics. They make bold predictions, pick winners and losers, and bicker and banter like no one else. After all, with great power comes great scrutiny. From New York Magazine and the Vox Media Podcast Network.

“As a rule, I don’t listen to tech podcasts much at all, since I write about tech almost all day. I check out podcasts about theater or culture — about as far away from my day job as I can get. However, I follow a ‘man-about-town’ guy named George Hahn on social media, who’s a lot of fun. Last year, he mentioned he’d be a guest host of the ‘Pivot’ podcast with Kara Swisher and Scott Galloway, so I checked out Pivot. It’s about tech but it’s also about culture, politics, business, you name it. So that’s become the podcast I dip into when I want to hear a bit about tech, but in a cocktail-party/talk show kind of way.” – Christine Kent, Communications Strategist, Christine Kent Communications

The Idealcast

Hosted by Gene Kim

Conversations with experts about the important ideas changing how organizations compete and win. In The Idealcast, multiple award-winning CTO, researcher and bestselling author Gene Kim hosts technology and business leaders to explore the dangerous, shifting digital landscape. Listeners will hear insights and gain solutions to help their enterprises thrive in an evolving business world.

“I like this because it has a good balance of technical and culture/leadership content.” – Courtney Kissler, CTO, Zulily

TrustedSec Security Podcast

Hosted by Dave Kennedy and Various Team Contributors

Our team records a regular podcast covering the latest security news and stories in an entertaining and informational discussion. Hear what our experts are thinking and talking about.

“I LOVE LOVE LOVE the TrustedSec Security Podcast. Dave Kennedy’s team puts on a very nice and often deeply technical conversation every two weeks. They talk about timely topics from today’s headlines as well as jumping into purple-team hackery, which is a real treat to listen in on and learn from.” – CRob Robinson, Director of Security Communications, Intel Product Assurance and Security, Intel

Profound Podcast

Hosted by John Willis

Ramblings about W. Edwards Deming in the digital transformation era. The general idea of the podcast is derived from Dr. Deming’s seminal work described in his New Economics book – System of Profound Knowledge (SoPK). We’ll try and get a mix of interviews from IT, healthcare, and manufacturing with the goal of aligning these ideas with digital transformation possibilities. Everything related to Dr. Deming’s ideas is on the table (e.g., Goldratt, C.I. Lewis, Ohno, Shingo, Lean, Agile, and DevOps).

“I don’t listen to podcasts much these days (found that consuming books via Audible was more useful… but I guess it all depends on how emerging the topics are that you are interested in). I only mention this as I am thin on recommendations. I’d go with John Willis’s Profound or Gene Kim’s Idealcast. Some overlap in (world-class) guests but different interview approaches and perspectives.” – Damon Edwards, Sr. Director, Product, PagerDuty

Security Now

Hosted by Steve Gibson and Leo Laporte

Stay up-to-date and deepen your cybersecurity acumen with Security Now. On this long-running podcast, cybersecurity authority Steve Gibson and technology expert Leo Laporte bring their extensive and historical knowledge to explore digital security topics in depth. Each week, they take complex issues and break them down for clarity and big-picture understanding. And they do it all in an approachable, conversational style infused with their unique sense of humor. Listen and subscribe, and stay on top of the constantly changing world of Internet security. Security Now records every Tuesday afternoon and hits your podcatcher later that evening.

“The shows cover a wide range of security topics, from the basics of technologies such as DNSSec & Bitcoin, to in-depth tech analysis of the latest hacks hitting the news. The main host, Steve Gibson, is great at breaking down tech subjects over an audio medium – in a way you can follow and be interested in during your commute or flight. It’s running at over 800 episodes now, regular as clockwork every week, so you can rely on it. Funnily, Steve Gibson has often reminded me of you – able to assess what’s going on with a subject, calmly find the important points, and describe them to the rest of us in a way that’s engaging and relatable.” – Gary Robinson, Chief Security Officer, Ulseka

The Jordan Harbinger Show

Hosted by Jordan Harbinger

Today, The Jordan Harbinger Show has over 15 million downloads per month and features a wide array of guests like Kobe Bryant, Moby, Dennis Rodman, Tip “T.I.” Harris, Tony Hawk, Cesar Millan, Simon Sinek, Eric Schmidt, and Neil deGrasse Tyson, to name a few. Jordan continues to teach his skills, for free, at 6-Minute Networking. In addition to hosting The Jordan Harbinger Show, Jordan is a consultant for law enforcement, military, and security companies and is a member of the New York State Bar Association and the Northern California Chapter of the Society of Professional Journalists.

“Excellent podcasts where he interviews people from literally every walk of life, how they have become successful, why they have failed (if they have) as well as great personal development coaching ideas.” – Jeff DeVerter, CTO, Products and Services, RackSpace

WorkLife with Adam Grant

Hosted by Adam Grant

Adam hosts WorkLife, a chart-topping TED original podcast. His TED talks on languishing, original thinkers, and givers and takers have been viewed more than 30 million times. His speaking and consulting clients include Google, the NBA, Bridgewater, and the Gates Foundation. He writes on work and psychology for the New York Times, has served on the Defense Innovation Board at the Pentagon, has been honored as a Young Global Leader by the World Economic Forum, and has appeared on Billions.

“I don’t listen to many technical podcasts. I like Caroline Wong’s and have listened to it a number of times (Humans of InfoSec), but since much of my role is getting developers on board with security actions, I gravitate toward more psychology-based podcasts – Adam Grant’s is amazing (it’s called WorkLife).” – Jennifer Czaplewski, Senior Director, Cyber Security, Target

“You know lately I have been listening to WorkLife with Adam Grant. Not a tech podcast but a management one.” – Paula Thrasher, Senior Director Infrastructure, PagerDuty

SRE Prodcast

Hosted by Core Team Members: Betsy Beyer, MP English, Salim Virji, Viv

The Google Prodcast Team has gone through quite a few iterations and hiatuses over the years, and many people have had a hand in its existence. For the longest time, a handful of SREs produced the Prodcast for the listening pleasure of the other engineers here at Google.

We wanted to make something that would be of interest to folks across organizations and technical implementations. In his last act as part of the Prodcast, JTR put us in touch with Jennifer Petoff, Director of SRE Education, in order to have the support of the SRE organization behind us.

“The SRE Prodcast is Google’s podcast about Site Reliability Engineering and production software. In Season 1, we discuss concepts from the SRE Book with experts at Google.” – Jennifer Petoff, Director, Program Management, Cloud Technical Education Google

Make Me Smart

Hosted by Kai Ryssdal and Kimberly Adams

Every weekday, Kai Ryssdal and Kimberly Adams break down the news in tech, the economy and culture. How do companies make money from disinformation? How can we tackle student debt? Why do 401(k)s exist? What will it take to keep working moms from leaving the workforce? Together, we dig into complex topics to help make today make sense.

“I literally learn 3 new things about topics I never would have tried to learn about.” – Kadi Grigg, Enablement Specialist, Sonatype

EconTalk

Hosted by Russ Roberts

Conversations for the Curious is an award-winning weekly podcast hosted by Russ Roberts of Shalem College in Jerusalem and Stanford’s Hoover Institution. The eclectic guest list includes authors, doctors, psychologists, historians, philosophers, economists, and more. Learn how the health care system really works, the serenity that comes from humility, the challenge of interpreting data, how potato chips are made, what it’s like to run an upscale Manhattan restaurant, what caused the 2008 financial crisis, the nature of consciousness, and more.

“The only podcast I listen to is actually EconTalk, which has nothing to do with tech!” – Kelly Shortridge, Senior Principal, Product Technology, Fastly

Leading the Future of Work

Hosted by Jacob Morgan

The Future of Work With Jacob Morgan is a unique show that explores how the world of work is changing, and what we need to do in order to thrive. Each week several episodes are released, ranging from long-form interviews with the world’s top business leaders and authors to shorter episodes that provide a strategy or tip listeners can apply to become more successful.

The show is hosted by 4x best-selling author, speaker, and futurist Jacob Morgan, and the goal is to give listeners the inspiration, the tools, and the resources they need to succeed and grow at work and in life.

Episodes are not scripted, which makes for fun, authentic, engaging, and educational episodes filled with insights and practical advice.

“It is hard for me to keep up with podcasts. The one I listen to regularly is “Leading The Future of Work” by Jacob Morgan. I know it is not technical, but I think it is extremely important for technical people to understand what the business thinks and is concerned about.” – Keyaan Williams, Managing Director, CLASS-LLC

Hacking Humans

Hosted by Dave Bittner and Joe Carrigan

Deception, influence, and social engineering in the world of cyber crime.

Join Dave Bittner and Joe Carrigan each week as they look behind the social engineering scams, phishing schemes, and criminal exploits that are making headlines and taking a heavy toll on organizations around the world.

“In case we needed any reminders that humanity is a scary place.” – Matt Howard, SVP and CMO, Virtu

Cloud Security Podcast

Hosted by Ashish Rajan, Shilpi Bhattacharjee, and Various Contributors

Cloud Security Podcast is a WEEKLY Video and Audio Podcast that brings in-depth cloud security knowledge to you from the best and brightest cloud security experts and leaders in the industry each week over our LIVE STREAMs.

We are the FIRST podcast that carved the niche for Cloud Security in late 2019. As of 2021, the large cloud service providers (Azure, Google Cloud, etc.) have all followed suit and started their own cloud security podcasts. While we recommend you listen to their podcasts as well, we’re the ONLY VENDOR NEUTRAL podcast in the space and will preserve our neutrality indefinitely.

“I really love Ashish’s cloud security podcast, listened to it for a while now. He gets really good people on it and it’s a nice laid back listen, too.” – Simon Maple, Field CTO, Snyk

DSO Overflow

Hosted by Glenn Wilson, Steve Giguere, Jessica Cregg

In depth conversations with influencers blurring the lines between Dev, Sec, and Ops!

We speak with professionals working in cyber security, software engineering, and operations to talk about a number of DevSecOps topics. We discuss how organisations factor security into their product delivery cycles without compromising the value of doing DevOps and Agile.

“One of my favourite meetups in London ‘DevSecOps London Gathering’ has a podcast where they invite their speakers” – Stefania Chaplin, Solutions Architect UK&I, GitLab

Pardon the Interruption

Hosted by Tony Kornheiser and Mike Wilbon

Longtime sportswriters Tony Kornheiser and Mike Wilbon debate and discuss the hottest topics, issues and events in the world of sports in a provocative and fast-paced format.

Similar in format to Gene Siskel and Roger Ebert’s At the Movies, PTI is known for its humorous and often loud tone, as well as the “rundown” graphic which lists the topics yet to be discussed on the right-hand side of the screen. The show’s popularity has led to the creation of similar shows on ESPN and similar segments on other series, and the rundown graphic has since been implemented on the morning editions of SportsCenter, among many imitators. – Wikipedia

“I’m interested in sports, and Tony and Mike are well-informed, amusing, and opinionated. It also doesn’t hurt any that I’ve known them since they were at The Washington Post and I was freelancing there. What you see on television, or hear on their podcast, is exactly how they are in real life. This sincerity of personality is a big reason why they’ve become so successful.” – Steven Vaughan-Nichols, Technology and business journalist and analyst. Red Ventures


You want content? We’ve got your content right here!

Fri, 09/02/2022 - 22:47
ONE Summit Agenda is now live!

This post originally appeared on LF Networking’s blog. The author, Heather Kirksey, is VP, Community & Ecosystem. ONE Summit is the Linux Foundation Networking event that focuses on the networking and automation ecosystem transforming public and private sector innovation across 5G, network edge, and cloud native solutions. Our family of open source projects addresses every layer of infrastructure needs, from the user edge to the cloud/core. Attend ONE Summit to get the scoop on hot topics for 2022!

Today LF Networking announced our schedule for ONE Summit, and I have to say that I’m extraordinarily excited. I’m excited because it means we’re growing closer to returning to meeting in-person, but more importantly I was blown away by the quality of our speaking submissions. Before I talk more about the schedule itself, I want to say that this quality is all down to you: You sent us a large number of thoughtful, interesting, and innovative ideas; You did the work that underpins the ideas; You did the work to write them up and submit them. The insight, lived experience, and future-looking thought processes humbled me with their breadth and depth. You reminded me why I love this ecosystem and the creativity within open source. We’ve all been through a tough couple of years, but we’re still here innovating, deploying, and doing work that improves the world. A huge shout out to everyone across every company, community, and project that made the job of choosing the final roster just so difficult.

Now onto the content itself. As you’ve probably heard, we’ve got 5 tracks: Industry 4.0, Security and Privacy, The New Networking Stack, Operationalizing Deployment, and Emerging Technologies and Business Models:

  • “Industry 4.0” looks at the confluence of edge and networking technologies that enable technology to uniquely improve our interactions with the physical world, whether that’s agriculture, manufacturing, robotics, or our homes. We’ve got a great line-up focused both on use cases and the technologies that enable them.
  • “Security and Privacy” covers the most important issues we struggle with, both as global citizens and as an ecosystem. Far from being an afterthought, security is front and center as we look at zero-trust and vulnerability management, and consider which technologies and policies best serve enterprises and consumers.
  • Technology is always front and center for open source groups and our “New Networking Stack” track dives deep into the technologies and components we will all use as we build the infrastructure of the future. In this track we have a number of experts sharing their best practices, as well as ideas for forward-looking usages.
  • In our “Operationalizing Deployment” track, we learn from the lived experience of those taking ideas and turning them into workable reality. We ask questions like: How do you bridge cultural divides? How do you introduce and truly leverage DevOps? How do you integrate compliance and reference architectures? How do you not only deploy but bring in Operations? How do you automate, and how do you use tools to accomplish digital transformation in our ecosystem(s)?
  • We don’t focus only on today’s challenges and successes; we also look ahead with “Emerging Technologies and Business Models.” Intent, Metaverse, MASE, scaling today’s innovation into tomorrow’s operations, new takes on APIs – these are the concepts that will shape us in the next 5-10 years, and we talk about how to start approaching and understanding them.

Every talk that made it into this program has unique and valuable insight, and I’m so proud to be part of the communities that proposed them. I’m also honored to have worked with one of the best Programming Committees in open source events ever. These folks took so much time and care to provide both quantitative and qualitative input that helped shape this agenda. Please be sure to thank them for their time because they worked hard to take the heart of this event to the next level. If you want to be in the room and in the hallway with these great speakers, there is only ONE place to be. Early bird registration ends soon, so don’t miss out and register now!

And please don’t forget to sponsor. Creating a space for all this content does cost money, and we can’t do it without our wonderful sponsors. If you’re still on the fence, please consider how amazing these sessions are and the attendee conversations they will spark. We may not be the biggest conference out there, but we are the most focused on decision makers, end users, and the supply chains that enable them. You won’t find a more engaged and thoughtful audience anywhere else.


The post You want content? We’ve got your content right here! appeared first on Linux Foundation.

Is it time for an OSPO in your organization?

Fri, 09/02/2022 - 22:11

Is your organization consuming open source software, or is it starting to contribute to open source projects? If so, perhaps it’s time for you to start an OSPO: an open source program office.

At the LF, we’re dedicating resources to improving your understanding of all things open source, such as our Guide to Enterprise Open Source and the Evolution of the Open Source Program Office, published last year.

A new Linux Foundation Research report, A Deep Dive into Open Source Program Offices, published in partnership with the TODO Group and authored by Dr. Ibrahim Haddad, showcases the many forms of OSPOs, their maturity models, responsibilities, and the challenges they face in enterprise open source adoption; their staffing requirements are also discussed in detail.

“The past two decades have accelerated open source software adoption and increased involvement in contributing to existing projects and creating new projects. Software is where a lot of value lies, and the vast majority of software developed is open source software, providing access to billions of dollars worth of external R&D. If your organization relies on open source software for products or services and does not yet have a formalized OSPO to manage all aspects of working with open source, please consider this report a call to establish your OSPO and drive for leadership in the open source areas that are critical to your products and services.” – Dr. Ibrahim Haddad, Ph.D., General Manager, LF AI & Data Foundation

Download Report

Here are some of the report’s important lessons:

An OSPO can help you manage and track your company’s use of open source software and assist you when interacting with other stakeholders. It can also serve as a clearinghouse for information about open source software and its usage throughout your organization.

Your OSPO is the central nervous system for an organization’s open source strategy and provides governance, oversight, and support for all things related to open source.

OSPOs create and maintain an inventory of your open source software (OSS) assets and track and manage any associated risks. The OSPO also guides how to best use open source software within the organization and can help coordinate external contributions to open source projects.

To be effective, the OSPO needs to have a deep understanding of the business and the technical aspects of open source software. It also needs to work with all levels of the organization, from executives to engineers.

An OSPO is designed to:

  • Be the center of competency for an organization’s open source operations and structure,
  • Place a strategy and set of policies on top of an organization’s open source efforts.

This can include creating policies for code use, distribution, selection, auditing, and other areas; training developers; ensuring legal compliance, and promoting and building community engagement to benefit the organization strategically.

An organization’s OSPO can take many different forms, but typically it is a centralized team that reports to the company’s executive level. The size of the team will depend on the size and needs of the organization, and the OSPO will pass through different stages of maturity as adoption grows.

When starting, an OSPO might just be a single individual or a very small team. As the organization’s use of open source software grows, the OSPO can expand to include more people with different specialties. For example, there might be separate teams for compliance, legal, and community engagement.

This won’t be the last we have to say about the OSPO in 2022. Further insights are in development, including a qualitative study on the OSPO’s business value across different sectors, and the TODO Group’s publication of the 2022 OSPO Survey results will take place during OSPOCon in just a few weeks.

“There is no single template for building an OSPO. Its creation and growth can vary depending on the organization’s size, culture, industry, or even its milestones.

That’s why I keep seeing more and more open source leaders finding critical value in building connections with other professionals in the industry. OSPOCon is an excellent networking and learning space where those working (or willing to work) in open source program offices that rely on open source technologies come together to learn and share best practices, experiences, and tools to overcome the challenges they face.” – Ana Jiménez, OSPO Program Manager at TODO Group

Join us there and be sure to read the report today to gain key insights into forming and running an OSPO in your organization. 

The post Is it time for an OSPO in your organization? appeared first on Linux Foundation.

Addressing Cybersecurity Challenges in Open Source Software: What you need to know

Fri, 09/02/2022 - 01:16

by Ashwin Ramaswami

June 2022 saw the publication of Addressing Cybersecurity Challenges in Open Source Software, a joint research initiative launched by the Open Source Security Foundation in collaboration with Linux Foundation Research and Snyk. The research dives into security concerns in the open source ecosystem. If you haven’t read it, this article will give you the report’s who, what, and why, summarizing its key takeaways so that it can be relevant to you or your organization.

Who is the report for?

This report is for everyone whose work touches open source software. Whether you’re a user of open source, an OSS developer, or part of an OSS-related institution or foundation, you can benefit from a better understanding of the state of security in the ecosystem.

Open source consumers and users: It’s very likely that you rely on open source software as dependencies if you develop software. And if you do, one important consideration is the security of the software supply chain. Security incidents such as log4shell have shown how open source supply chain security touches nearly every industry. Even industries and organizations that have traditionally not focused on open source software now realize the importance of ensuring their OSS dependencies are secure. Understanding the state of OSS security can help you to manage your dependencies intelligently, choose them wisely, and keep them up to date.

Open source developers and maintainers: People and organizations that develop or maintain open source software need to ensure they use best practices and policies for security. For example, it can be valuable for large organizations to have open source security policies. Moreover, many OSS developers also use other open source software as dependencies, making understanding the OSS security landscape even more valuable. Developers have a unique role to play in leading the creation of high-quality code and the respective governance frameworks and best practices around it.

Institutions: Institutions such as open source foundations, funders, and policymaking groups can benefit from this report by understanding and implementing the key findings of the research and their respective roles in improving the current state of the OSS ecosystem. Funding and support can only go to the right areas if priorities are informed by the problems the community is facing now, which the research assists in identifying.

What are the major takeaways?

The data from this report was collected by conducting a worldwide survey of:

  • Individuals who contribute to, use, or administer OSS;
  • Maintainers, core contributors, and occasional contributors to OSS;
  • Developers of proprietary software who use OSS; and
  • Individuals with a strong focus on software supply chain security

The survey also included data collected from several major package ecosystems by using Snyk Open Source, a software composition analysis (SCA) tool that is free for individuals and open source maintainers.

Here are the major takeaways and recommendations from the report:

  • Too many organizations are not prepared to address OSS security needs: At least 34% of organizations did not have an OSS security policy in place, suggesting these organizations may not be prepared to address OSS security needs.
  • Small organizations must prioritize developing an OSS security policy: Small organizations are significantly less likely to have an OSS security policy. Such organizations should prioritize developing this policy and having a CISO and OSPO (Open Source Program Office).
  • Using additional security tools is a leading way to improve OSS security: Security tooling is available for open source security across the software development lifecycle. Moreover, organizations with an OSS security policy have a higher frequency of security tool use than those without an OSS security policy.
  • Collaborate with vendors to create more intelligent security tools: Organizations consider that one of the most important ways to improve OSS security across the supply chain is adding greater intelligence to existing software security tools, making it easier to integrate OSS security into existing workflows and build systems.
  • Implementing best practices for secure software development is the other leading way to improve OSS security: Understanding best practices for secure software development, through courses such as the OpenSSF’s Secure Software Development Fundamentals Courses, has been identified repeatedly as a leading way to improve OSS supply chain security.
  • Use automation to reduce your attack surface: Infrastructure as Code (IaC) tools and scanners allow automating CI/CD activities to eliminate threat vectors around manual deployments.
  • Consumers of open source software should give back to the communities that support them: The use of open source software has often been a one-way street where users see significant benefits with minimal cost or investment. For larger open source projects to meet user expectations, organizations must give back and close the loop by financially supporting OSS projects they use.
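The dependency-hygiene recommendations above can be partly automated. As a minimal, hypothetical sketch (the package names and advisory data below are invented for illustration, not drawn from a real vulnerability feed), a script can compare installed dependency versions against the first version known to contain a fix and flag anything that needs updating:

```python
# Hypothetical example: flag dependencies older than the version that
# fixes a known vulnerability. Package names and advisory data are invented.

def parse_version(version):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def outdated_dependencies(installed, advisories):
    """Return packages whose installed version predates the fixed version.

    installed:  {package name: installed version string}
    advisories: {package name: first version containing the fix}
    """
    flagged = []
    for package, fixed_in in advisories.items():
        current = installed.get(package)
        if current is not None and parse_version(current) < parse_version(fixed_in):
            flagged.append((package, current, fixed_in))
    return flagged

if __name__ == "__main__":
    installed = {"log-lib": "2.14.1", "safe-lib": "3.2.0"}   # invented names
    advisories = {"log-lib": "2.17.1"}                       # invented advisory
    for package, current, fixed_in in outdated_dependencies(installed, advisories):
        print(f"UPDATE {package}: {current} -> {fixed_in}")
```

In practice this role is filled by SCA tooling such as the Snyk tools the report mentions, wired into CI so that every build checks dependencies automatically rather than relying on manual review.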

Why is this important now?

Open source software is a boon: its collaborative and open nature has allowed society to benefit from various innovative, reliable, and free software tools. However, these benefits only last when users contribute back to open source software and when users and developers exercise due diligence around security. While the most successful open source projects have gotten such support, other projects have not – even as open source use has continued to be more ubiquitous.

Thus, it is more important than ever to be aware of the problems and issues everyone faces in the OSS ecosystem. Some organizations and open source maintainers have strong policies and procedures for handling these issues. But, as this report shows, other organizations are just facing these issues now.

Finally, we’ve seen the risks of not maintaining proper security practices around OSS dependencies. Failure to update open source dependencies has led to costs as high as $425 million. Given these risks, a little investment in strong security practices and awareness around open source – as outlined in the report’s recommendations – can go a long way.

We suggest you read the report – then see how you or your organization can take the next step to keep yourself secure!

Download Report

The post Addressing Cybersecurity Challenges in Open Source Software: What you need to know appeared first on Linux Foundation.

The Network Evolves: ONE Summit Presents Collaborative and Transformative Program Across Networking, Edge, IoT

Thu, 09/01/2022 - 00:00
  • Industry experts will share their knowledge across 5G, factory floor, agriculture, government, Smart Home, and Robotics use cases
  • Speakers from 50+ companies, 20 end users, and 16 countries during ONE Summit 
  • Industry experts across the expanding open networking and edge ecosystems confirmed to present insights during ONE Summit North America, November 15-16, in Seattle, WA

SAN FRANCISCO, August 31, 2022—LF Networking, the facilitator of collaboration and operational excellence across open source networking projects, announced the ONE Summit North America 2022 session schedule is now available. Taking place in Seattle, WA November 15-16, ONE Summit is the one industry event that brings together decision makers and implementers for two days of in-depth presentations and interactive conversations around 5G, Access, Edge, Telco, Cloud, Enterprise Networking, and more open source technology developments. 

“LF Networking is proud to set a high bar with the quality of content submissions for this year’s ONE Summit, and to offer an innovative line-up of diverse sessions,” said Arpit Joshipura, General Manager, Networking, Edge, and IoT, the Linux Foundation. “We will also touch on gaming, robotics, 5G network automation, factory floor, agriculture and more, with a strong program based on the power of connectivity.” 

The event will feature an extensive program of 70+ diverse business and technical sessions that cover cutting-edge topics across five presentation tracks: Industry 4.0; Security; The New Networking Stack; Operational Deployments (case studies, success & challenges); and Emerging Technologies and Business Models. 

Conference Session Highlights:

ONE Summit returns in-person for the first time in two years in its best format ever! The use-case-driven content is strong in breadth and depth and includes sessions from open source users with whom LF Networking is engaged for the first time. Attendees will have a choose-your-own-adventure experience as they select from a variety of content formats, from interactive sessions, panels, and in-depth tutorials to lightning talks offering quick glances at future-looking thought processes. 

  • Real-world deployment stories of open source in action, from:
    • leading telco and enterprise organizations including TELUS, Google, Deutsche Telekom, Red Hat, Verizon, Nokia, China Mobile, Equinix, Netgate, Pantheon and others. 
    • government and academic institutions including DARPA, the Naval Information Warfare Center (NIWC), UK Government, University of Southern California, Jeju National University, Georgia Tech, and others. 
  • Use case examples across the Metaverse, Robotics, Smart Home, Digital Twins, 5G Automation, Edge Orchestration, AI/ML, Kubernetes Orchestration, and more. 
  • Hands-on experiential learning and technical deep-dives in IoT and edge deployments led by expert practitioners.
  • Lightning talks offer the opportunity to quickly learn about security and emerging technologies.
  • Sessions contributing insight into open source projects across the ecosystem, including Akraino, CAMARA, eBPF, EdgeX Foundry, EVE, Nephio, OAI, OIF, ONAP, OpenSSF, ORAN-SC, SONiC, and more.


ONE Summit attendees engage directly with thought leaders across 5G, Cloud Native and Network Edge and expand knowledge of open source networking technology progression. Register today to gain fresh insights on technical and business collaboration shaping the future of networking, edge, and cloud computing.

Corporate registration is offered at the early price of US$995 through Sept. 9. Day passes are available for US$675, and Individual/Hobbyist (US$350) and Academic/Student (US$100) passes are also available. Members of The Linux Foundation, LF Networking, and LF Edge receive a 20 percent discount off registration and can contact to request a member discount code. Members of the press who would like to request a press pass to attend should contact

To register, visit Corporate attendees should register before September 9, 2022 for the best rates. 

Developer & Testing Forum

ONE Summit will be followed by a complimentary, two-day LF Networking Developer and Testing Forum (DTF), a grassroots hands-on event organized by the LF Networking projects. ONE Summit attendees are encouraged to extend the experience, roll up sleeves, and join the incredible developer community to advance the open source networking and automation technologies of the future. Session videos from the Spring 2022 LFN Developer & Testing Forum, which took place June 13-16 in Porto, Portugal, are available here.


ONE Summit  is made possible thanks to generous sponsors, including: Diamond sponsor Dell Technologies; Gold sponsor kyndryl; Silver sponsor Futurewei Technologies; and Bronze sponsors Data Bank and 

For information on becoming an event sponsor, click here or email for more information and to speak to the team.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 2,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. Learn more at

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: Linux is a registered trademark of Linus Torvalds. ###

The post The Network Evolves: ONE Summit Presents Collaborative and Transformative Program Across Networking, Edge, IoT appeared first on Linux Foundation.

Open 3D Foundation (O3DF) Announces Keynote Lineup for O3DCon—Online and In-Person in Austin, October 17-19

Wed, 08/31/2022 - 06:14

Keynotes, workshops and sessions will explore innovations in open source 3D development and use of Open 3D Engine (O3DE) for gaming, entertainment, metaverse, AI/ML, healthcare applications and more

SAN FRANCISCO—August 30, 2022—The Open 3D Foundation (O3DF) today announced a slate of keynote speakers for O3DCon, its flagship conference, which will be held October 17-19 in Austin, Texas and online. O3DCon will bring together technology leaders, indie developers and academia to share ideas and best practices, discuss hot topics and foster the future of 3D development across a variety of industries and disciplines. The schedule is available at

Industry luminaries will headline the keynote sessions, including:

  • Bill Vass, vice president of engineering, Amazon Web Services
  • Bryce Adelstein Lelbach, principal architect, NVIDIA and standard C++ Library Evolution chair, “C++ Horizons”
  • Deb Nicholson, executive director, Python Software Foundation and founding board member, SeaGL (the Seattle GNU/Linux Conference), “Open Source is a Multiplier”
  • Denis Dyack, founder, Apocalypse Studios, “The Successes, Challenges and Future of O3DE”
  • Mathew Kemp, game director, Hadean, “Supercharging Gameworld Performance Using the Cloud”
  • Nithya Ruff, head, Open Source Program Office, Amazon and chair, Linux Foundation Board of Directors, “Game On! How to Be a Good Open Source Citizen” 
  • Omar Zohdi, technical ecosystem manager, Imagination Technologies, “O3DE and the Future of Mobile Graphics Development”
  • Royal O’Brien, executive director, Open 3D Foundation and general manager of Digital Media & Games, Linux Foundation, “State of the Open 3D Foundation”
  • Sheri Graner Ray, CEO and founder, Zombie Cat Studios, “How Big Is Your Dream? Rethinking the Role of Passion in Development”
  • Stephen Jacobs, director of Open@RIT and professor at the School of Interactive Games and Media, Rochester Institute of Technology, “Open in Academia, Science and Why O3DE Should Be Part of It All”

Early Bird Registration Ends September 16
Register today at Organizations interested in sponsorships can contact

“After celebrating our first year in July and recognizing the immense growth of our community, we’re excited to connect with them at this year’s O3DCon,” said Royal O’Brien, executive director of O3DF. “Since O3DF’s inception, we’ve grown to 25 member companies, including Epic Games, LightSpeed Studios and Microsoft, and we’ve announced a new O3DE release. This year’s O3DCon will feature a diversity of use cases that go way beyond gaming, including metaverse, cloud, open source licensing, digital twin in healthcare and lots more. If your organization is building 3D stacks for a new generation of applications, O3DCon is an event designed to help you get there.”

The three-day O3DCon conference schedule will also include sessions, lightning talks, panel discussions and exhibits exploring innovations and best practices in open 3D development, open source licensing, interoperability across 3D engines and the benefits of using O3DE to revolutionize real-time 3D development. Sessions of note include:

Attendees can also participate in a slate of hands-on workshops and training sessions on the first day of the conference, October 17.

About the Open 3D Engine (O3DE) Project
O3DE is the flagship project managed by the O3DF. The open source project is a modular, cross-platform 3D engine built to power anything from AAA games to cinema-quality 3D worlds to high-fidelity simulations. The code is hosted on GitHub under the Apache 2.0 license. The O3D Engine community is very active, averaging up to 2 million line changes and 350-450 commits monthly from 60-100 authors across 41 repos. To learn more, please visit and get involved and connect with the community on and

About the Open 3D Foundation (O3DF)
Established in July 2021, the mission of the O3DF is to make an open source, fully-featured, high-fidelity, real-time 3D engine for building games and simulations, available to every industry. The O3DF is home to the O3DE project. To learn more, please visit

About the Linux Foundation
Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. 

For more information, please visit us at

Media Inquiries:

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: Linux is a registered trademark of Linus Torvalds.

The post Open 3D Foundation (O3DF) Announces Keynote Lineup for O3DCon—Online and In-Person in Austin, October 17-19 appeared first on Linux Foundation.

LFPH Tackles the Next Frontier in Open Source Health Technology: The Rise of Digital Twins

Tue, 08/30/2022 - 23:11

This post originally appeared on the LF Public Health blog. The author, Jim St. Clair, is the Executive Director. With the Digital Twin Consortium, Academic Medical Centers, and other LF projects, Linux Foundation Public Health addresses open software for next-generation modeling.

Among the many challenges in our global healthcare delivery landscape, digital health plays an increasingly important role on almost a daily basis, from personal medical devices, to wearables, to new clinical technology and data exchanges. Beyond direct patient care, digital health also applies to diagnostics, drug effectiveness, and treatment delivery. These use cases are being driven by rapid growth in data modeling, artificial intelligence (AI)/machine learning (ML), and data visualization. Given the rapid digitalization of healthcare delivery, emerging digital twin technology is considered the next system that will advance further efforts in medical discoveries and improve clinical and public health outcomes.

What is a Digital Twin?

Put simply, a digital twin is a digital replica or “twin” of a physical object, process, or service. It is a virtual model (a compilation of data plus algorithms) that can dynamically pair the physical and digital worlds. The ultimate goal for digital twins, such as in manufacturing, is to iteratively model, test, and optimize a physical object in the virtual space until that model meets expected performance, at which point it is ready to be built or enhanced (if already built) in the physical world. To pair the digital world with the real one, a digital twin leverages real-time data, such as smart sensor readings, coupled with analytics and often artificial intelligence (AI) to detect and prevent system failures, improve system performance, and explore innovative uses or functional models.
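To make the pairing concrete, here is a deliberately toy sketch (the class, sensor values, and thresholds are invented for illustration) of the core digital twin loop the paragraph describes: the twin ingests readings from its physical counterpart, keeps a mirrored virtual state, and uses a simple model of expected behavior to flag deviations:

```python
# Toy illustration of a digital twin: a virtual model kept in sync with
# sensor readings from a physical asset, flagging anomalous behavior.
# All names, values, and thresholds are invented for illustration.

class PumpTwin:
    """Virtual replica of a physical pump, updated from sensor data."""

    def __init__(self, expected_rpm, tolerance):
        self.expected_rpm = expected_rpm  # model: nominal operating speed
        self.tolerance = tolerance        # allowed deviation before alerting
        self.current_rpm = None           # mirrored state of the physical pump

    def ingest(self, rpm_reading):
        """Sync the twin's state with the latest sensor reading."""
        self.current_rpm = rpm_reading

    def is_anomalous(self):
        """Compare the mirrored state against the model's expectation."""
        if self.current_rpm is None:
            return False  # no data yet, nothing to judge
        return abs(self.current_rpm - self.expected_rpm) > self.tolerance

if __name__ == "__main__":
    twin = PumpTwin(expected_rpm=3000, tolerance=150)
    for reading in [2990, 3050, 3400]:   # simulated sensor stream
        twin.ingest(reading)
        if twin.is_anomalous():
            print(f"ALERT: pump at {reading} rpm deviates from the model")
```

Real deployments replace the hard-coded expectation with physics-based or machine-learned models and stream readings over IoT infrastructure, but the sync-then-compare loop is the same idea.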

As mentioned, developments in smart sensor technologies and wireless networks have pushed forward the applications of the Internet of Things (IoT), and contributed to the practical applications of digital twin technology. Thanks to IoT, cloud computing and real time analytics, digital twins can now be created to collect much more real-world and real-time data from a wide range of sources, and thus can establish and maintain more comprehensive simulations of physical entities, their functionality, and changes they undergo over time.

Digital Twins in Healthcare

While the application of digital twins in healthcare is still very new, there are three general categories for their use: digital twins of a patient/person or a body system; digital twins of an organ or a smaller unit; and digital twins of an organization.

Digital twins can simulate the whole human body, as well as a particular body system or body function (e.g., the digestive system). One example of this kind of patient-sized digital twin is the University of Miami’s MLBox system, designed for the measurement of a patient’s “biological, clinical, behavioral and environmental data” to design personalized treatments for sleep issues.

Digital twins can also simulate one body organ, part of an organ or system, like the heart, and can even model subcellular (organelle/sub-organelle) functions or functions at the molecular level of interest within a cell. Dassault Systèmes’ Living Heart Project is an example of this kind of digital twin, which is designed to simulate the human heart’s reaction to implantation of cardiovascular devices.

Additionally, healthcare institutions (e.g., a hospital) can have their corresponding digital twins, such as Singapore General Hospital. This kind of simulation can be useful when determining environmental risks within institutions, such as the risks of infectious disease transmission.

The “Heart” of Health Digital Twins is Open Source – and the LF

While digital twins represent a complex and sophisticated new digital model, the building blocks of this technology—like all other software foundations—are best supported by an open-source development and governance model. The Linux Foundation sustains the nexus of open source development that underpins digital twin technology:

  • Linux Foundation Public Health (LFPH) is dedicated to advancing open source software development for digital health applications across the globe. Together with its members, LFPH is developing projects that address public health data infrastructure, improving health equity, advancing cybersecurity, and building multi-stakeholder collaboration for patient engagement and health information exchange.
  • The LF AI and Data Foundation is working to build and support an open artificial intelligence (AI) and data community, and drive open source innovation in the AI and data domains by enabling collaboration and the creation of new opportunities for all the members of the community.
  • LF Edge aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. By bringing together industry leaders, LF Edge will create a common framework for hardware and software standards and best practices critical to sustaining current and future generations of IoT and edge devices.
  • The Open 3D Foundation includes many collaborators working to make an open source, fully-featured, high-fidelity, realtime 3D engine for building games and simulations, such as digital twins, available to every industry. “The Open 3D Foundation, along with its partners and community, is helping advance 3D digital twin technology by providing an open source implementation that is completely dynamic, with no need to preload the media,” said General Manager Royal O’Brien. “This can ensure the smallest customizable footprint possible for any device platform to meet any industry needs.”

Additionally, LFPH has established a joint membership with the Digital Twin Consortium, focused on healthcare and life sciences. “Artificial Intelligence (AI), edge computing and digital twins represent the next generation in data transformation and patient engagement,” said Jim St. Clair, LFPH Executive Director. “Developing a collaborative relationship with the Digital Twin Consortium will greatly advance the joint efforts of model development and supporting open source components to advance adoption in healthcare through multi-stakeholder collaboration.” LFPH looks forward to supporting, innovating, and driving forward an open source vision in the critical and growing area of digital twins for healthcare.

The post LFPH Tackles the Next Frontier in Open Source Health Technology: The Rise of Digital Twins appeared first on Linux Foundation.

Elevate Your Organization’s Open Source Strategy

Wed, 08/24/2022 - 00:58

The role of software, specifically open source software, is more influential than ever and drives today’s innovation. Maintaining and growing future innovation depends on the open source community. Enterprises that understand this are driving transformation and rising to the challenges by boosting their collaboration across industries, understanding how to support their open source developers, and contributing to the open source community.

They realize that success depends on a cohesive, dedicated, and passionate open source community, which can range from hundreds to thousands of individuals whose collaboration is key to achieving the project’s goals. It can be challenging to manage all aspects of an open source project, considering all the different parts that drive it. For example:

  • Project’s scope and goals
  • Participating members, maintainers, and collaborators
  • Management and governance
  • Legal guidelines and procedures
  • IT services 
  • Source control, CI/CD, distribution, and cloud providers
  • Communication channels and social media

The Linux Foundation’s LFX provides various tools to help open source communities design and adopt a successful project strategy considering all moving parts. So how do they do it? Let’s explore that using the Hyperledger project as an example. 

1. Understand your project’s participation

Through the LFX Individual Dashboard, participants can register the identities they use to contribute code to GitHub and Gerrit (the Hyperledger project uses both). The tool then uses those identities to connect users’ contributions, affiliations, memberships, training, certifications, earned badges, and general information.

With this information, other LFX tools generate data charts that help the community visualize participation in GitHub and Gerrit across the different Hyperledger repositories, along with detailed contribution metrics, code participation, and issue participation.

The LFX Organization Dashboard is a convenient tool to help managers and organizations manage their project memberships, discover similar projects to join, and understand the team’s engagement in the community. In detail, it provides information on:

  • Code contributions
  • Committee members
  • Event speakers and attendees 
  • Training and certification
  • Project enrollments

It is vital to have the project’s member and participant identities organized to better understand how their work makes a difference in the project and how their participation interacts with others’ toward the project’s goals.

2. Manage your project’s processes

LFX Project Control Center offers a one-stop portal for program managers to organize their project participation, IT services, and quick access to other LFX tools.

Project managers can also connect:

  • Their project’s source control
  • Issue tracking tool
  • Distribution service
  • Cloud provider
  • Mailing lists
  • Meeting management
  • Wiki and hosted domains 

For example, Hyperledger can view all related organizations under the Hyperledger Foundation umbrella, analyze each participating project, and connect services like GitHub, Jira, and Confluence, as well as communication channels like Twitter accounts.

Managing all the project’s aspects in one place makes it easier for managers to visualize their project scope and better understand how all their services impact the project’s performance.

3. Reach outside and get your project in the spotlight

Social and earned media are vital to ensuring your project reaches its consumers. In addition, it is essential to have good visibility into your project’s influence in the open source world and where it is making the greatest impact.

LFX’s Insights Social Media Metrics provides high-level metrics on a project’s social media accounts, such as:

  • Twitter followers and following information 
  • Tweets and retweet breakdown
  • Trending tweets
  • Hashtag breakdown 
  • Contributor and user mentions

In the case of Hyperledger, we have an overall view of their tweet and retweet breakdown. In addition, we can see how tweets by Bitcoin News are making an impression on interested communities.

Insights helps you analyze how your project impacts other regions and reaches diverse audiences by language, so you can adjust communication and marketing strategies for the channels that open source participants rely on for the latest information on how the community contributes and engages with others. For example, tweets written in English, Japanese, and Spanish by Hyperledger contributors appear in an overall languages chart with direct and indirect impressions calculated.

The bottom line

A coherent open source project strategy is a crucial driver of how enterprises manage their open source programs across their organization and industry. LFX is one of the tools that make enterprise open source programs successful. It is an exclusive benefit for Linux Foundation members and projects. If your organization and project would like to join us, learn more about membership or hosting your project.

The post Elevate Your Organization’s Open Source Strategy appeared first on Linux Foundation.

Secure Coding Practice – A Developer’s Learning Experience of Developing Secure Software Course

Fri, 08/19/2022 - 01:29

The original article appeared on the OpenSSF blog. The author, Harimohan Rajamohanan, is a Solution Architect and Full Stack Developer with Wipro Limited. Learn more about the Linux Foundation’s Developing Secure Software (LFD121) course.

All software is under continuous attack today, so software architects and developers should focus on practical steps to improve information security. There are plenty of materials available online that talk about various aspects of secure development practices, but they are scattered across various articles and books. Recently, I came across a course developed by the Open Source Security Foundation (OpenSSF), part of the Linux Foundation, that is geared toward software developers, DevOps professionals, web application developers, and others interested in learning the best practices of secure software development. My learning experience taking the DEVELOPING SECURE SOFTWARE (LFD121) course was positive, and I immediately started applying these learnings in my work as a software architect and developer.

“A useful trick for creating secure systems is to think like an attacker before you write the code or make a change to the code” – DEVELOPING SECURE SOFTWARE (LFD121)

My earlier understanding of software security was primarily focused on the authentication and authorization of users. In this context, the secure coding practices I was following were limited to:

  • No unauthorized read
  • No unauthorized modification
  • Ability to prove someone did something
  • Auditing and logging

It is not broad enough to assume software is secure simply because a strong authentication and authorization mechanism is present. Almost all application development today depends on open source software, so it is important that developers verify the security of the open source chain of contributors and its dependencies. Recent vulnerability disclosures and supply chain attacks were an eye-opener for me about the potential for vulnerabilities in open source software. The natural focus of the majority of developers is to get the business logic working and deliver the code without any functional bugs.

The course gave me a comprehensive outlook on the secure development practices one should follow to defend against the kinds of attacks that happen in modern-day software.

What does risk management really mean?

The course has detailed practical advice on considering security as part of the requirements of a system. Being part of various global system integrators for over a decade, I was tasked to develop application software for my customers. The functional requirements were typically written down in such projects but covered only a few aspects of security in terms of user authentication and authorization. Documenting the security requirement in detail will help developers and future maintainers of the software to have an idea of what the system is trying to accomplish for security.

Key takeaways on risk assessment:

  • Analyze security basics including risk management, the “CIA” triad, and requirements
  • Apply secure design principles such as least privilege, complete mediation, and input validation
  • Supply chain evaluation tips on how to reuse software with security in mind, including selecting, downloading, installing, and updating such software
  • Document the high-level security requirements in one place

Secure design principles while designing a software solution

Design principles are guides based on experience and practice. The software will generally be secure if you apply the secure design principles. This course covers a broad spectrum of design principles in terms of the components you trust and the components you do not trust. The key principles I learned from the course that guide me in my present-day software design areas are:

  • The user and program should operate using the least privilege. This limits the damage from error or attack.
  • Every data access or manipulation attempt should be verified and authorized using a mechanism that cannot be bypassed.
  • Access to systems should be based on more than one condition. How do you prove the identity of the authenticated user is who they claimed to be? Software should support two-factor authentication.
  • The user interface should be designed for ease of use to make sure users routinely and automatically use the protection mechanisms correctly.
  • Importance of understanding what kind of attackers you expect to counter.

A few examples of how I applied the secure design principles in my solution designs:
  • The solutions I build often use a database. I have used the SQL GRANT command to limit the privilege the program gets. In particular, the DELETE privilege is not given to any program. And I have implemented a soft delete mechanism in the program that sets the column “active = false” in the table for delete use cases.
  • The recent software designs I have been doing are based on microservice architecture where there is a clear separation between the GUI and backend services. Each part of the overall solution is authenticated separately. This may minimize the attack surface.
  • Client-side input validation is limited to countering accidental mistakes; the real input validation happens on the server side. The API endpoints validate all inputs thoroughly before processing them. For instance, a PUT API not only validates the resource modification inputs but also makes sure the resource is present in the database before proceeding with the update.
  • Updates are allowed only if the user consuming the API is authorized to do it.
  • Databases are not directly accessible for use by a client application.
  • All the secrets like cryptographic keys and passwords are maintained outside the program in a secure vault. This is mainly to avoid secrets in source code going into version control systems.
  • I have started to look for OpenSSF Best Practices Badge while selecting open source software and libraries in my programs. I also look for the security posture of open source software by checking the OpenSSF scorecards score.
  • Another practice I follow while using open source software is to check whether the software is maintained. Are there recent releases or announcements from the community?
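The soft-delete pattern described above can be sketched in a few lines. The sketch below uses Python’s built-in sqlite3 module with an illustrative table and column; SQLite has no GRANT command, so the least-privilege aspect is represented by the application code never issuing DELETE at all:

```python
import sqlite3

def soft_delete(conn: sqlite3.Connection, user_id: int) -> None:
    """Mark a row inactive instead of issuing DELETE.

    Because the application never runs DELETE, the database account
    it uses does not need the DELETE privilege at all.
    """
    conn.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))
    conn.commit()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER DEFAULT 1)"
)
conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")

soft_delete(conn, 1)

# The row survives for auditing; normal queries simply filter on active = 1.
row = conn.execute("SELECT active FROM users WHERE id = 1").fetchone()
print(row[0])  # 0
```

The same idea carries over to any SQL database: grant the application account SELECT, INSERT, and UPDATE, and let an `active` flag stand in for physical deletion.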

Secure coding practices

In my opinion, this course covers almost all aspects of secure coding practices that a developer should focus on. The key focus areas include:

  1. Input validations
  2. How to validate numbers
  3. Key issues with text, including Unicode and locales
  4. Usage of regular expression to validate text input
  5. Importance of minimizing the attack surfaces
  6. Secure defaults and secure startup.

For example, apply API input validation on IDs to make sure that records belonging to those IDs exist in the database. This reduces the attack surface. Also make sure that the object in an object-modify request exists in the database before processing it.
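The validate-before-update flow just described can be sketched as follows (a toy in-memory model with hypothetical field names, not a real API framework):

```python
# Hypothetical PUT handler logic: validate the payload *and* confirm the
# target resource exists server-side before applying an update.
def update_resource(db: dict, resource_id: int, payload: dict) -> dict:
    if not isinstance(resource_id, int) or resource_id <= 0:
        raise ValueError("invalid resource id")
    allowed = {"name", "age"}
    if not payload or not set(payload) <= allowed:
        raise ValueError("unexpected fields in payload")
    if resource_id not in db:  # never trust the client's claim that it exists
        raise LookupError("resource not found")
    db[resource_id].update(payload)
    return db[resource_id]

db = {1: {"name": "alice", "age": 30}}
print(update_resource(db, 1, {"age": 31}))  # {'name': 'alice', 'age': 31}
try:
    update_resource(db, 2, {"age": 31})     # unknown id is rejected
except LookupError as e:
    print(e)                                # resource not found
```

Rejecting unknown IDs and unexpected fields up front keeps the code path for malformed or hostile requests as short as possible.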

  • Process data securely
      • Importance of treating untrusted data as dangerous
      • Avoid default and hardcoded credentials
      • Understand memory safety problems such as out-of-bounds reads or writes, double-free, and use-after-free
      • Avoid undefined behavior
  • Call out to other programs
      • Securely call other programs
      • How to counter injection attacks such as SQL injection and OS command injection
      • Securely handle file names and file paths
  • Send output
      • Securely send output
      • How to counter cross-site scripting (XSS) attacks
      • Use HTTP hardening headers, including Content Security Policy (CSP)
      • Prevent common output-related vulnerabilities in web applications
      • How to securely format strings and templates
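As a small illustration of countering XSS through output encoding, Python’s standard html module can escape untrusted data before it is embedded in a page (a minimal sketch, not a full templating solution):

```python
import html

def render_comment(comment: str) -> str:
    # Escape untrusted data before embedding it in HTML so that
    # attacker-supplied markup is rendered as text, not executed.
    return "<p>" + html.escape(comment) + "</p>"

malicious = '<script>alert("xss")</script>'
print(render_comment(malicious))
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Real applications should combine output encoding like this with the HTTP hardening headers mentioned above, such as a Content Security Policy.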

“Security is a process – a journey – and not a simple endpoint” – DEVELOPING SECURE SOFTWARE (LFD121)

This course provides practical guidance on developing secure software: considering security requirements, applying secure design principles, countering common implementation mistakes, using tools to detect problems before you ship the code, and promptly handling vulnerability reports. I strongly recommend this course and the certification to all developers out there.

About the author

Harimohan Rajamohanan is a Solution Architect and Full Stack Developer, Open Source Program Office, Lab45, Wipro Limited. He is an open source software enthusiast and has worked in areas such as application modernization, digital transformation, and cloud native computing. His major focus areas are software supply chain security and observability.

The post Secure Coding Practice – A Developer’s Learning Experience of Developing Secure Software Course appeared first on Linux Foundation.

Boeing joins the ELISA Project as a Premier Member to Strengthen its Commitment to Safety-Critical Applications

Thu, 08/11/2022 - 21:00
Boeing to lead New Aerospace Working Group

SAN FRANCISCO – August 11, 2022 –  Today, the ELISA (Enabling Linux in Safety Applications) Project announced that Boeing has joined as a Premier member, marking its commitment to Linux and its effective use in safety critical applications. Hosted by the Linux Foundation, ELISA is an open source initiative that aims to create a shared set of tools and processes to help companies build and certify Linux-based safety-critical applications and systems.

“Boeing is modernizing software to accelerate innovation and provide greater value to our customers,” said Jinnah Hosein, Vice President of Software Engineering at the Boeing Company. “The demand for safe and secure software requires rapid iteration, integration, and validation. Standardizing around open source products enhanced for safety-critical avionics applications is a key aspect of our adoption of state-of-the-art techniques and processes.”

As a leading global aerospace company, Boeing develops, manufactures and services commercial airplanes, defense products, and space systems for customers in more than 150 countries. It’s already using Linux in current avionics systems, including commercial systems certified to DO-178C Design Assurance Level D. Joining the ELISA Project will help Boeing pursue its vision of generational change in software development. Additionally, Boeing will work with the ELISA Technical Steering Committee (TSC) to launch a new Aerospace Working Group that will work in parallel with the project’s other working groups, such as automotive and medical devices.

“We want to improve industry-standard tools related to certification and assurance artifacts in order to standardize improvements and contribute new features back to the open source community. We hope to leverage open source tooling (such as a cloud-based DevSecOps software factory) and industry standards to build world class software and provide an environment that attracts industry leaders to drive cultural change at Boeing,” said Hosein.

Linux is used in all major industries because it can enable faster time to market for new features while taking advantage of high-quality code development processes. Launched in February 2019, ELISA works with Linux kernel and safety communities to agree on what should be considered when Linux is used in safety-critical systems. The project has several dedicated working groups that focus on providing resources that system integrators can apply and use to analyze their systems qualitatively and quantitatively.

“Linux has a history of being a reliable and stable development platform that advances innovation for a wide range of industries,” said Kate Stewart, Vice President of Dependable Embedded Systems at the Linux Foundation. “With Boeing’s membership, ELISA will start a new focus in the aerospace industry, which is already using Linux in selected applications. We look forward to working with Boeing and others in the aerospace sector, to build up best practices for working with Linux in this space.”

Other ELISA Project members include ADIT, AISIN AW CO., Arm, Automotive Grade Linux, Automotive Intelligence and Control of China, Banma, BMW Car IT GmbH, Codethink, Elektrobit, Horizon Robotics, Huawei Technologies, Intel, Lotus Cars, Toyota, Kuka, Linuxtronix, Mentor, NVIDIA, SUSE, Suzuki, Wind River, OTH Regensburg, and ZTE.

Upcoming ELISA Events

The ELISA Project has several upcoming events for the community to learn more or to get involved, including:

  • ELISA Summit – Hosted virtually for participants around the world on September 7-8, this event will feature an overview of the project, the mission and goals of each working group, and an opportunity for attendees to ask questions and network with ELISA leaders. The schedule is now live and includes speakers from Aptiv Services Deutschland GmbH, Boeing, CodeThink, The Linux Foundation, Mobileye, Red Hat, and Robert Bosch GmbH. Check out the schedule here. Registration is free and open to the public.
  • ELISA Forum – Hosted in-person in Dublin, Ireland, on September 12, this event takes place the day before Open Source Summit Europe begins. It will feature an update on all of the working groups, an interactive System-Theoretic Process Analysis (STPA) use case and an Ask Me Anything session.  Pre-registration is required. To register for ELISA Forum, add it to your Open Source Summit Europe registration.
  • Open Source Summit Europe – Hosted in-person in Dublin and virtually on September 13-16, ELISA will have two dedicated presentations about enabling safety in safety-critical applications and safety and open source software. Learn more.

For more information about ELISA, visit

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: Linux is a registered trademark of Linus Torvalds.


The post Boeing joins the ELISA Project as a Premier Member to Strengthen its Commitment to Safety-Critical Applications appeared first on Linux Foundation.

Adopting Sigstore Incrementally

Thu, 08/11/2022 - 02:21

This post is authored by Hayden Blauzvern and originally appeared on Sigstore’s blog. Sigstore is a new standard for signing, verifying, and protecting software. It is a project of the Linux Foundation. 

Developers, package maintainers, and enterprises that would like to adopt Sigstore may already sign published artifacts. Signers may have existing procedures to securely store and use signing keys. Sigstore can be used to sign artifacts with existing self-managed, long-lived signing keys. Sigstore provides a simple user experience for signing, verification, and generating structured signature metadata for artifacts and container signatures. Sigstore also offers a community-operated, free-to-use transparency log for auditing signature generation.

Sigstore additionally has the ability to use code signing certificates with short-lived signing keys bound to OpenID Connect identities. This signing approach offers simplicity due to the lack of key management; however, this may be too drastic of a change for enterprises that have existing infrastructure for signing. This blog post outlines strategies to ease adoption of Sigstore while still using existing signing approaches.

Signing with self-managed, long-lived keys

Developers that maintain their own signing keys but want to migrate to Sigstore can first switch to using Cosign to generate a signature over an artifact. Cosign supports importing an existing RSA, ECDSA, or ED25519 PEM-encoded PKCS#1 or PKCS#8 key with cosign import-key-pair --key key.pem, and can sign and verify with cosign sign-blob --key cosign.key artifact-path and cosign verify-blob --key cosign.pub artifact-path.

  • Developers can get accustomed to Sigstore tooling to sign and verify artifacts.
  • Sigstore tooling can be integrated into CI/CD pipelines.
  • For signing containers, signature metadata is published with the OCI image in an OCI registry.

Signing with self-managed keys with auditability

While maintaining their own signing keys, developers can increase auditability of signing events by publishing signatures to the Sigstore transparency log, Rekor. This allows developers to audit when signatures are generated for artifacts they maintain, and also monitor when their signing key is used to create a signature.

Developers can upload a signature to the transparency log during signing with COSIGN_EXPERIMENTAL=1 cosign sign-blob --key cosign.key artifact-path. If developers would like to use their own signing infrastructure while still publishing to a transparency log, they can use the Rekor CLI or API. To upload an artifact and cryptographically verify its inclusion in the log using the Rekor CLI:

rekor-cli upload --rekor_server \
  --signature <artifact_signature> \
  --public-key <your_public_key> \
  --artifact <url_to_artifact|local_path>

rekor-cli verify --rekor_server \
  --signature <artifact_signature> \
  --public-key <your_public_key> \
  --artifact <url_to_artifact|local_path>

In addition to PEM-encoded certificates and public keys, Sigstore supports uploading many different key formats, including PGP, Minisign, SSH, PKCS#7, and TUF. When uploading using the Rekor CLI, specify the --pki-format flag. For example, to upload an artifact signed with a PGP key:

gpg --armor -u --output signature.asc --detach-sig package.tar.gz

gpg --export --armor "" > public.key

rekor-cli upload --rekor_server \
  --signature signature.asc \
  --public-key public.key \
  --pki-format=pgp \
  --artifact package.tar.gz

Benefits

  • Developers begin to publish signing events for auditability.
  • Artifact consumers can create a verification policy that requires a signature be published to a transparency log.

Self-managed keys in identity-based code signing certificate with auditability

When requesting a code signing certificate from the Sigstore certificate authority Fulcio, Fulcio binds an OpenID Connect identity to a key, allowing for a verification policy based on identity rather than a key. Developers can request a code signing certificate from Fulcio with a self-managed long-lived key, sign an artifact with Cosign, and upload the artifact signature to the transparency log.

However, artifact consumers can still fail-open with verification (allow the artifact, while logging the failure) if they do not want to take a hard dependency on Sigstore (require that Sigstore services be used for signature generation). A developer can use their self-managed key to generate a signature. A verifier can simply extract the verification key from the certificate without verification of the certificate’s signature. (Note that verification can occur offline, since inclusion in a transparency log can be verified using a persisted signed bundle from Rekor and code signing certificates can be verified with the CA root certificate. See Cosign’s verification code for an example of verifying the Rekor bundle.)

Once a consumer takes a hard dependency on Sigstore, a CI/CD pipeline can move to fail-closed (forbid the artifact if verification fails).
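The fail-open versus fail-closed distinction can be sketched as a pair of admission policies (illustrative Python only, not Sigstore’s actual verification API):

```python
import logging

def verify_signature(artifact: bytes) -> bool:
    # Stand-in for real verification (e.g. checking a signature and a
    # Rekor inclusion proof). Here we simulate a verification failure.
    return False

def admit_fail_open(artifact: bytes) -> bool:
    """Allow the artifact, but log the verification failure."""
    if not verify_signature(artifact):
        logging.warning("verification failed; admitting artifact anyway")
    return True

def admit_fail_closed(artifact: bytes) -> bool:
    """Forbid the artifact if verification fails."""
    return verify_signature(artifact)

print(admit_fail_open(b"pkg"))    # True  (failure is only logged)
print(admit_fail_closed(b"pkg"))  # False (artifact is rejected)
```

A CI/CD pipeline typically starts with the fail-open policy while signing coverage ramps up, then flips to fail-closed once all artifacts are expected to verify.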

  • A stronger verification policy that enforces both the presence of the signature in a transparency log and the identity of the signer.
  • Verification policies can be enforced fail-closed.

Identity-based (“keyless”) signing

This final step is added for completeness. Signing is done using code signing certificates, and signatures must be published to a transparency log for verification. With identity-based signing, fail-closed is the only option, since Sigstore services must be online to retrieve code signing certificates and append entries to the transparency log. Developers will no longer need to maintain signing keys.


The Sigstore tooling and infrastructure can be used as a whole or modularly. Each separate integration can help to improve the security of artifact distribution while allowing for incremental updates and verifying each step of the integration.

The post Adopting Sigstore Incrementally appeared first on Linux Foundation.

Delta 2.0 – The Foundation of your Data Lakehouse is Open

Wed, 08/10/2022 - 03:52

This post originally appeared on the Delta Lake blog.

We are happy to announce the release of Delta Lake 2.0 (pypi, maven, release notes) on Apache Spark 3.2, with a number of new features, the most significant of which are covered below.

The significance of Delta Lake 2.0 is not just a number – though it is timed quite nicely with Delta Lake’s 3rd birthday. It reiterates our collective commitment to the open-sourcing of Delta Lake, as announced by Michael Armbrust’s Day 1 keynote at Data + AI Summit 2022.


What’s new in Delta Lake 2.0?

There have been a lot of new features released in the last year between Delta Lake 1.0, 1.2, and now 2.0. This blog will review a few of these specific features that are going to have a large impact on your workload.

Improving data skipping

When exploring or slicing data using dashboards, data practitioners will often run queries with a specific filter in place. As a result, the matching data is often buried in a large table, requiring Delta Lake to read a significant amount of data. With data skipping via column statistics and Z-Order, the data can be clustered by the most common filters used in queries — sorting the table to skip irrelevant data, which can dramatically increase query performance.

Support for data skipping via column statistics

When querying any table from HDFS or cloud object storage, by default, your query engine will scan all of the files that make up your table. This can be inefficient, especially if you only need a smaller subset of data. To improve this process, as part of the Delta Lake 1.2 release, we included support for data skipping by utilizing the Delta table’s column statistics.

For example, when running the following query, you do not want to unnecessarily read files outside of the year or uid ranges.

When Delta Lake writes a table, it will automatically collect the minimum and maximum values and store this directly into the Delta log (i.e. column statistics). Therefore, when a query engine reads the transaction log, those read queries can skip files outside the range of the min/max values as visualized below.

This approach is more efficient than row-group filtering within the Parquet file itself, as you do not need to read the Parquet footer. For more information on the latter process, please refer to How Apache Spark performs a fast count using the parquet metadata. For more information on data skipping, please refer to data skipping.
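As a toy illustration of the idea (not Delta’s actual implementation), per-file min/max statistics let a reader discard files whose range cannot possibly match the filter; the file names and ranges below are made up:

```python
# Each entry mimics the min/max column statistics Delta stores per file.
files = [
    {"path": "part-0.parquet", "year": (2017, 2018)},
    {"path": "part-1.parquet", "year": (2019, 2020)},
    {"path": "part-2.parquet", "year": (2021, 2022)},
]

def files_to_read(files, column, value):
    """Keep only files whose [min, max] range could contain the value."""
    return [f["path"] for f in files
            if f[column][0] <= value <= f[column][1]]

print(files_to_read(files, "year", 2020))  # ['part-1.parquet']
```

With a filter on year = 2020, two of the three files are skipped without ever being opened, which is exactly the saving column statistics provide at scale.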

Support Z-Order clustering of data to reduce the amount of data read

But data skipping using column statistics is only one part of the solution. To maximize data skipping, what is also needed is the ability to skip with data clustering. As implied previously, data skipping is most effective when files have a very small minimum/maximum range. While sorting the data can help, this is most effective when applied to a single column.

Regular sorting of data by primary and secondary columns (left) and 2-dimensional Z-order data clustering for two columns (right).

But with Z-order, its space-filling curve provides better multi-column data clustering. This data clustering allows column stats to be more effective in skipping data based on filters in a query. See the documentation and this blog for more details.
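The space-filling curve behind Z-order is the Morton curve, which interleaves the bits of the clustering columns; a small sketch of the interleaving itself (illustrative only, not Delta’s implementation):

```python
def z_value(x: int, y: int, bits: int = 8) -> int:
    """Interleave the bits of x and y to produce a Z-order (Morton) value."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

# Points that are close in both dimensions get nearby Z-values, so sorting
# by z_value clusters the data across both columns at once, keeping each
# file's min/max ranges tight for both columns.
points = [(2, 2), (0, 1), (1, 1), (0, 0), (1, 0)]
print(sorted(points, key=lambda p: z_value(*p)))
# [(0, 0), (1, 0), (0, 1), (1, 1), (2, 2)]
```

Sorting by a single column keeps only that column’s ranges tight; the interleaved value keeps both reasonably tight, which is why Z-order clustering improves skipping for multi-column filters.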

Support Change Data Feed on Delta tables

One of the biggest value propositions of Delta Lake is its ability to maintain data reliability in the face of changing records brought on by data streams. However, identifying which records changed has traditionally required scanning and reading the entire table, creating significant overhead that can slow performance.

With Change Data Feed (CDF), you can now read a Delta table’s change feed at the row level rather than the entire table to capture and manage changes for up-to-date silver and gold tables. This improves your data pipeline performance and simplifies its operations.

To enable CDF, you must explicitly use one of the following methods:

  • New table: Set the table property delta.enableChangeDataFeed = true in the CREATE TABLE command.

    CREATE TABLE student (id INT, name STRING, age INT) TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • Existing table: Set the table property delta.enableChangeDataFeed = true in the ALTER TABLE command.

    ALTER TABLE myDeltaTable SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • All new tables:

    set spark.databricks.delta.properties.defaults.enableChangeDataFeed = true;

An important thing to remember is that once you enable the change data feed option for a table, you can no longer write to the table using Delta Lake 1.2.1 or below. However, you can always read the table. In addition, only changes made after you enable the change data feed are recorded; past changes to a table are not captured.

So when should you enable Change Data Feed? The following use cases should drive the decision.

  • Silver and Gold tables: When you want to improve Delta Lake performance by streaming row-level changes for up-to-date silver and gold tables. This is especially apparent following MERGE, UPDATE, or DELETE operations, where the change feed accelerates and simplifies ETL operations.
  • Transmit changes: Send a change data feed to downstream systems such as Kafka or RDBMS that can use the feed to process later stages of data pipelines incrementally.
  • Audit trail table: Capturing the change data feed as a Delta table provides perpetual storage and efficient query capability to see all changes over time, including when deletes occurred and what updates were made.

See the documentation for more details.

Support for dropping columns

In versions of Delta Lake prior to 1.2, Parquet files were required to store data under the same column names as the table schema. Delta Lake 1.2 introduced a mapping between logical column names and the physical column names in those Parquet files. The physical names remain unique and Parquet-compliant, so a logical column rename becomes a simple change to the mapping, and logical column names can contain arbitrary characters.

As part of the Delta Lake 2.0 release, we leveraged column mapping so that dropping a column is a metadata operation. Instead of physically modifying all of the files of the underlying table to drop a column, a drop is a simple modification of the Delta transaction log (i.e., a metadata operation) reflecting the column removal. Run the following SQL command to drop a column:

    ALTER TABLE myDeltaTable DROP COLUMN myColumn

See the documentation for more details.
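A toy model of why this is metadata-only: with column mapping, the schema holds a logical-to-physical name mapping, and a drop simply removes an entry while the Parquet files are left untouched. The physical names (`col-1`, etc.) below are illustrative, not Delta Lake's actual naming scheme.

```python
# Logical column names (what queries see) mapped to stable physical names
# (what the Parquet files actually store).
mapping = {"id": "col-1", "name": "col-2", "age": "col-3"}

def drop_column(mapping: dict, logical_name: str) -> dict:
    """Metadata-only drop: remove the logical->physical entry.

    No data file is rewritten; readers simply stop projecting the
    physical column that no longer has a logical name.
    """
    new_mapping = dict(mapping)
    del new_mapping[logical_name]
    return new_mapping

after_drop = drop_column(mapping, "age")
print(after_drop)  # {'id': 'col-1', 'name': 'col-2'}
```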

Support for Dynamic Partition Overwrites

In addition, Delta Lake 2.0 now supports dynamic partition overwrite mode for partitioned tables; that is, it overwrites only the partitions that have data written to them at runtime.

When in dynamic partition overwrite mode, we overwrite all existing data in each logical partition for which the write will commit new data. Any existing logical partitions for which the write does not contain data will remain unchanged. This mode is only applicable when data is being written in overwrite mode: either INSERT OVERWRITE in SQL, or a DataFrame write with df.write.mode("overwrite"). In SQL, you can run the following commands:

SET spark.sql.sources.partitionOverwriteMode=dynamic;
INSERT OVERWRITE TABLE default.people10m SELECT * FROM morePeople;

Note that dynamic partition overwrite conflicts with the replaceWhere option for partitioned tables. See the documentation for more details.
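The semantics can be sketched with a small, hypothetical in-memory model (not Spark code): a static overwrite replaces the whole table, while a dynamic overwrite replaces only the partitions present in the incoming data.

```python
def dynamic_overwrite(table: dict, new_rows: list, partition_key: str) -> dict:
    """Overwrite only the partitions that appear in new_rows."""
    incoming = {}
    for row in new_rows:
        incoming.setdefault(row[partition_key], []).append(row)
    result = dict(table)     # static overwrite mode would instead start from {}
    result.update(incoming)  # replace only the partitions being written
    return result

table = {"2022-01": [{"day": "2022-01", "v": 1}],
         "2022-02": [{"day": "2022-02", "v": 2}]}
new_rows = [{"day": "2022-02", "v": 9}]

result = dynamic_overwrite(table, new_rows, "day")
print(sorted(result))  # ['2022-01', '2022-02'] -- January is untouched
```

Only the `2022-02` partition is replaced; in static overwrite mode, `2022-01` would have been dropped because the incoming data contains no rows for it.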

Additional Features in Delta Lake 2.0

In the spirit of performance optimizations, Delta Lake 2.0.0 also includes these additional features:

  • Support for idempotent writes to Delta tables to enable fault-tolerant retry of Delta table writing jobs without writing the data multiple times to the table. See the documentation for more details.
  • Experimental support for multi-part checkpoints, which split a Delta Lake checkpoint into multiple parts to speed up writing and reading checkpoints. See the documentation for more details.
  • Other notable changes
    • Improve generated-column data skipping by adding support for skipping on generated columns derived from nested columns.
    • Improve table schema validation by blocking data types unsupported by Delta Lake.
    • Support creating a Delta Lake table with an empty schema.
    • Change the behavior of DROP CONSTRAINT to throw an error when the constraint does not exist. Before this version, the command returned silently.
    • Fix symlink manifest generation when partition values contain spaces.
    • Fix an issue where incorrect commit stats are collected.
    • More ways to access the Delta table OPTIMIZE file compaction command.
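The idempotent-write idea above (in Delta Lake, carried by the `txnAppId` and `txnVersion` DataFrameWriter options) can be sketched with a small stand-in: a retried write carrying an already-seen version is ignored rather than appended twice.

```python
def idempotent_append(table: list, seen: dict, app_id: str, version: int, rows: list) -> None:
    """Append rows only if (app_id, version) has not been committed yet.

    `seen` records the highest version committed per application id,
    playing the role of the transaction identifiers stored in the
    Delta log.
    """
    if seen.get(app_id, -1) >= version:
        return  # duplicate retry of an already-committed write: ignore
    table.extend(rows)
    seen[app_id] = version

table, seen = [], {}
idempotent_append(table, seen, "jobA", 1, ["r1"])
idempotent_append(table, seen, "jobA", 1, ["r1"])  # retried write, skipped
idempotent_append(table, seen, "jobA", 2, ["r2"])
print(table)  # ['r1', 'r2']
```

This is why a failed-and-retried writing job cannot duplicate data: the retry replays the same version, which the table has already recorded.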
Building a Robust Data Ecosystem

As noted in Michael Armbrust’s Day 1 keynote and our Dive into Delta Lake 2.0 session, a fundamental aspect of Delta Lake is the robustness of its data ecosystem.

As data volume and variety continue to rise, the need to integrate with the most common ingestion engines is critical. For example, we’ve recently announced integrations with Apache Flink, Presto, and Trino — allowing you to read and write to Delta Lake directly from these popular engines. Check out Delta Lake > Integrations for the latest integrations.

Delta Lake will be relied on even more to bring reliability and improved performance to data lakes by providing ACID transactions and unifying streaming and batch transactions on top of existing cloud data stores. By building connectors with the most popular compute engines and technologies, the appeal of Delta Lake will continue to increase — driving more growth in the community and rapid adoption of the technology across the most innovative and largest enterprises in the world.

Updates on Community Expansion and Growth

We are proud of the community and the tremendous work over the years to deliver the most reliable, scalable, and performant table storage format for the lakehouse to ensure consistent high-quality data. None of this would be possible without the contributions from the open-source community. In the span of a year, we have seen the number of downloads skyrocket from 685K monthly downloads to over 7M downloads/month. As noted in the following figure, this growth is in no small part due to the quickly expanding Delta ecosystem.

All of this activity and the growth in unique contributions — including commits, PRs, changesets, and bug fixes — has culminated in an increase in contributor strength by 633% during the last three years (Source: The Linux Foundation Insights).

But it is important to remember that we could not have done this without the contributions of the community.


Saying this, we wanted to provide a quick shout-out to all of those involved with the release of Delta Lake 2.0: Adam Binford, Alkis Evlogimenos, Allison Portis, Ankur Dave, Bingkun Pan, Burak Yilmaz, Chang Yong Lik, Chen Qingzhi, Denny Lee, Eric Chang, Felipe Pessoto, Fred Liu, Fu Chen, Gaurav Rupnar, Grzegorz Kołakowski, Hussein Nagree, Jacek Laskowski, Jackie Zhang, Jiaan Geng, Jintao Shen, Jintian Liang, John O’Dwyer, Junyong Lee, Kam Cheung Ting, Karen Feng, Koert Kuipers, Lars Kroll, Liwen Sun, Lukas Rupprecht, Max Gekk, Michael Mengarelli, Min Yang, Naga Raju Bhanoori, Nick Grigoriev, Nick Karpov, Ole Sasse, Patrick Grandjean, Peng Zhong, Prakhar Jain, Rahul Shivu Mahadev, Rajesh Parangi, Ruslan Dautkhanov, Sabir Akhadov, Scott Sandre, Serge Rielau, Shixiong Zhu, Shoumik Palkar, Tathagata Das, Terry Kim, Tyson Condie, Venki Korukanti, Vini Jaiswal, Wenchen Fan, Xinyi, Yijia Cui, Yousry Mohamed.

We’d also like to thank Nick Karpov and Scott Sandre for their help with this post.

How can you help?

We’re always excited to work with current and new community members. If you’re interested in helping the Delta Lake project, please join our community today through many forums, including GitHub, Slack, Twitter, LinkedIn, YouTube, and Google Groups.

The post Delta 2.0 – The Foundation of your Data Lakehouse is Open appeared first on Linux Foundation.

The Linux Foundation Announces Keynote Speakers for Open Source Summit Europe 2022

Thu, 08/04/2022 - 22:53

Global visionaries headline the premier open source event in Europe to share insights on OSS adoption in Europe, driving the circular economy, finding inspiration through the pandemic, supply chain security, and more.

SAN FRANCISCO, August 4, 2022 —  The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the keynote speakers for Open Source Summit Europe, taking place September 13-16 in Dublin, Ireland. The event is being produced in a hybrid format, with both in-person and virtual participation available, and is co-located with the Hyperledger Global Forum, OpenSSF Day, Linux Kernel Maintainer Summit, KVM Forum, and Linux Security Summit, among others.

Open Source Summit Europe is the leading conference for developers, sysadmins, and community leaders to gather, collaborate, share information, gain insights, solve technical problems, and further innovation. It is a conference umbrella composed of 13 events covering the most important technologies and issues in open source, including LinuxCon, Embedded Linux Conference, OSPOCon, SupplyChainSecurityCon, CloudOpen, Open AI + Data Forum, and more. Over 2,000 attendees are expected.

2022 Keynote Speakers Include:

  • Hilary Carter, Vice President of Research, The Linux Foundation
  • Bryan Che, Chief Strategy Officer, Huawei; Cloud Native Computing Foundation Governing Board Member & Open 3D Foundation Governing Board Member
  • Demetris Cheatham, Senior Director, Diversity, Inclusion & Belonging Strategy, GitHub
  • Gabriele Columbro, Executive Director, Fintech Open Source Foundation (FINOS)
  • Dirk Hohndel, Chief Open Source Officer, Cardano Foundation
  • Ross Mauri, General Manager, IBM LinuxONE
  • Dušan Milovanović, Health Intelligence Architect, World Health Organization
  • Mark Pollock, Explorer, Founder & Collaborator
  • Christopher “CRob” Robinson, Director of Security Communications, Product Assurance and Security, Intel Corporation
  • Emilio Salvador, Head of Standards, Open Source Program Office, Google
  • Robin Teigland, Professor of Strategy, Management of Digitalization, in the Entrepreneurship and Strategy Division, Chalmers University of Technology; Director, Ocean Data Factory Sweden and Founder, Peniche Ocean Watch Initiative (POW)
  • Linus Torvalds, Creator of Linux and Git
  • Jim Zemlin, Executive Director, The Linux Foundation

Additional keynote speakers will be announced soon. 

Registration (in-person) is offered at the price of US$1,000 through August 23. Registration to attend virtually is $25. Members of The Linux Foundation receive a 20 percent discount off registration and can contact to request a member discount code. 

Health and Safety
In-person attendees will be required to show proof of COVID-19 vaccination or provide a negative COVID-19 test to attend, and will need to comply with all on-site health measures, in accordance with The Linux Foundation Code of Conduct. To learn more, visit the Health & Safety webpage.

Event Sponsors
Open Source Summit Europe 2022 is made possible thanks to our sponsors, including Diamond Sponsors: AWS, Google and IBM, Platinum Sponsors: Huawei, Intel and OpenEuler, and Gold Sponsors: Cloud Native Computing Foundation, Codethink, Docker, Mend, NGINX, Red Hat, and Styra. For information on becoming an event sponsor, click here or email us.

Members of the press who would like to request a press pass to attend should contact Kristin O’Connell.

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at

The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.

Visit our website and follow us on Twitter, LinkedIn, and Facebook for all the latest event updates and announcements.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: Linux is a registered trademark of Linus Torvalds. 


Media Contact
Kristin O’Connell
The Linux Foundation

The post The Linux Foundation Announces Keynote Speakers for Open Source Summit Europe 2022 appeared first on Linux Foundation.

The American Association of Insurance Services & The Linux Foundation Welcome Jefferson Braswell as openIDL Project Executive Director

Wed, 08/03/2022 - 20:00

LISLE, IL., August 3, 2022 — The American Association of Insurance Services (AAIS) and the Linux Foundation welcome Jefferson Braswell as the new Executive Director of the openIDL Project.

“AAIS is excited about the expansion of openIDL in the insurance space and the addition of Jefferson as Executive Director signals even more strength and momentum to the fast-developing project,” said Ed Kelly, AAIS Executive Director. “We are happy to continue to work with the Linux Foundation to help affect meaningful, positive change for the insurance ecosystem.”

“openIDL is a Linux Foundation Open Governance Network and the first of its kind in the insurance industry,” said Daniela Barbosa, General Manager of Blockchain, Healthcare and Identity at the Linux Foundation. “It leverages open source code and community governance for objective transparency and accountability among participants with strong executive leadership helping shepherd this type of open governance networks. Jeff Braswell’s background and experience in financial standards initiatives and consortium building aligns very well with openIDL’s next growth and expansion period.“

Braswell has been successfully providing leading-edge business solutions for information-intensive enterprises for over 30 years. As a founding Director, he recently completed a 6-year term on the Board of the Global Legal Entity Identifier Foundation (GLEIF), where he chaired the Technology, Operations and Standards Committee. He is also the Chair of the Algorithmic Contract Types Unified Standards Foundation (ACTUS), and he has actively participated in international financial data standards initiatives.

Previously, as Co-Founder and President of Berkeley-based Risk Management Technologies (RMT), Braswell designed and led the successful implementation of advanced, firm-wide risk management solutions integrated with enterprise-wide data management tools. They were used by many of the world’s largest financial institutions, including Wells Fargo, Credit Suisse, Chase, PNC, Sumitomo Mitsui Banking Corporation, Mellon, Wachovia, Union Bank and ANZ.

“We appreciate the foundation that AAIS laid for openIDL, and I look forward to bringing my expertise and knowledge to progress this project forward,” shared Braswell. “Continuing the work with the Linux Foundation to positively impact insurance services through open-source technology is exciting and will surely change the industry for the better moving forward.” 

openIDL, an open source, distributed ledger platform, infuses efficiency, transparency and security into regulatory reporting. With openIDL, insurers fulfill requirements while retaining the privacy of their data. Regulators have the transparency and insights they need, when they need them. Initially developed by AAIS, expressly for its Members, openIDL is now being further advanced by the Linux Foundation as an open-source ecosystem for the entire insurance industry.

Established in 1936, AAIS serves the Property & Casualty insurance industry as the only national nonprofit advisory organization governed by its Member insurance carriers. AAIS delivers tailored advisory solutions including best-in-class policy forms, rating information and data management capabilities for commercial lines, inland marine, farm & agriculture and personal lines insurers. Its consultative approach, unrivaled customer service and modern technical capabilities underscore a focused commitment to the success of its members. AAIS also serves as the administrator of openIDL, the insurance industry’s regulatory blockchain, providing unbiased governance within existing insurance regulatory frameworks. For more information about AAIS, please visit



openIDL (open Insurance Data Link) is an open blockchain network that streamlines regulatory reporting and provides new insights for insurers, while enhancing timeliness, accuracy, and value for regulators. openIDL is the first open blockchain platform that enables the efficient, secure, and permissioned-based collection and sharing of statistical data. For more information, please visit



John Greene
Director – Marketing & Communications

Linux Foundation

Dan Whiting
Director of Media Relations and Content

The post The American Association of Insurance Services & The Linux Foundation Welcome Jefferson Braswell as openIDL Project Executive Director appeared first on Linux Foundation.

Public-private partnerships in health: The journey ahead for open source

Wed, 08/03/2022 - 00:53

This original article appeared on the LF Public Health project’s blog.

The past three years have redefined the practice and management of public health on a global scale. What will we need in order to support innovation over the next three years?

In May 2022, ASTHO (Association of State and Territorial Health Officials) held a forward-looking panel at their TechXPO on public health innovation, with a specific focus on public-private partnerships. Jim St. Clair, the Executive Director of Linux Foundation Public Health, spoke alongside representatives from MITRE, Amazon Web Services, and the Washington State Department of Health.

Three concepts appeared and reappeared in the panel’s discussion: reimagining partnerships; sustainability and governance; and design for the future of public health. In this blog post, we dive into each of these critical concepts and what they mean for open-source communities.

Reimagining partnerships

The TechXPO panel opened with a discussion on partnerships for data modernization in public health, a trending topic at the TechXPO conference. Dr. Anderson (MITRE) noted that today’s public health projects demand “not just a ‘public-private’ partnership, but a ‘public-private-community-based partnership’.” As vaccine rollouts, digital applications, and environmental health interventions continue to be deployed at scale, the need for community involvement in public health will only increase.

However, community partnerships should not be viewed as just another “box to check” in public health. Rather, partnerships with communities are a transformative way to gain feedback while improving usability and effectiveness in public-health interventions. As an example, Dr. Anderson referenced the successful VCI (Vaccination Credential Initiative) project, mentioning “When states began to partner to provide data… and offered the chance for individuals to provide feedback… the more eyeballs on the data, the more accurate the data was.”

Cardea, an LFPH project that focuses on digital identity, has also benefited from public-private-community-based partnerships. Over the past two years, Cardea has run three community hackathons to test interoperability among other tools that use Cardea’s codebase. Trevor Butterworth, VP of Cardea’s parent company, Indicio, explained his thoughts on community involvement in open source: “The more people use an open source solution, the better the solution becomes through stress testing and innovation; the better it becomes, the more it will scale because more people will want to use it.” Cardea’s public and private-sector partnerships also include Indicio, SITA, and the Aruba Health Department, demonstrating the potential for diverse stakeholders to unite around public-health goals.

Community groups are also particularly well-positioned to drive innovation in public health: they are often attuned to pressing issues that might be otherwise missed by institutional stakeholders. One standout example is the Institute for Exceptional Care (IEC), a LFPH member organization focused on serving individuals with intellectual and developmental disabilities, “founded by health care professionals, many driven by personal experience with a disabled loved one.” IEC recently presented a webinar on surfacing intellectual and developmental disabilities in healthcare data: both the webinar and Q&A showcased the on-the-ground knowledge of this deeply involved, solution-oriented community.

Sustainability and governance

Sustainability is at the heart of every viable open source project, and must begin with a complete, consensus-driven strategy. As James Daniel (AWS) mentioned in the TechXPO panel, it is crucial to determine “exactly what a public health department wants to accomplish, [and] what their goals are” before a solution is put together. Defining these needs and goals is also essential for long-term sustainability and governance, as mentioned by Dr. Umair Shah (WADOH): “You don’t want a scenario where you start something and it stutters, gets interrupted and goes away. You could even make the argument that it’s better to not have started it in the first place.”

Questions of sustainability and project direction can often be answered by bringing private and public interests to the same table before the project starts. Together, these interests can determine how a potential open-source solution could be developed and used. As Jim St. Clair mentioned in the panel: “Ascertaining where there are shared interests and shared values is something that the private sector can help broker.” Even if a solution is ultimately not adopted, or a partnership never forms, a frank discussion of concerns and ideas among private- and public-sector stakeholders can help clarify the long-term capabilities and interests of all stakeholders involved.

Moreover, a transparent discussion of public health priorities, questions, and ideas among state governments, private enterprises, and nonprofits can help drive forward innovation and improvements even when there is no specific project at hand. To this end, LFPH hosts a public Slack channel as well as weekly Technical Advisory Council (TAC) meetings in which we host new project ideas and presentations. TAC discussions have included concepts for event-driven architecture for healthcare data, a public health data sharing mesh, and “digital twins” for informatics and research.

Design for the future of public health

Better partnerships, sustainability, and governance provide exciting prospects for what can be accomplished in open-source public health projects in the coming years. As Jim St. Clair (LFPH) mentioned in the TechXPO panel: “How do we then leverage these partnerships to ask ‘What else is there about disease investigative technology that we could consider? What other diseases, what other challenges have public health authorities always had?’” These challenges will not be tackled through closed source solutions—rather, the success of interoperable, open-source credentialing and exposure notifications systems during the pandemic has shown that open-source has the upper hand when creating scalable, successful, and international solutions.

Jim St. Clair is not only optimistic about tackling new challenges, but also about taking on established challenges that remain pressing: “Now that we’ve had a crisis that enabled these capabilities around contact tracing and notifications… [they] could be leveraged to expand into and improve upon all of these other traditional areas that are still burning concerns in public health.” For example, take one long-running challenge in United States healthcare: “Where do we begin… to help drive down the cost and improve performance and efficiency with Medicaid delivery? … What new strategies could we apply in population health that begin to address cost-effective care-delivery patient-centric models?”

Large-scale healthcare and public-health challenges such as mental health, communicable diseases, diabetes—and even reforming Medicaid—will only be accomplished by consistently bringing all stakeholders to the table, determining how to sustainably support projects, and providing transparent value to patients, populations and public sector agencies. LFPH has pursued a shared vision around leveraging open source to improve our communities, carrying forward the same resolve as the diverse groups that originally came together to create COVID-19 solutions. The open-source journey in public health is only beginning.

The post Public-private partnerships in health: The journey ahead for open source appeared first on Linux Foundation.

People of Open Source: Neville Spiteri, Wevr

Fri, 07/29/2022 - 23:41

This post originally appeared on the Academy Software Foundation’s (ASWF) blog. The ASWF works to increase the quality and quantity of contributions to the content creation industry’s open source software base. 

Tell us a bit about yourself – how did you get your start in visual effects and/or animation? What was your major in college?

I started experimenting with the BASIC programming language when I was 12 years old on a ZX81 Sinclair home computer, playing a game called “Lunar Lander” which ran on 1K of RAM, and took about 5 minutes to load from cassette tape.

I have a Bachelor’s degree in Cognitive Science and Computer Science.

My first job out of college was a Graphics Engineer at Wavefront Technologies, working on the precursor to Maya 1.0 3D animation system, still used today. Then I took a Digital Artist role at Digital Domain.

What is your current role?

Co-Founder / CEO at Wevr. I’m currently focused on Wevr Virtual Studio – a cloud platform we’re developing for interactive creators and teams to more easily build their projects on game engines.

What was the first film or show you ever worked on? What was your role?

First film credit: True Lies, Digital Artist.

What has been your favorite film or show to work on and why?

TheBlu 1.0 digital ocean platform. Why? We recently celebrated TheBlu 10 year anniversary. TheBlu franchise is still alive today. At the core of TheBlu was/is a creator platform enabling 3D interactive artists/developers around the world to co-create the 3D species and habitats in TheBlu. The app itself was a mostly decentralized peer-to-peer simulation that ran on distributed computers with fish swimming across the Internet. The core tenets of TheBlu 1.0 are still core to me and Wevr today, as we participate more and more in the evolving Metaverse.

How did you first learn about open source software?

Linux and Python were my best friends in 2000.

What do you like about open source software? What do you dislike?

Likes: Transparent, voluntary collaboration.

Dislikes: Nothing.

What is your vision for the Open Source community and the Academy Software Foundation?

Drive international awareness of the Foundation and OSS projects.

Where do you hope to see the Foundation in 5 years?

A global leader in best practices for real-time engine-based production through international training and education.

What do you like to do in your free time?

Read books, listen to podcasts, watch documentaries, meditation, swimming, and efoiling!

Follow Neville on Twitter and connect on LinkedIn.  

The post People of Open Source: Neville Spiteri, Wevr appeared first on Linux Foundation.