The Linux Foundation

Decentralized innovation, built on trust.

Hendrick and Jarvis Talk Software Security

Tue, 07/12/2022 - 07:04

While open source software is ubiquitous and generally regarded as secure, software development practices vary widely across projects: application development practices differ, protocols for responding to defects are inconsistent, and there are no standardized selection criteria for determining which software components are more likely to be secure. Consequently, software supply chains are vulnerable to attack, with implications and challenges for open source project communities. 

To help improve the state of software supply chain security, the Linux Foundation, the Open Source Security Foundation (OpenSSF), Snyk, the Eclipse Foundation, CNCF, and CI/CD Foundation conducted research and released the findings in the report, Addressing Cybersecurity Challenges in Open Source Software, during the 2022 Open Source Summit North America. 

At the Summit, Stephen Hendrick, LF’s Vice President of Research, and Matt Jarvis, Director of Developer Relations at Snyk, sat down with Alan Shimel of TechStrong TV to discuss the findings and next steps. Here are some key takeaways:

Alan: “ I think we’re always disappointed when we do the surveys that we find out, you know, beyond the lip service that gets paid to security, what actually is going on under the covers, and we’re always wishing for and hoping for more. That being said, I don’t want to be pessimistic. I am of the glass half full opinion that we are doing better and more security now than we probably ever have done.”

Stephen: “On the issue of whether organizations have an open source security policy, what we found was 49% said they had one, that’s good. 34% did not. And 17% said they don’t know.”

Matt: “In larger enterprises… you’ve got that kind of ingrained culture over a long time in terms of security and about how you consume software. . . the hardest problem in security isn’t really about technology at all. It’s always about people and culture. . . We’ve got two kinds of things happening in almost a perfect storm. At the same time, we’ve got this massive rise in supply chain attacks on open source, because, you know, it’s a victim of its own success. And attackers have realized it’s a lot easier to get into the supply chain than it is to find zero days in end user applications. So you’ve got that going on, where all of a sudden, folks are going, well, everything we do is based on open source, like, what do I do about security? And then, as Steve pointed out, you’ve got this, this ongoing, massive transformation of how we develop software, you know, this superfast high velocity.”

Stephen: “We asked. . . how do you intend to improve on the situation?. . . Top of the list was organizations are looking for more intelligent tools. . .  That was at 59%. . . Right behind that at 52% was a strong desire to understand and essentially codify best practices for how to do secure software development”

Matt: “Culture change is such a big part of how you make that transition from your kind of old school, security as gatekeeper kind of function, to this thing, where we put it to the developers, because the developers are the ones who, you know, you fix it at the developer eyeball before it’s got anywhere near production. That is the cheapest.”

Stephen: “You know, I did a report last year on SBOMs. And I gotta tell you that factors right into this. . . we did some stats in this survey on dependencies, you know, both direct and transitive, and found, really, sort of low levels of strong, strong security around organizations understanding the security posture of all these different dependencies and dependencies of dependencies. Really low numbers there. SBOMs would go so far in helping sort all that out.

“They’re going to give you knowledge about the metadata, it’s gonna give you usability, so you know that you’re licensed to use the stuff, and it’s going to know if it was good, if you trust that not only what you’re looking at for metadata is not falsified, but also understanding quite clearly, you know, what’s been fixed, what hasn’t been fixed from a vulnerability standpoint.”

Matt: “I think when people think about policies, they think, Oh, this needs to be like a 100-page document of some kind, you know, then it becomes overwhelming, but really a policy can be a one-liner.”

Watch the full interview and read the transcript below.

Alan Shimel 0:00
This is Alan Shimel with TechStrong TV. We’re back here live in Austin, streaming out at you from the Open Source Summit. We’re having a great time. This is our third day of coverage here, though technically, it’s only day two of the event. It’s a long story, but we’ll talk about it later. Let me introduce you to our next two guests. This is a conversation I was really looking forward to. To my left here is a gentleman who’s been a fixture on TV a few times with us and talked in person, great person. He’s the VP of Research for the Linux Foundation, Stephen Hendrick. Stephen, welcome.

Stephen Hendrick 0:42
Thanks, Alan.

Alan Shimel 0:43
And joining Stephen and I from our friends at Snyk, Matt Jarvis. And Matt, if I’m not mistaken, you’re Director of Developer Relations. Welcome.

Matt Jarvis 0:55
Thank you.

Alan Shimel 0:55
So Stephen and Matt presented, was it yesterday? Yeah, yesterday, on a new survey and report that you guys recently announced and revealed. Why don’t you, if you don’t mind, share with everybody.

Stephen Hendrick 1:10
Sure, I’d love to. OpenSSF is a very large project inside of the Linux Foundation. Brian Behlendorf, yeah. And so at his request, we went out and did a survey into sort of what’s happening in the open source space as far as secure software development. So we put together a survey in March, we fielded it in April, we wrote it up in May, and had it produced in June. And so it’s being released here at the event. I think that happened yesterday morning. We did it in partnership with Snyk. So that’s why we’ve been working together with the messaging on all this.

Stephen Hendrick 1:55
And it was not a surprise from the standpoint of what the results were. But I was a little disappointed in kind of where we are at this point, from the standpoint of the uptake of attention to security when it comes to open source. So anyway, we’ve got information that talks a little bit about, you know, where we are, to help, sort of, understand the context of the problem. And then we have information about what people are doing about it. And that’s more exciting in many respects, because good things are happening.

Alan Shimel 2:33
I agree. So first of all, look, I think we’re always disappointed when we do the surveys that we find out, you know, beyond the lip service that gets paid to security, what actually is going on under the covers, and we’re always wishing for and hoping for more. That being said, I don’t want to be pessimistic. I am of the glass half full opinion that we are doing better and more security now than we probably ever have done. Yeah. Yeah, I agree. That being said, before we dive into it, I just wanted to really quickly ask: so OpenSSF is the website? Yeah. And I’m going to assume that the report is there for anyone who wants to download it. That’s right. Let’s say that up front for people at home following along, whether you’re watching this live or later.

Stephen Hendrick 3:30
It’s on the Snyk site. It’s also on the Linux Foundation site and it’s on OpenSSF. So yeah, it’s everywhere.

Alan Shimel 3:37
I think we might have covered it via Snyk over on Security Boulevard.

Matt Jarvis 3:41
I think I did some press interviews before flying out here. So, yeah, we may have.

Alan Shimel 3:46
It may very well be on our Security Boulevard site. But nevertheless, it’s out there for people. Yeah. Let’s dive in now, though. What were some of the findings, Stephen?

Stephen Hendrick 3:55
Sure. Well, let’s see, we’ll start with this whole issue of do organizations have an open source security policy? And what we found was 49% said they had one, that’s good. That’s good. 34% did not. And 17% said they don’t know. Everybody uses open source – 98% of organizations use it. So they don’t even know if they have one or not. So if you put aside the don’t knows at this point, you’ve got about a 60/40 split between those that have a policy and those that don’t.

Stephen Hendrick 4:37
I mean, if you look a little more deeply into that, what you find is that small companies are more likely to not have a policy, and that’s not surprising. They are resource constrained, so it’s harder for them to have CISOs and OSPOs and policies, be it for just software development or open source software development, so I can understand the challenges there. But even if you look at company size, we still ended up with about 30% of large and very large organizations that don’t have a policy for open source software development.

Alan Shimel 5:16
So a couple of thoughts. First of all, I empathize with small SMB businesses. We are an SMB business, but in today’s day and age, and maybe it’s when you’re a hammer, everything looks like a nail, but in today’s day and age, how do you not have security policies?

Matt Jarvis 5:40
Yeah, I mean, I think that there’s a couple of different things at play there. I mean, you know, addressing open source security, you know, is more complex than it seems, because it’s not just about the code itself. You’ve kind of got to understand how open source is created, how projects are governed, because governance can have a big play into it – you know, whether we’re looking at some of those recent things around the sort of protestware movement, where we’ve seen maintainers kind of go rogue, you know, and this comes down to single-maintainer governance projects. And you need to take things like governance into account if you’re going to base your business on something.

Alan Shimel 6:24
Right, but you just said, and that’s a completely loaded question. I would bet, if I was a betting man, right, that at the large enterprise level, you’re 100% correct. At the SMB level, if you ask most of these people a threshold question of where is your open source software – it’s 10 o’clock, where’s your open source software? – a lot of them don’t know, because they’re SaaS-ops companies, right? They don’t have a server closet or their own cloud installation – they run on SaaS. And the beautiful part about SaaS, one of the nice things about it, is you don’t know what’s behind the curtain. You just know you log in on the website, and you’ve got all your information there that you need. Are they using an open source database? What are they using behind the curtain? A lot of smaller companies don’t know. And as part of their due diligence, they don’t dig that deep. So again, I can empathize with the smaller ones. The larger enterprises, though, that’s a problem, I think.

Matt Jarvis 7:39
You know, in a lot of those larger enterprises, you’ve got that kind of ingrained culture over a long time in terms of security and about how you consume software. And you know, the hardest problem in security isn’t really about technology at all, right? It’s always about people and culture. And I think, you know, probably in a lot of larger organizations, you’ve got kind of, you know, that sort of friction of, well, we’ve always done it like that.

Stephen Hendrick 8:07
Well, you also have a lot of change going on from the standpoint of how software is being developed. And I think that’s part of the problem as well, which is that, you know, change is always hard for people. And especially given the rapid evolution of tools and standards, in essence around how we should do security for software. Everything’s changing so quickly, I think it’s probably hard for people to keep up.

Matt Jarvis 8:33
Because we’ve got these two kinds of things happening in almost a perfect storm. At the same time, we’ve got this massive rise in supply chain attacks on open source, because, you know, it’s a victim of its own success. And attackers have realized it’s a lot easier to get into the supply chain than it is to find zero days in end user applications. So you’ve got that going on, where all of a sudden, folks are going, well, everything we do is based on open source, like, what do I do about security? And then, as Steve pointed out, you’ve got this ongoing, massive transformation of how we develop software, you know, this superfast high velocity.

Alan Shimel 9:10
I blame DevOps?

Matt Jarvis 9:15
Unless you do, and unless you can transform, you know, someone’s going to eat your lunch, right? Because there’s some hungry competitor behind you who’s disruptive and who does have a superfast software delivery pipeline. They can deliver new features, they know how to analyze the data. And so for a lot of big organizations, you’ve got these two big problems happening right at the same time, because that change in software development requires a completely different approach to security. You know, this is the thing that Snyk talks about all the time with developers.

Alan Shimel 9:45
I mean, you look at, let’s say, the Phoenix Project by Gene Kim, right. And that’s based on a book called The Goal. Yeah, right. So The Goal is about manufacturing, but really the principle behind The Goal, and I think Gene tried to capture that in the Phoenix Project, is that, look, as soon as we kind of erase one bottleneck, we see the next bottleneck right behind it. Don’t think that once you get rid of that bottleneck it’s smooth sailing, it’s not. We have massively, revolutionarily sped up the pace of software development. We did it in large part by creating this software factory pipeline, CI/CD, DevOps kind of things. The enabler of that was having this massive library of open source that we can assemble into very high quality software.

Alan Shimel 10:43
Man, we blew through that roadblock at 150 miles an hour. The wall we hit right after is, wait a second, now that’s become a huge security problem. Right? So for companies that are developing their own code, this is a major thing. Knowing that, though, and still telling me that 30% of companies don’t have a policy around it? Scary. Yeah, it is.

Stephen Hendrick 11:09
Well, we should talk about what people are doing to try to deal with this. Right.

Alan Shimel 11:14
Here’s the good news.

Stephen Hendrick 11:14
So we asked a question, which was, okay, so how do you intend to improve on the situation? What are you doing? And we had quite a long list of responses. Top of the list was, organizations were looking for more intelligent, security-focused tools. So we’re talking SCA, SAST, DAST, IaC, you know, all the usual suspects, and looking really to those tools to be able to help them improve their security posture. So that was top of the list. That was 59%.

Stephen Hendrick 11:52
And then right behind that at 52% was a strong desire to understand and essentially codify best practices for how to do secure software development. That was really encouraging, because we know all about best practices. Yep. We know exactly what they all are. In fact, David Wheeler, at the LF –

Alan Shimel 12:14
We had David A. Wheeler. We interviewed David yesterday and we have a follow-up as well.

Stephen Hendrick 12:32
He and I had lunch yesterday, and we were talking about this, because I said, you know, how many best practices do you have? So we counted them all up – he’s got like 150, 160. So that’s kind of daunting. And he said the last 25, to get to the highest level, can take in some cases years to master. So despite understanding what these best practices are, it’s still very challenging to wrap your head around what is necessary to be successful there.

Matt Jarvis 13:00
And partly because, you know, as we were just talking about, that culture change is such a big part of how you make that transition from, you know, your kind of old school, security as gatekeeper kind of function, to this thing where we put it to the developers. Because the developers are the ones who, you know – you fix it at the developer eyeball before it’s got anywhere near, you know, production. That is the cheapest.

Alan Shimel 13:28
They say 10 to 100x cheaper to do it there.

Matt Jarvis 13:30
I mean, the other interesting thing here that’s slightly tangential to this is how many developers there are in the world, right, and how many we anticipate there being. You know, I think the anticipation is something like 30 million developers in the world, and there’s only, like, a tiny proportion of security folk.

Alan Shimel 13:51
So I go by GitHub accounts. Right? There’s about 70 plus million GitHub accounts right now. So let’s assume it’s not one-to-one. But I think it’s safe to say there’s 40 to 45 million developers, probably growing at somewhere in the area of 10% a year.

Matt Jarvis 14:08
And security professionals aren’t growing at that rate.

Alan Shimel 14:13
So security professionals are growing, because we’re starting to see – look, when I came up, you didn’t have a cybersecurity major in college. We’re seeing schools churn out cybersecurity majors. Are they security professionals? I’ll leave it to you. But there are people coming out who want to work in security, just not anywhere near enough.

Alan Shimel 14:39
Here’s an interesting thing, though, and I think it’s what’s turning up the heat on all of this: this is getting major focus from the White House, from the federal government. The whole world is saying, hey, this is a problem. This is a big problem.

Stephen Hendrick 14:58
Well, you know, you’ve got to do something. You know, I did a survey report last year on SBOMs. Yep. And I gotta tell you, that factors right into this. Of course, because, you know, we did some stats in this survey on dependencies, you know, both direct and transitive, and found really sort of low levels of strong security around, you know, organizations understanding the security posture of all these different dependencies and dependencies of dependencies. Really low numbers there.

Stephen Hendrick 15:34
SBOMs would go so far in helping sort all that out, because, you know, SBOMs, they’re going to give you knowledge about the metadata, it’s gonna give you usability, so you know that you’re licensed to use the stuff, and it’s going to know if it was good, if you trust that not only what you’re looking at for metadata is not falsified, but also understanding quite clearly, you know, what’s been fixed, what hasn’t been fixed from a vulnerability standpoint.

Alan Shimel 15:59
So I’ll tell you, over the last two days we’ve done a lot of interviews, but no shortage of people talking about SBOMs and SBOM solutions. I think we’re going to see, just like everything else in technology, sort of a Cambrian explosion of SBOM solutions out there, and then the market will figure out which ones make sense, which ones don’t. My fear is that we think SBOMs are a magic bullet for supply chain security, because we have a tendency of doing that in security.

Matt Jarvis 16:33
Ultimately I think the real challenge here is going to be the chain of trust part of that, right? Because what’s an SBOM at the end of the day? It’s a text file with some stuff.

Alan Shimel 16:43
Oh, no, but, you know, they’re building some elaborate text files in there.

Stephen Hendrick 16:49
It’s a lot of good metadata. Yeah. But one more point I want to touch on, though, is that the number three issue from the standpoint of doing improvements to your software security posture was more automation. So IaC tools ended up ranking very highly from the standpoint of helping you address that particular need – and just for our audience, that’s IaC, infrastructure as code, right. Okay. So that one actually surprised me, because this whole idea of, you know, developers doing manual activities – that is a great way to invite in problems. And so more automation, ultimately, is better.

Matt Jarvis 17:33
We did some work last year as part of our cloud native application security report. And what was really interesting there was, you know, we kind of used high levels of development automation, i.e. automated CI/CD pipelines and all that stuff, as a proxy for how far along your cloud native journey you are. I think it’s a pretty reasonable proxy to take. And in organizations with those high levels of deployment automation, for a start, we see much higher levels of adoption of security tooling, because automation gives you lots of places where you can hook in other automation. But, most importantly, we see a massive reduction in the time to fix vulnerabilities, because there’s a direct correlation there.

Alan Shimel 18:25
I’ve been in security a long time. I founded a vulnerability solutions company back in 2005. Back then, there was a company called Citadel – Hercules was the product, right? They were pushing automated remediation. There are several companies today that have automated remediation. For whatever reason, up until now, organizations have been hesitant to adopt automated remediation, because they’re afraid it’s going to break something else if it’s in a totally automated situation. Now, doing this further left in the development pipeline, if it’s broken, supposedly, that should come up in testing.

Matt Jarvis 19:16
I mean, this is again what we see when companies adopt Snyk. The automated remediation part, in terms of automated fix PRs, is probably not where people start, but very quickly they get there.

Alan Shimel 19:33
Yeah, look, this is a no brainer. Yeah, absolutely, because it goes back to what I said before – blame DevOps, right. If we are going to automate the CI/CD pipeline, we’re going to automate building software, the answer cannot be that we’re going to manually do security. It just doesn’t work. It’s a disconnect.

Matt Jarvis 19:54
Yeah, I mean, and it’s an anti-pattern in terms of velocity, right. I mean, velocity is the key differentiator for whether businesses in the cloud era are going to survive. Absolutely, because if you don’t have velocity, you are probably done.

Alan Shimel 20:08
No, but a lesson we learned in security, or we should have learned over the last 25 years, is if we are going to drag our heels and dig our heels in and say no, no, no, no – you know what? The train leaves the station without you. Yeah. It’s so much easier to get on board and figure out, yes we can, and here’s how. Right? Lead, follow, or get out of the way. Security cannot be the drag on this, because velocity is too important.

Matt Jarvis 20:36
And where we see folks who’ve successfully made this transition to developer-first, you see this sort of change in security teams from being gatekeepers to being enablers.

Alan Shimel 20:41
DevSecOps right there. Yeah, you just hit the heart of it. That’s it. Stephen, anything else?

Stephen Hendrick 21:02
So what’s the answer to this issue of not having a security policy? I mean, do you need to start with a CISO? Do you start with an OSPO? Do you need at least part-time roles and people in organizations, you know, in those functions if you are small? I’m not sure what the answer is, but I mean, we need one.

Matt Jarvis 21:27
I think when people think about policies, they think, Oh, this needs to be like a 100-page document of some kind, you know, then it becomes overwhelming. But really, a policy can be a one-liner. I mean, we have this conversation a lot when people start to adopt security scanning, right? They’ve done no security scanning before, and they scan this software, and they go, Oh, my God, I’ve got like 500 vulnerabilities. What do I do? But you’ve got to just pick a starting point. Right. And I mean, usually, you know, a sensible place would be no critical vulnerabilities that have got a fix in production. Well, there’s a policy right there, right. And it’s three lines. And it’s better than having nothing.

Alan Shimel 22:06
100% right. I run into this firsthand. People, they hear we need a security policy, and they think, I need the employee handbook, right, the one that’s, you know, this thick. You know, it could be one page or five bullet points. Anything that’s critical is worth stopping production for. Anything not critical does not stop production but has to get fixed within 30 days. That’s a policy.

Matt Jarvis 22:32
I mean, there’s plenty of great templated stuff out there around usage of open source, by the way.

Alan Shimel 22:39
I’d love to see the OpenSSF have a library of that kind of thing. Yeah.

Stephen Hendrick 22:47
And actually, the good news is once you have policy, then automation can follow up pretty quickly. That’s the right path. You’re right.

Alan Shimel 22:54
Guys, we’ve got our next guest here in the wings. We could talk about this all day, I’m sure. I’d love to, but it wouldn’t be fair to them. Again, you can get this survey over on the Snyk site, which is snyk.io, or on the OpenSSF site.

Alan Shimel 23:17
Stephen, good work again. I love your surveys. Very good. We are going to take a quick break. We’re going to make up our next guest and we’ll be right back here, live in Austin.

The post Hendrick and Jarvis Talk Software Security appeared first on Linux Foundation.

LEGO and Angel Island

Tue, 07/12/2022 - 03:06

Like many of the folks in open source, the LF’s Kenny Paul is a huge fan of building things out of LEGO. For Kenny, however, it goes a bit beyond just opening a box and following the instruction book. In fact, he rarely ever builds anything from a kit, instead building highly complex and detailed models entirely from his imagination. Yes, for you LEGO Movie fans, Kenny is a Master Builder.

 When I get a new kit I usually look at it in terms of pure raw material rather than whatever is shown on the box

 “When I get a new kit I usually look at it in terms of pure raw material rather than whatever is shown on the box”, he says with a smile radiating the possibilities. That approach seems to have worked quite well for him for a long time now. Over the holiday season he builds a 120 square foot display in his garage that often draws 300+ people a day, he worked on the Mythbusters’ Giant LEGO Ball episode (#117), he has scale models of farm equipment in the permanent collection of a local museum, and in January of 2020 he finished second in a competition for one of LEGOLand’s coveted LEGO Master Model Builder positions, of which there are only 13 in all of North America. 



Photos: MythBusters Giant LEGO Ball mid-build, LEGOLand’s LEGO Master Model Builder Competition, and Kenny’s holiday garage display

Angel Island

However, he recently finished a project that he says has been the most difficult and meaningful project he has ever been a part of. The subject matter revolves around a troubling chapter in American history and a small bit of rock and scrub brush in the middle of San Francisco Bay called Angel Island.

Ask your average 4th grader if they have ever heard of Ellis Island and they can probably tell you at least something about the well-known narrative surrounding immigration and the United States. Ask them about Angel Island, however, and you’ll probably get a confused look and a shake of the head.

Although Angel Island was often called “The Ellis Island of the West” in the early 1900s, it was anything but welcoming. In reality it was established specifically to exclude immigrants of Asian descent, Chinese immigrants in particular. It wasn’t a place of “Give me your tired, your poor, your huddled masses…” It was more like, “Nope, talk to the hand.” 

Japanese Internments

When Japan attacked the US Naval base at Pearl Harbor on December 7th, 1941, Angel Island took on an entirely new role during the early stages of the war, but one that was unfortunately still in line with its original anti-Asian roots. Many people are still unaware that following Pearl Harbor, the US Government, on the orders of President Franklin D. Roosevelt, rounded up thousands of US citizens and put them into internment camps for the duration of the war simply because of their Japanese ancestry. Yes, that’s right: US citizens were officially reclassified as enemies of the state purely based upon their heritage. For the first wave of those who were incarcerated, Angel Island was used as the processing center before they were sent off to one of the infamous internment camps across the US, like Manzanar, Tule Lake, or Heart Mountain.

How to educate children about the history?

Remember how we mentioned 4th graders earlier?  Well, learning about California history is a pillar of the 4th grade curriculum here in the Golden State and that is what led to this particular project. The problem? Hundreds of 4th graders tour Angel Island every year – How do you engage them on very painful and hard to understand subject matter like internment?  Well, the folks from the California State Park system and the Angel Island Immigration Station Foundation, which runs the museum there, thought that a LEGO model of the site as it existed during WWII might help bridge that gap.

AIISF reached out to the local LEGO club in the Bay Area in August of 2021 to see if anyone might be interested in volunteering for a project. A number of folks joined the introductory Zoom call, but after hearing the scope of what was being requested, it was clear that this was a long duration project that would take months to complete. After that first meeting, only Kenny and two other members of the club, Johannes van Galen and Nick McConnell, agreed to proceed with the build.

The LEGO Build

The model was unveiled as the center anchor point for the exhibit, “Taken From Their Families; …” in May, which is Asian & Pacific Islander Heritage Month. Measuring 4 feet by 6 feet, it contains an estimated 30,000 LEGO pieces. The trio invested over 400 hours between research, design, procuring the parts, and of course the build itself.

Getting the model to the museum was no easy feat either. It had to be built in sections, moved by van about 60 miles from where it was being constructed, taken over to the island on a state park supply ship, then reassembled and “landscaped” once on site. 


The Research

“The research aspect was really fascinating to me”, said Kenny, who was responsible for building all of the buildings. He spent countless hours poring over archival photos, diagrams, and topographic maps provided by the state park, and even went as far as looking at records from the Library of Congress in some cases. The goal was to be as accurate as possible while still working within the limitations of scale, plus LEGO part and color availability. In one case that research took an unexpected turn that, as Kenny puts it, “stood the hairs up on the back of my neck.”  

The largest building in the camp during WWII was still under construction when the war broke out. It replaced a previous building which burned to the ground in 1940. After Pearl Harbor, the new building was rapidly completed and pressed into service. Following the War, it was bulldozed by the Army. The problem was that no one working on the project could figure out what that building actually looked like. Only two grainy photos of the WWII era building could be found and neither photo made sense when compared to the building foundations that can still be seen on the island today. Then Kenny realized a well-known watercolor drawing in the museum’s collection solved the puzzle. The most remarkable aspect of the drawing is that the entire camp is depicted the way it looks from offshore rather than as viewed from the perspective of the detention barracks where prisoners were held. The realization was stunning – it was painted from memory by the artist. It was the way he saw the island the day he steamed into San Francisco Bay from Hawaii as a political prisoner of his own country. Smiling as tears well up in his eyes, Kenny says, “Every time I think about the fact I needed a painting made by one of the very first Japanese Americans arrested during that time to complete a scale model of that same camp 80 years later, it always chokes me up.”  

Every time I think about the fact I needed a painting made by one of the very first Japanese Americans arrested during that time to complete a scale model of that same camp 80 years later, it always chokes me up.

The model is now on permanent display in the same mess hall that was used by the prisoners. For more information on the exhibit, please see https://aiisf.org/taken.

Kenny Paul works as a Senior Technical Community Architect at the Linux Foundation. He currently works on the Open Network Automation Project (ONAP) and LF Networking. His is just one of the many unique backgrounds that make up the people behind open source. To hear more stories, listen to our Untold Stories of Open Source podcast.

And on a related aside, this is a gripping and heart-warming story about bonds made at the Heart Mountain Japanese internment camp in Wyoming.

Below are photos of some of Kenny’s favorite builds.


Photos: Some of Kenny’s favorite builds: B-17; Firehouse #7 in Washington, DC, home to the first all-black engine company in the days of departmental segregation between 1919 and 1962; and, LEGO tractors built for a museum display.


The post LEGO and Angel Island appeared first on Linux Foundation.

Google Summer of Code + Zephyr RTOS

Fri, 07/08/2022 - 05:47

The Google Summer of Code (GSoC) is an international annual program in which Google awards stipends to contributors who successfully complete a free and open source software coding project during the summer. Launched in 2005, GSoC takes place from May to August. Project ideas are submitted by host organizations involved in open source software development, though students can also propose their own project ideas.

This year, the program was opened to anyone 18 years or older – not just students and recent graduates. Participants get paid to write software, with the amount of their stipend depending on the purchasing power parity of the country where they are located.

This is also the first time the Zephyr Project is participating in GSoC under The Linux Foundation umbrella. Please join us in welcoming these contributors and their projects:

Project #1: Arduino module based on Zephyr

1 contributor full-size (350 hours).

Arduino is renowned as a popular framework that provides a simplified interface for programming embedded devices. Recently, Arduino adopted mbed OS as the base RTOS for some of their newer devices. With that work, they separated out Arduino Core as an independent abstraction layer from Arduino Core for mbed. This opens up the possibility of leveraging Arduino Core on other OSes. The project idea is to create a Zephyr module that leverages the Arduino Core so that a developer can use Zephyr as the underlying OS when they use the Arduino framework on Arduino-compatible devices. The benefits to the user include:

  • Access to Arduino APIs as well as advanced Zephyr capabilities
  • Broader set of devices than the standard Arduino ecosystem thanks to Zephyr’s device support
  • Ability to re-use Arduino tools like the Arduino IDE and its wealth of libraries

Arduino Core is licensed under the GNU Lesser General Public License and Zephyr is licensed under Apache 2. That means this project will most likely need to be developed out of tree and in a separate repo to keep code and license separation. See #22247 for a historic discussion & soburi/arduino-on-zephyr for an earlier attempt prior to the Arduino Core architecture.

The contributor’s tasks are thus (a sketch of the end goal follows this list):

  • Implement a bare-bones Module based on Arduino Core that can compile for any target (no functionality, possibly in QEMU)
  • Implement a common peripheral from the Arduino API based on Zephyr such as Serial
  • Target one physical board, such as the Arduino Zero
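
To make the end goal concrete, here is a minimal sketch of what the developer experience could look like once such a module exists: an ordinary Arduino sketch, written against the standard Arduino API, that would build with Zephyr as the OS underneath. This is an illustration of the target, not existing functionality; the pin constant and baud rate are arbitrary.

    // A plain Arduino sketch, unchanged from what a developer writes today.
    // The project goal is that this same code could compile against a
    // Zephyr-based Arduino Core; nothing below is Zephyr-specific.

    #include <Arduino.h>

    void setup() {
      pinMode(LED_BUILTIN, OUTPUT);  // on-board LED, defined by the board variant
      Serial.begin(115200);          // Serial is the peripheral targeted first
    }

    void loop() {
      digitalWrite(LED_BUILTIN, HIGH);
      delay(500);                    // timing would be serviced by Zephyr's kernel
      digitalWrite(LED_BUILTIN, LOW);
      delay(500);
      Serial.println("Arduino API on top, Zephyr underneath");
    }

The value of the abstraction is precisely that the sketch does not change; only the operating system beneath it does.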

Mentors:

Code License: LGPL

Contributor Details:

About the contributor: Dhruva is an undergraduate student majoring in electrical engineering. He has a broad range of interests, from embedded software development to hardware design, and has experience working on SBCs, microcontrollers, and embedded Linux platforms.

Project #2: Apache Thrift Module for Zephyr

1 contributor full-size (350 hours).

Apache Thrift is an IDL specification, RPC framework, and code generator that abstracts away transport and protocol details to let developers focus on application logic. It works across all major operating systems, supports over 27 programming languages, 7 protocols, and 6 low-level transports. Originally developed at Facebook in 2007, it was subsequently shared with the Apache Software Foundation. 

Supporting Thrift in the Zephyr RTOS would benefit the community greatly. It would lead to new software and hardware technologies, new products, and additional means for cloud integration. Thrift can be used over virtually any transport, and for that reason it is a natural choice for the many different physical communication layers supported by Zephyr. The project idea is to get the proof-of-concept Thrift for Zephyr Module into shape for upstreaming. To achieve that, the contributor must do the following (an illustrative Thrift client sketch follows the list):

  • Perform additional integration for Thrift features (protocols, transports)
  • Author additional sample applications using supported boards or QEMU
  • Author additional tests and generate coverage reports using the Zephyr Test Framework
  • Ensure the module follows appropriate coding guidelines and satisfies module requirements
  • Contribute any necessary improvements back to the Apache Thrift Project.
  • Contribute any necessary improvements back to the Zephyr Project.
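
For readers who have not used Thrift, the sketch below shows the shape of a typical Thrift C++ client, the kind of code the sample applications above would exercise. It is a host-side illustration under stated assumptions: Echo.h and EchoClient are hypothetical artifacts that the Thrift compiler would generate from a hypothetical echo.thrift service definition, and the host and port are placeholders.

    // Minimal sketch of a Thrift C++ client. EchoClient is a hypothetical stub
    // generated by the Thrift compiler from a hypothetical echo.thrift IDL file.
    #include <memory>
    #include <string>

    #include <thrift/protocol/TBinaryProtocol.h>
    #include <thrift/transport/TSocket.h>
    #include <thrift/transport/TTransportUtils.h>

    #include "gen-cpp/Echo.h"  // hypothetical generated header

    using namespace apache::thrift;
    using namespace apache::thrift::protocol;
    using namespace apache::thrift::transport;

    int main() {
      // Thrift separates the transport (a buffered TCP socket here)...
      auto socket = std::make_shared<TSocket>("localhost", 9090);
      auto transport = std::make_shared<TBufferedTransport>(socket);
      // ...from the wire protocol (binary encoding here), so either can be swapped.
      auto protocol = std::make_shared<TBinaryProtocol>(transport);

      EchoClient client(protocol);  // hypothetical generated client stub
      transport->open();
      std::string reply;
      client.echo(reply, "hello zephyr");  // RPC call; signature comes from the IDL
      transport->close();
      return 0;
    }

Because the transport and the protocol are independent objects, a Zephyr port could swap the POSIX socket transport for one built on Zephyr’s networking stack without touching application logic, which is exactly why Thrift suits the many physical communication layers Zephyr supports.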

Mentors:

Code License: Apache 2.0.

Contributor Details:

Name: Young

About the contributor: Young is a student majoring in communication engineering, and he will pursue his Master’s degree in computer engineering. He has a broad range of interests, from front-end development to hardware design, and has experience working on the Web, IoT, and embedded platforms. A low-cost single-board computer with a 64-bit RISC-V processor that he designed in 2021 was covered by several tech media outlets.

The post Google Summer of Code + Zephyr RTOS appeared first on Linux Foundation.

ONE Summit North America, Hosted by LF Networking, Invites Industry Experts Across Access, Edge, Cloud and Core to Collaborate In-Person, November 15-16, 2022

Fri, 07/08/2022 - 00:00
  • LF Networking Announces ONE Summit North America 2022 Call for Proposals and Registration are Now Open! 
  • ONE Summit is the one industry event focused on best practices, technical challenges, and business opportunities facing network decision makers across Networking, Access, Edge, and Cloud
  • Reinvigorated for 2022, ONE Summit returns in-person November 15-16 in Seattle, Wash. with a more interactive and creative environment enabling attendees to transform, innovate and collaborate together

SAN FRANCISCO, July 7, 2022 – LF Networking, which facilitates collaboration and operational excellence across open source networking projects, announced Registration and the Call For Proposals (CFP) for ONE Summit North America 2022 are now open. Taking place in Seattle, Wash. November 15-16, ONE Summit is the one industry event focused on best practices, technical challenges, and business opportunities facing decision makers across 5G, Cloud, Telco, and Enterprise Networking, as well as Edge, Access, IoT, and Core. 

For anyone using networking and automation to transform business, whether it’s deploying a 5G network, building government infrastructure, or innovating at their industry’s network edge, the ONE Summit collaborative environment enables peer interaction and learning focused on open source technologies that are redefining the ecosystem. As the network is key to new opportunities across Telecommunications, Industry 4.0, Public and Government Infrastructure, the new paradigm will be open. Come join this interactive and collaborative event, the ONE place to learn, innovate, and create the networks our organizations require. 

“We are pleased to host a rejuvenated ONE Summit, which brings the ecosystem together in-person once again,” said Arpit Joshipura, general manager, Networking, Edge, and IoT, the Linux Foundation. “With a shifting industry that must embrace traditional networking now integrated across verticals such as Access, Edge, Core, and Cloud, we are eager to gather to learn, share, and iterate on the future of open collaboration.”

The event will feature an extensive program of 80+ talks covering the most important and timely topics across Networking, Access, Edge, and Cloud, with diverse options for both business and technical sessions. Presentation tracks include Industry 4.0; Security; The New Networking Stack; Operational Deployments (case studies, success & challenges); Emerging Technologies and Business Models; and more. 

The CFP is now open through July 29, 2022.

To register, visit https://events.linuxfoundation.org/one-summit-north-america/register/. Corporate attendees should register before August 20 for the best rates. 

Developer & Testing Forum

ONE Summit will be followed by a complimentary two-day LF Networking Developer and Testing Forum (DTF), a grassroots hands-on event organized by the LF Networking projects. Attendees are encouraged to extend the experience, roll up their sleeves, and join the incredible developer community to advance the open source networking and automation technologies of the future. Information on the Spring 2022 LFN Developer & Testing Forum, which took place June 13-16 in Porto, Portugal, is available here.

Sponsor

ONE Summit is made possible thanks to generous sponsors. For information on becoming an event sponsor, click here or email to speak to the team.


Press
Members of the press who would like to request a press pass to attend should contact pr@lfnetworking.org.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 2,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. Learn more at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. 

###

The post ONE Summit North America, Hosted by LF Networking, Invites Industry Experts Across Access, Edge, Cloud and Core to Collaborate In-Person, November 15-16, 2022 appeared first on Linux Foundation.

Morgan Stanley, Microsoft, and Regnosys Break New Ground in RegTech with FINOS

Thu, 07/07/2022 - 04:59

This post originally appeared on the FINOS blog. You can also listen to the latest FINOS podcast with Minesh Patel, Chief Technology Officer at REGnosys, discussing his upcoming talk at the FINOS Open Source in Finance Forum (OSFF) on July 13th in London about “Breaking new ground in RegTech through open source TechSprint innovation”.

In the first quarter of 2022, a multi-organisation, multi-location team of developers planned, scheduled and delivered an ambitious three day “RegTech” collaboration challenge.

The event, dubbed a “TechSprint”, looked to demonstrate how financial institutions could comply with trade reporting rules for the upcoming US CFTC requirements using entirely open-source components.

Why It’s Important

Every year, the financial industry spends billions trying to comply with often complex data reporting requirements. For every reporting regime and jurisdiction, firms must typically sift through hundreds of pages of legal text, which they must then manually interpret and code in their IT systems.

As a result, while many financial institutions share the same reporting obligations, they usually implement their logic in slightly different ways due to fragmented technology approaches, adding to risks and costs.

The field is ripe for a shake-up by “RegTech”, i.e. the application of technology to address regulatory challenges. In particular, the ability to build and store the reporting logic in an open-source and technology-agnostic way, and to run it based on open-source components too, could reap huge efficiency benefits for the industry.

Current Landscape

This RegTech space is one that FINOS has been actively investing in. In 2020, FINOS approved the contribution of the Regulation Innovation SIG, a Special Interest Group dedicated to the applications of open source to regulatory problems. Morphir, an open-source project contributed by Morgan Stanley, is positioned as a key component of that Reg SIG. Morphir makes it possible to represent, store, share, and process business logic in an implementation-agnostic way, including the types of rules and calculations often found in regulations.

The industry is also getting better organised to tackle pressing regulatory challenges more collaboratively. Under the auspices of the industry’s existing trade associations, the Digital Regulatory Reporting (DRR) programme is a mutualized, industry-wide initiative addressing the global trade reporting requirements. Those reporting regimes are being updated across the G20 and DRR starts with the US CFTC revised swap data reporting rules that go live this year. DRR involves industry participants working together to deliver an open-source, machine-executable expression of the reporting rules.

These two initiatives, Morphir and DRR, looked like a perfect match. A like-minded team of developers sitting across organisations decided to undertake the challenge of integrating them, thus demonstrating that reporting rules can be developed, executed and validated using entirely open-source components – all in under three days!

Approach

Technical

In DRR, the rule logic is expressed in a Domain-Specific Language called the Rosetta DSL and then translated into executable code through an automated “code generation” process. The reporting rules’ inputs are modelled according to the Common Domain Model (CDM), an initiative initially championed by the International Swaps and Derivatives Association (ISDA), now joined by other trade associations, and involving many industry participants including buy- and sell-side firms.

The Rosetta DSL and its associated code generators, currently being proposed for contribution to FINOS, are open-source projects developed by technology firm REGnosys, which provides the software platform for the DRR and CDM programme.

The main objective of the TechSprint was to develop a Rosetta-to-Morphir code generator. This would demonstrate that Morphir can be used as a target for storing and executing the body of rules in DRR and that it produces results that are consistent with Rosetta. In addition, the TechSprint looked to provide a formal verification mechanism for the DRR code using Bosque, another open-source project developed by Microsoft that is already integrated with Morphir.

Scope

The first trade reporting regime available in DRR is the CFTC Rewrite, which is rolling out in the US this year. The TechSprint focused on handling a couple of CFTC reportable fields to demonstrate the Rosetta-Morphir-Bosque integration.

Logistics

Building on our proven approach seen over the last two years with the Legend pilot and the Legend hosted instance, the event was run as a “task-force” where teams sitting across organisations’ boundaries collaborated and shared knowledge on their respective open-source projects, all under FINOS’s sponsorship.

In total, seven representatives from three teams at Morgan Stanley, Microsoft and REGnosys worked together for three days across three separate locations in the UK, Ghana and the US.

Given the time zone differences, the TechSprint was held virtually, starting with the UK/Ghana shift and closing with the NY shift. The teams were mostly self-organised, with regular checkpoints throughout the day.

Substantial Results at Record Speed

In just three days, a Rosetta-to-Morphir code generator was developed successfully. Whilst not complete, it has been shown to handle increasingly complex logic from Rosetta. REGnosys is integrating this deliverable back into Rosetta’s main open-source code-base.

A couple of in-scope reportable fields were successfully tested by running the Morphir-Scala engine on a sample trade population and displayed in a UI, matching their expected results in Rosetta. The Morphir UI showed how the reporting logic stored in Morphir could be represented graphically.

Finally, the Bosque validation layer was successfully applied to the code generated from Rosetta, opening the way to a formal verification method for the rules developed in DRR.

Take-Aways and Next Steps

One of the most interesting take-aways from this TechSprint event was its task-force format, which allowed the teams to perform at their level best. This format could serve as a template for future “open innovation” initiatives engaging the FINOS community.

The key ingredients of success were:

  • A specific and tangible deliverable
  • Collaboration, not competition, on that shared objective
  • Diversity of participants, all goal-oriented
  • Clear responsibilities of the different team members
  • Careful preparation and planning
  • A “safe space” to contribute in open-source

As a next step, the TechSprint team will be demonstrating the result of their work at the upcoming Open Source in Finance Forum in London (July 13th). Those results will be encapsulated into a video that will be made publicly available.

The Rosetta-to-Morphir code generator delivered during the TechSprint is also included in a formal open-source contribution to FINOS. This will create a first bridge between the on-going DRR industry programme and the wider FINOS community, making it possible to connect it to similar initiatives taking place under the Reg SIG.

Given interest and community engagement in that group, further open innovation events involving multiple firms could be run along a similar format.

The potential benefits of open collaboration in the regulatory space are massive. This TechSprint demonstrates how new ground can be broken when barriers tumble down.

Authors:

The post Morgan Stanley, Microsoft, and Regnosys Break New Ground in RegTech with FINOS appeared first on Linux Foundation.

The Impressive Scope of the Linux Foundation in the 21st Century Digital Economy

Tue, 07/05/2022 - 22:23

This post was originally published on June 30, 2022 on Irving Wladawsky-Berger’s blog

Last week, the Linux Foundation held its North America Open Source Summit in Austin. The week-long summit included a large number of breakout sessions as well as several keynotes. Open Source Summit Europe will take place in Dublin in September and Open Source Summit Japan in Yokohama in December.

I’ve been closely involved with open, collaborative innovation and open source communities since the 1990s. In particular, I was asked to lead a new Linux initiative that IBM launched in January of 2000 to embrace Linux across all the company’s products and services.

At the time, Linux had already been embraced by the research, Internet, and supercomputing communities, but many in the commercial marketplace were perplexed by IBM’s decision. Over the next few years, we spent quite a bit of effort explaining to the business community why we were supporting Linux, which included a number of Linux commercials like this one with Muhammad Ali that ran in the 2006 Super Bowl. IBM also had to fight off a multi-billion dollar lawsuit for alleged intellectual property violations in its contributions to the development of Linux. Nevertheless, by the late 2000s, Linux had crossed the chasm to mainstream adoption, having been embraced by a large number of companies around the world.

In 2000, IBM, along with HP, Intel, and several other companies formed a consortium to support the continued development of Linux, and founded a new non-profit organization, the Open Source Development Labs (OSDL). In 2007, OSDL merged with the Free Standards Group (FSG) and became the Linux Foundation (LF). In 2011, the LF marked the 20th anniversary of Linux at its annual LinuxCon North America conference. I had the privilege of giving one of the keynotes at the conference in Vancouver, where I recounted my personal involvement with Linux and open source.

Over the next decade, the LF went through a major expansion. In 2017, its annual conferences were rebranded Open Source Summits to be more representative of LF’s more general open source mission beyond Linux. Then in April of 2021, the LF announced the formation of Linux Foundation Research, a new organization to better understand the opportunities to collaborate on the many open source activities that the LF was by then involved in. Hilary Carter joined the LF as VP of Research and leader of the new initiative.

A few months later, Carter created an Advisory Board to provide insights into emerging technology trends that could have a major impact on the growing number of LF open source projects, as well as to explore the role of open source to help address some of the world’s most pressing challenges. I was invited to become a member of the LF Research Advisory Board, an invitation I quickly accepted.

Having retired from IBM in 2007, I had become involved in a number of new areas, – such as cloud, blockchain, AI, and the emerging digital economy. As a result, I had not been much involved with the Linux Foundation in the 2010s, and continued to view LF as primarily overseeing the development of Linux. But, once I joined the Research Advisory Board and learned about the evolution of the LF over the previous decade, I was frankly surprised at the impressive scope of its activities. Let me summarize what I learned.

Once I joined the Research Advisory Board and learned about the evolution of the LF over the previous decade, I was frankly surprised at the impressive scope of its activities.

According to its website, the LF now has over 1,260 company members, including 14 Platinum and 19 Gold, and supports hundreds of open source projects. Some of the projects are focused on technology horizontals, others on industry verticals, and many are subprojects within a large open source project.

Technology horizontal areas include AI, ML, data & analytics; additive manufacturing; augmented & virtual reality; blockchain; cloud containers & virtualization; IoT & embedded; Linux kernel; networking & edge; open hardware; safety critical systems; security; storage; system administration; and Web & application development. Specific infrastructure projects include OpenSSF, the Open Source Software Security Foundation; LF AI & Data, whose mission is to build and support open source innovations in the AI & data domains; and the Hyperledger Foundation, which hosts a number of enterprise-grade blockchain subprojects, such as Hyperledger Cactus, to help securely integrate different blockchains; Hyperledger Besu, an Ethereum client for permissioned blockchains; and Hyperledger Caliper, a blockchain benchmark tool to measure performance.

Industry vertical areas include automotive & aviation; education & training; energy & resources; government & regulatory agencies; healthcare; manufacturing & logistics; media & entertainment; packaged goods; retail; technology; and telecommunications. Industry-focused projects include LF Energy, aimed at the digitization of the energy sector to help reach decarbonization targets; Automotive Grade Linux, to accelerate the development and adoption of a fully open software stack for the connected car; CHIPS Alliance, to accelerate open source hardware development; Civil Infrastructure Platform, to enable the development and use of software building blocks for civil infrastructure; LF Public Health, to improve global health equity and innovation; and the Academy Software Foundation, which is focused on the creation of an open source ecosystem for the animation and visual effects industry and hosts a number of related subprojects, such as OpenColorIO, a color management framework; OpenCue, a render management system; and OpenEXR, the professional-grade image storage format of the motion picture industry.

The LF estimates that its sponsored projects have developed over one billion lines of open source code which support a significant percentage of the world’s mission critical infrastructures. These projects have created over $54 billion in economic value. A recent study by the European Commission estimated that in 2018, the economic impact of open source across all its member states was between €65 and €95 billion. To better understand the global economic impact of open source, LF Research is sponsoring a study led by Henry Chesbrough, UC Berkeley professor and fellow member of the Advisory Board.

Open source advances are totally dependent on the contributions of highly skilled professionals. The LF estimates that over 750 thousand developers from around 18 thousand contributing companies have been involved in its various projects around the world. To help train open source developers, the LF offers over 130 different courses in a variety of areas, including systems administration, cloud & containers, blockchain, and IoT & embedded development, as well as 25 certification programs.

In addition, the LF, in partnership with edX (the open online learning organization created by Harvard and MIT), has been conducting an annual web survey of open source professionals and hiring managers to identify the latest trends in open source careers, the skills that are most in demand, what motivates open source professionals, how employers can attract and retain top talent, as well as diversity issues in the industry.

The 10th Annual Open Source Jobs Report was just published in June of 2022. The report found that there remains a shortage of qualified talent – 93% of hiring managers have difficulty finding experienced open source professionals; compensation has become a differentiating factor – 58% of managers have given salary increases to retain open source talent; certifications have hit a new level of importance – 69% of hiring managers are more likely to hire certified open source professionals; 63% of open source professionals believe open source runs most modern technology; and cloud skills are the most in demand, followed by Linux, DevOps, and security.

Finally, in her Austin keynote, Hilary Carter presented 10 quick facts about open source from LF Research:

  • 53% of survey respondents contribute to open source because “it’s fun”;
  • 86% of hiring managers say hiring open source talent is a priority for 2022;
  • 2/3 of developers need more training to do their jobs;
  • The most widely used open source software is developed by only a handful of contributors: 136 developers were responsible for more than 80% of the lines of code added to the top 50 packages;
  • 45% of respondents reported that their employers heavily restrict or prohibit contributions to open source projects, whether private or work-related;
  • 47% of organizations surveyed are using software bill of materials (SBOMs) today;
  • “You feel a sense of community and responsibility to shepherd this work and make it the best it can be”;
  • 1 in 5 professionals have been discriminated against or felt unwelcome;
  • People who don’t feel welcome in open source are from disproportionately underrepresented groups;
  • “When we have multiple people with varied backgrounds and opinions, we get better software”.

“Open source projects are here to stay, and they play a critical role in the ability of most organizations to deliver products and services to customers,” says the LF on its website. “As an organization, if you want to influence the open source projects that drive the success of your business, you need to participate. Having a solid contribution strategy and implementation plan for your organization puts you on the path towards being a good corporate open source citizen.”

The post The Impressive Scope of the Linux Foundation in the 21st Century Digital Economy appeared first on Linux Foundation.

How Microservices Work Together

Fri, 07/01/2022 - 22:03

The article originally appeared on the Linux Foundation’s Training and Certification blog. The author is Marco Fioretti. If you are interested in learning more about microservices, consider some of our free training courses, including Introduction to Cloud Infrastructure Technologies, Building Microservice Platforms with TARS, and WebAssembly Actors: From Cloud to Edge.

Microservices allow software developers to design highly scalable, highly fault-tolerant internet-based applications. But how do the microservices of a platform actually communicate? How do they coordinate their activities or know who to work with in the first place? Here we present the main answers to these questions, and their most important features and drawbacks. Before digging into this topic, you may want to first read the earlier pieces in this series, Microservices: Definition and Main Applications, APIs in Microservices, and Introduction to Microservices Security.

Tight coupling, orchestration and choreography

When every microservice can and must talk directly with all its partner microservices, without intermediaries, we have what is called tight coupling. The result can be very efficient, but it makes every microservice more complex and harder to change or scale. Besides, if one microservice breaks, everything breaks.

The first way to overcome these drawbacks of tight coupling is to have one central controller of all, or at least some, of the microservices of a platform, making them work synchronously, just like the conductor of an orchestra. In this orchestration – also called the request/response pattern – it is the conductor that issues requests, receives their answers, and then decides what to do next: that is, whether to send further requests to other microservices or to pass the results of that work to external users or client applications.
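
To make the request/response pattern concrete, here is a minimal Python sketch of a conductor coordinating three pretend microservices. The service names and payloads are hypothetical, and in a real platform each function call would be a synchronous network request (e.g., HTTP) to a separate service.

```python
# A hedged, in-process sketch of orchestration: one conductor issues
# requests, inspects the answers, and decides what happens next.
# All names and payloads below are hypothetical.

def inventory_service(order):
    """Pretend microservice: checks whether the order can be fulfilled."""
    return {"in_stock": True}

def payment_service(order):
    """Pretend microservice: charges the customer."""
    return {"paid": True}

def shipping_service(order):
    """Pretend microservice: schedules the delivery."""
    return {"eta_days": 3}

def conductor(order):
    """The orchestrator: everything is synchronous, and the control flow
    lives in one place, which is easy to follow but also a single point
    of failure and a bottleneck for change."""
    if not inventory_service(order)["in_stock"]:
        return {"status": "rejected", "reason": "out of stock"}
    if not payment_service(order)["paid"]:
        return {"status": "rejected", "reason": "payment failed"}
    return {"status": "accepted", **shipping_service(order)}

print(conductor({"item": "widget", "qty": 1}))
```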

The complementary approach to orchestration is the decentralized architecture called choreography. This consists of multiple microservices that work independently, each with its own responsibilities, but like dancers in the same ballet. In choreography, coordination happens without central supervision, via messages flowing among the microservices according to common, predefined rules.

That exchange of messages, as well as the discovery of which microservices are available and how to talk with them, happens via event buses. These are software components with well-defined APIs for subscribing and unsubscribing to events and for publishing events. Event buses can be implemented in several ways, exchanging messages using standards such as XML, SOAP, or the Web Services Description Language (WSDL).

When a microservice emits a message on a bus, all the microservices that subscribed to the corresponding event bus see it and know if and how to answer it asynchronously, each on its own, in no particular order. In this event-driven architecture, all a developer must code into a microservice to make it interact with the rest of the platform are the subscription commands for the event buses on which it should generate events or wait for them.
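
As an illustration of the publish/subscribe mechanics described above, here is a minimal in-process event bus sketched in Python. The event names and handlers are hypothetical; a production bus would add network transport, persistence, and delivery guarantees.

```python
from collections import defaultdict

class EventBus:
    """A toy stand-in for a real event bus; only the subscribe/publish
    API is modeled, not transport, persistence, or retries."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Every subscriber to this event type sees the message and
        # reacts on its own, in no particular order.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()

# Hypothetical downstream microservices reacting to a log-in event:
bus.subscribe("user.logged_in", lambda e: print("audit: recording", e))
bus.subscribe("user.logged_in", lambda e: print("mailer: greeting", e["user"]))

# The emitting service only announces that something happened; it neither
# knows nor cares who is listening.
bus.publish("user.logged_in", {"user": "alice"})
```

Note how the publisher has no knowledge of its subscribers: that decoupling is exactly what makes choreography easy to extend and, as discussed below, harder to trace.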

Orchestration or Choreography? It depends

The two most popular coordination choices for microservices are choreography and orchestration. Their fundamental difference is in where they place control: one distributes it among peer microservices that communicate asynchronously, the other concentrates it in one central conductor that keeps everybody else in line.

Which is better depends upon the characteristics, needs, and patterns of real-world use of each platform, with maybe just two rules that apply in all cases. The first is that actual tight coupling should almost always be avoided, because it goes against the very idea of microservices. Loose coupling with asynchronous communication is a far better match for the fundamental advantages of microservices, that is, independent deployment and maximum scalability. The real world, however, is a bit more complex, so let’s spend a few more words on the pros and cons of each approach.

As far as orchestration is concerned, its main disadvantage may be that centralized control often is, if not a synonym for, at least a shortcut to a single point of failure. A more frequent disadvantage of orchestration is that, since microservices and their conductor may be on different servers or clouds, connected only through the public Internet, performance may suffer, more or less unpredictably, unless connectivity is excellent. At another level, with orchestration virtually any addition of microservices, or change to their workflows, may require changes to many parts of the platform, not just the conductor. The same applies to failures: when an orchestrated microservice fails, there will generally be cascading effects, such as other microservices waiting to receive orders only because the conductor is temporarily stuck waiting for answers from the failed one. On the plus side, exactly because the “chain of command” and communication are well defined and not particularly flexible, it is relatively easy to find out what broke and where. For the very same reason, orchestration facilitates independent testing of distinct functions. Consequently, orchestration may be the way to go whenever the communication flows inside a microservice-based platform are well defined and relatively stable.

In many other cases, choreography may provide the best balance between independence of individual microservices, overall efficiency and simplicity of development.

With choreography, a service must only emit events, that is, notifications that something happened (e.g., a log-in request was received), and all its downstream microservices must only react to them, autonomously. Therefore, changing a microservice has no impact on the ones upstream. Even adding or removing microservices is simpler than it would be with orchestration. The flip side of this coin is that, at least without precautions, choreography creates more chances for things to go wrong, in more places, and in ways that are harder to predict, test, or debug. Throwing messages onto the network, counting on everything to be fine but with no way to know whether all recipients got them and were able to react in the right way, can make life very hard for system integrators.

Conclusion

Certain workflows are by their own nature highly synchronous and predictable. Others aren’t. This means that many real-world microservice platforms could, and probably should, mix both approaches to obtain the best combination of performance and resistance to faults or peak loads. Temporary peak loads, which may be best handled with choreography, may happen only in certain parts of a platform, while the faults with the most serious consequences, for which tighter orchestration could be safer, may occur only in others (e.g., purchases of single products by end customers versus bulk orders of the same products to restock the warehouse). For system architects, maybe the worst that can happen is to design an architecture that is either orchestration or choreography without being really conscious of which one it is (perhaps because they are just porting a pre-existing, monolithic platform to microservices), thus getting nasty surprises when something goes wrong or when new requirements turn out to be much harder than expected to design or test. Which leads to the second of the two general rules mentioned above: don’t even start to choose between orchestration and choreography for your microservices before having the best possible estimate of what their real-world loads and communication needs will be.

The post How Microservices Work Together appeared first on Linux Foundation.

Ag-Rec: Improving Agriculture Around the World with Open Source Innovation

Fri, 07/01/2022 - 04:13

One of the first projects I noticed after starting at the Linux Foundation was AgStack. It caught my attention because I have a natural inclination towards farming and ranching, although, in reality, I really just want a reason to own and use a John Deere tractor (or more than one). The reality is the closest I will ever get to being a farmer is my backyard garden with, perhaps, some chickens one day. But I did work in agriculture policy for a number of years, including some time at USDA’s Natural Resources Conservation Service. So, AgStack piqued my interest. Most people don’t really understand where their food comes from, the challenges that exist across the globe, and the innovation that is still possible in agriculture. It is encouraging to see the passion and innovation coming from the folks at AgStack.

Speaking of that, I want to dig into (pun intended) one of AgStack’s projects, Ag-Rec.

Backing up a bit, in the United States, the U.S. Department of Agriculture operates a vast network of cooperative extension offices to help farmers, ranchers, and even gardeners improve their practices. They have proven themselves to be invaluable resources and are credited with improving agriculture practices both here in the U.S. and around the globe through research, information sharing, and partnerships. Even if you aren’t a farmer, they can help you with your garden, lawn, and more. Give them a call – almost every county has an office.

The reality with extension education is that it is still heavily reliant on individuals going to offices and reading printed materials or PDFs. It could use an upgrade to help the data be more easily digestible, to make it quicker to update, to expand the information available, and to facilitate information sharing around the world. Enter Ag-Rec. 

I listened to Brandy Byrd and Gaurav Ramakrishna, both with IBM, present Ag-Rec at the Open Source Summit 2022 in Austin, Texas. 

Brandy is a native of rural South Carolina, raised in an area where everyone farmed. She recalled some words of wisdom her granddaddy always said, “Never sell the goose that laid the golden egg.” He was referring to the value of the farmland – it was their livelihood. She grew up seeing firsthand the value of farms, and she was already familiar with the value of the information from the extension service and of information sharing among farmers and ranchers beyond mornings at the local coffee shop. But she also sees a better way. 

The vision of Ag-Rec is a framework where rural farmers, from small South Carolina towns to anywhere in the world, have the same cooperative extension resources: a place to get info, advice, and community. They don’t have to go to an office or have a physical manual. They can access a wealth of information that can be shared anywhere, anytime. 

On top of that, because it is open source, anyone can use the framework to build applications and make the data available in new and useful ways. Ag-Rec is providing the base for even more innovation. Imagine the innovation we don’t know is possible. 

The Roadmap

Brandy and Gaurav shared how Ag-Rec is being built and how developers, UI experts, agriculture practices experts, end users, and others can help contribute. When the recording of the presentation is available, we will share it here. You can also go over to Ag-Rec’s GitHub for more information and to help. 

Here is the current roadmap: 

Immediate

  • Design and development of UI with Mojoe.net
  • Plant data validation and enhancements
  • Gather requirements to provision additional Extension Service recommendation data
  • Integrate User Registry for authentication and authorization

Mid-term

  • Testing and feedback from stakeholders
  • Deploy the solution on AgStack Cloud
  • Add documentation for external contribution and self-deployment

Long-term

  • Invite other Extension Services and communities
  • Iterate and continuously improve

I, for one, am excited about the potential of this program to help improve crop production, agricultural-land conservation, pest management, and more around the world. Farms feed the world, fuel economies, and so much more. With even better practices, their positive impact can be even greater while helping conserve the earth’s resources. 

The Partners


In May 2021, the Linux Foundation launched the AgStack Foundation to “build and sustain the global data infrastructure for food and agriculture to help scale digital transformation and address climate change, rural engagement, and food and water security.”  Not long after, IBM, Call for Code, and Clemson University Cooperative Extension “sought to digitize data that’s been collected over the years, making it accessible to anyone on their phone or computer to search data and find answers they need.” AgStack provided a “way to collaborate with and gain insights from a community of people working on similar ideas, and this helped the team make progress quickly.” And Ag-Rec was born. 

A special thank you to the core team cultivating (pun intended) this innovation: 

Brandy Byrd, IBM

Gaurav Ramakrishna, IBM

Sumer Johal, AgStack

Kendall Kirk, Clemson University

Mallory Douglass, Clemson University

Mojoe.net

Resources

Call for Code and AgStack open-source Ag Recommendations

Ag-Rec GitHub

AgStack Foundation

AgStack Slack

Presentation at Open Source Summit North America 2022 (YouTube link available soon)

The post Ag-Rec: Improving Agriculture Around the World with Open Source Innovation appeared first on Linux Foundation.

Delta Lake project announces the availability of 2.0 Release Candidate

Wed, 06/29/2022 - 00:30
New features bringing unmatched query performance to open data lakehouses

Today, the Delta Lake project announced the Delta Lake 2.0 release candidate, which includes a collection of new features with vast performance and usability improvements. The final release of Delta Lake 2.0 will be made available later this year.

Delta Lake has been a Linux Foundation project since October 2019 and is the open storage layer that brings reliability and performance to data lakes via the “lakehouse architecture,” which combines the best of data warehouses and data lakes under one roof. In the past three years, lakehouses have become an appealing solution for data engineers, analysts, and data scientists who want the flexibility to run different workloads on the same data with minimal complexity and no duplication – from data analysis to the development of machine learning models. Delta Lake is the most widely used lakehouse format in the world and currently sees over 7M downloads per month (and continues to grow).

Delta Lake 2.0 will bring some major improvements to query performance for Delta Lake users, such as support for change data feed, Z-order clustering, idempotent writes to Delta tables, column dropping, and many more (get more details in the Delta Lake 2.0 RC release notes). This enables any organization to build highly performant lakehouses for a wide range of data and AI use cases.
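
As a hedged sketch of what two of those features can look like in practice, the PySpark snippet below uses a hypothetical table path and column name; it assumes the delta-spark package is installed alongside Spark and that the table already exists, and the official release notes remain the authoritative reference for the 2.0 syntax.

```python
# A hedged sketch of two Delta Lake 2.0 features; the table path and
# column are hypothetical, and this assumes `pip install delta-spark`
# plus a working local Spark installation.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (SparkSession.builder
           .appName("delta-2.0-sketch")
           .config("spark.sql.extensions",
                   "io.delta.sql.DeltaSparkSessionExtension")
           .config("spark.sql.catalog.spark_catalog",
                   "org.apache.spark.sql.delta.catalog.DeltaCatalog"))
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Z-order clustering: co-locate related rows so selective queries can
# skip files they don't need to read.
spark.sql("OPTIMIZE delta.`/tmp/events` ZORDER BY (event_date)")

# Change data feed: read row-level changes between table versions
# (assumes the table was created with delta.enableChangeDataFeed = true).
changes = (spark.read.format("delta")
           .option("readChangeFeed", "true")
           .option("startingVersion", 0)
           .load("/tmp/events"))
changes.show()
```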

The announcement of Delta Lake 2.0 came on stage during the Data + AI Summit 2022 keynote, as Michael Armbrust, distinguished engineer at Databricks and a co-founder of the Delta Lake project, showed how the new features will dramatically improve performance and manageability compared to previous versions and other storage formats. Databricks initially open sourced Delta Lake and has, with the Delta Lake community, been continuously contributing new features to the project. The latest set of features included in v2.0 was first made available to Databricks customers, ensuring they are “battle-tested” for production workloads before being contributed to the project.

Databricks is not the only organization actively contributing to Delta Lake – developers from over 70 different organizations have been collaborating and contributing new features and capabilities.

“The Delta Lake project is seeing phenomenal activity and growth trends indicating the developer community wants to be a part of the project. Contributor strength has increased by 60% during the last year and the growth in total commits is up 95% and the average line of code per commit is up 900%. We are seeing this upward velocity from contributing organizations like Uber Technologies, Walmart, and CloudBees, Inc., among others,” 

— Executive Director of the Linux Foundation, Jim Zemlin. 

The Delta Lake community is inviting you to explore Delta Lake and join the community. Here are a few useful links to get you started:

The post Delta Lake project announces the availability of 2.0 Release Candidate appeared first on Linux Foundation.

Open Programmable Infrastructure: 1+1=3

Mon, 06/27/2022 - 21:31

At last week’s Open Source Summit North America, Robin Ginn, Executive Director of the OpenJS Foundation, relayed a principle her mentor taught: “1+1=3”. No, this isn’t ‘new math’; it demonstrates the principle that, working together, we are more impactful than working apart. Or, as my wife and I say all of the time, teamwork makes the dream work. 

This principle is really at the core of open source technology. Turns out it is also how I look at the Open Programmable Infrastructure project. 

Stepping back a bit, as “the new guy” around here, I am still constantly running across projects that I want to dig into to understand what they do, how they do it, and why they are important. I had that very thought last week as we launched another new project, the Open Programmable Infrastructure Project. As I was reading up on it, they talked a lot about data processing units (DPUs) and infrastructure processing units (IPUs), and I thought, I need to know what these are and why they matter. In the timeless words of The Bobs, “What exactly is it you do here?” 

What are DPUs/IPUs? 

First – and this is important – they are basically the same thing; they just have different names. Here is my oversimplified explanation of what they do.

In most personal computers, you have a separate graphics processing unit (GPU) that helps the central processing unit (CPU) handle the tasks related to processing and displaying graphics. The GPU offloads that work from the CPU, allowing the CPU to spend more time on the tasks it does best. So, working together, they can achieve more than each can separately. 

Servers powering the cloud also have CPUs, but they handle other tasks that can consume tremendous computing power, say data encryption or network packet management. Offloading these tasks to separate processors enhances the performance of the whole system, as each processor focuses on what it does best. 

In other words, 1+1=3. 

DPUs/IPUs are highly customizable

While separate processing units have been around for some time, like your PC’s GPU, their functionality was primarily dedicated to a particular task. DPUs/IPUs instead combine multiple offload capabilities that are highly customizable through software. That means a hardware manufacturer can ship these units out, and each organization uses software to configure the units according to their specific needs. And they can do this on the fly. 

Core to the cloud and its continued advancement and growth is the ability to quickly and easily create and dispose of the “hardware” you need. It wasn’t too long ago that if you wanted a server, you spent thousands of dollars on one, built all kinds of infrastructure around it, and hoped it was what you needed at the time. Now, pretty much anyone can set up a virtual server in a matter of minutes for virtually no initial cost. 

DPUs/IPUs bring this same type of flexibility to your own datacenter because they can be configured to be “specialized” with software rather than having to literally design and build a different server every time you need a different capability. 

What is Open Programmable Infrastructure (OPI)?

OPI is focused on utilizing open software and standards, as well as frameworks and toolkits, to allow for the rapid adoption and use of DPUs/IPUs. The OPI Project is both hardware and software companies coming together to establish and nurture an ecosystem to support these solutions. It “seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor’s hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate.”

In other words, competitors are coming together to agree on a common, open ecosystem they can build together and innovate, separately, on top of. They are living out 1+1=3.

I, for one, can’t wait to see the innovation.

A special thanks to Yan Fisher of Red Hat for helping me understand open programmable infrastructure concepts. He and his colleague, Kris Murphy, have a more technical blog post on Red Hat’s blog. Check it out. 

For more information on the OPI Project, visit their website and start contributing at https://github.com/opiproject/opi.  


The post Open Programmable Infrastructure: 1+1=3 appeared first on Linux Foundation.

Sharing Health Data while Preserving Privacy: The Cardea Project

Fri, 06/24/2022 - 05:29

In a new white paper, the Cardea Project at Linux Foundation Public Health demonstrates a complete, decentralized, open source system for sharing medical data in a privacy-preserving way with machine readable governance for establishing trust.

The Cardea Project began as a response to the global Covid-19 pandemic and the need for countries and airlines to admit travelers. As Covid shut down air travel and presented an existential threat to countries whose economies depended on tourism, SITA Aero, the largest provider of IT technology to the air transport sector, saw decentralized identity technology as the ideal solution to manage a proof of Covid test status for travel.

With a verifiable credential, a traveler could hold their health data and not only prove they had a specific test at a specific time, they could use it—or a derivative credential—to prove their test status to enter hotels and hospitality spaces without having to divulge any personal information. Entities that needed to verify a traveler’s test status could, in turn, avoid the complexity of direct integrations with healthcare providers and the challenge of complying with onerous health data privacy law.

Developed by Indicio with SITA and the government of Aruba, the technology was successfully trialed in 2021 and the code specifically developed for the project was donated to Linux Foundation Public Health (LFPH) as a way for any public health authority to implement an open source, privacy-preserving way to manage Covid test and vaccination data. The Cardea codebase continues to develop at LFPH as Indicio, SITA, and the Cardea Community Group extend its features and applications beyond Covid-related data.

On May 22, 2022, at the 15th KuppingerCole European Identity and Cloud Conference in Berlin, SITA won the Verifiable Credentials and Decentralized Identity Award for its implementation of decentralized identity in Aruba.

The new white paper from the Cardea Project provides an in-depth examination of the background to Cardea, the transformational power of decentralized identity technology, how it works, the implementation in Aruba, and how it can be deployed to authenticate and share multiple kinds of health data in privacy-preserving ways. As the white paper notes:

“…Cardea is more than a solution for managing COVID-19 testing; it is a way to manage any health-related process where critical and personal information needs to be shared and verified in a way that enables privacy and enhances security. It is able to meet the requirements of the 21st Century Cures Act and Europe’s General Data Protection Regulation, and in doing so enable use cases that range from simple proof of identity to interoperating ecosystems encompassing multiple cloud services, organizations, and sectors, where data needs to be, and can be, shared in immediately actionable ways.

Open source, interoperable decentralized identity technology is the only viable way to manage both the challenges of the present—where entire health systems can be held at ransom through identity-based breaches—and the opportunities presented by a digital future where digital twins, smart hospitals, and spatial web applications will reshape how healthcare is managed and delivered.”

The white paper is available here. The community development group meets weekly on Thursdays at 9:00am PST—please join us!

This article was originally published on the Linux Foundation Public Health project’s blog.


The post Sharing Health Data while Preserving Privacy: The Cardea Project appeared first on Linux Foundation.

Ensuring Patents Foster Innovation in Open Source

Fri, 06/24/2022 - 02:09

So, I am old enough to remember when the U.S. Congress temporarily intervened in a patent dispute over the technology that powered BlackBerries. A U.S. federal judge ordered the BlackBerry service to shut down until the matter was resolved, and Congress determined that BlackBerry service was too integral to commerce to be allowed to be turned off. Eventually, RIM settled the patent dispute, and the BlackBerry rode off into technology oblivion.

I am not here to argue the merits of this nearly 20-year-old case (in fact, I coincidentally had friends on both legal teams), but it was when I was introduced to the idea of companies that purchase patents with the goal of using this purchased right to extract money from other companies. 

Patents are an important legal protection to foster innovation, but, like all systems, it isn’t perfect. 

At this week’s Open Source Summit North America, we heard from Kevin Jakel with Unified Patents. Kevin is a patent attorney who saw the damage being done to innovation by patent trolls – more kindly known as non-practicing entities (NPEs). 

Kevin points out that patents are intellectual property designed to protect inventions by granting a time-bound legal monopoly, but they are only a sword, not a shield. You can use a patent to stop people, but it doesn’t give you a right to do anything yourself. He emphasizes, “You are vulnerable even if you invented something. Someone can come at you with other patents.” 

Kevin has watched a whole industry develop where patents are purchased by other entities, who then go after successful individuals or companies they claim are infringing on the patents they now legally own (but did not invent). In fact, 88% of all high-tech patent litigation is brought by NPEs.

NPEs are rational actors using the legal system to their advantage, and they are driven by the fact that almost all of the time the defendant decides to settle to avoid the costs of defending the litigation. This perpetuates the problem by both reducing the risk to the NPEs and also giving them funds to purchase additional patents for future campaigns. 

With regard to open source software, the problem is on the rise and will only get worse without strategic, consistent action to combat it.

Kevin started Unified Patents with the goal of solving this problem without incentivizing further NPE activity. He wants to increase the risk for NPEs so that they are incentivized to not pursue non-existent claims. Because NPEs are rational actors, they are going to weigh risks vs. rewards before making any decisions. 

How does Unified Patents do this? They use a three-step process: 

  • Detect – Patent Troll Campaigns
  • Disrupt – Patent Troll Assertions
  • Deter – Further Patent Troll Investment 

Unified Patents works on behalf of 11 technology areas (they call them Zones). They added an Open Source Zone in 2019 with the help of the Linux Foundation, Open Invention Network, and Microsoft. They look for demands being filed in court, selectively pick patent trolls out of the group, and challenge them, attempting to disrupt the process. They take the patent back to the U.S. Patent and Trademark Office to see if it should ever have been granted in the first place. Typically, patent trolls look for broad patents so they can sue lots of companies, making their investment more profitable and less risky. Often the patent is so broad that it probably should never have been awarded in the first place. 

The result: they end up killing a lot of patents that should never have been issued but are being exploited by patent trolls, stifling innovation. The goal is to slow the trolls down and eventually bring them to a stop as quickly as possible. Then, the next time they go looking for a patent, they look somewhere else.

And it is working. The image below shows some of the open source projects that Unified Patents has actively protected since 2019.

The Linux Foundation participates in Unified Patents’ Open Source Zone to help protect the individuals and organizations innovating every day. We encourage you to join the fight and create a true deterrence for patent trolls. It is the only way to extinguish this threat. 

Learn more at unifiedpatents.com/join

And if you are a die-hard fan of the BlackBerry’s iconic keyboard, my apologies for dredging up the painful memory of your loss. 

The post Ensuring Patents Foster Innovation in Open Source appeared first on Linux Foundation.

Open Source Brings Good Things to Life

Thu, 06/23/2022 - 05:51

If you are interested in online and in-person training and certifications in open source software development and key open source software, such as Linux and Kubernetes, see our special discount just for readers of this post. Scroll to the end.


Tomorrow night, in the skies over Congress Bridge in Austin, Texas, 300 drones will work in concert to provide a lightshow to entertain but also inform about the power of open source software to drive innovation in our world, making an impact in every life, every day.


Backing up a bit, open source software often conjures up inaccurate visions and presumptions that just aren’t true. No need to conjure those up – we all know what they are. The reality is that open source software (OSS) has transformed our world and become the backbone of our digital economy and the foundation of our digital world. 

Some quick, fun facts

  • In vertical software stacks across industries, open source penetration ranges from 20 to 85 percent of the overall software used
  • Linux fuels 90%+ of web servers and Internet-connected devices
  • The Android mobile operating system is built on the Linux kernel
  • Immensely popular libraries and tools to build web applications, such as AMP, Appium, Dojo, jQuery, Marko, Node.js, and many more, are open source
  • The world’s top 100 supercomputers run Linux
  • 100% of mainframe customers use Linux
  • The major cloud-service providers – AWS, Google, and Microsoft – all utilize open-source software to run their services and host open-source solutions delivered through the cloud

Open source software is about organizations coming together to collectively solve common problems so they can separately innovate and differentiate on top of the common baseline. They see they are better off pooling resources to make the baseline better. Sometimes it is called “coopetition.” It generally means that while companies may be in competition with each other in certain areas, they can still cooperate on others.

I borrowed from a well-known tagline from my childhood in the headline – open source does bring good things to life. 

Fueling Drone Innovation 

Drones were introduced to the world through military applications and then as toys we could all easily fly (well, my personal track record is abysmal). But the reality is that drones are seeing a variety of commercial applications, such as energy facility inspection for oil, gas, and solar, search and rescue, firefighting, and more, with new uses coming online all of the time. We aren’t at The Jetsons level yet, but they are making our lives easier and safer (and capturing some really cool aerial shots).

Much of that innovation comes from open source coopetition. 

The Linux Foundation hosts the Dronecode Foundation, which fosters open source code and standards critical to the worldwide drone industry. In a recent blog post, the general manager, Ramón Roche, discusses some of the ways open source has created an ecosystem of interoperability, which leads to users having more choice and flexibility. 

Building the Foundation

Ramón recounts how it all started with the creation of Pixhawk, open standards for drone hardware, with the goal of making drones fly autonomously using computer vision. Working to overcome the lack of computing power and technology in 2008, Lorenz Meier, then a student, set out to build the necessary flight control software and hardware. Realizing the task’s scale, he sought the help of fourteen fellow students, many of whom were more experienced than he was, to make it happen. They built Pixhawk and kick-started an open source community around various technologies. It “enabled talented people worldwide to collaborate and create a full-scale solution that was reusable and standardized. By giving their technology a permissive open source license, they opened it to everyone for use and collaboration.”

Benefits of Openness in the Real World

The innovation and technological backbone we see in drones is thanks to open software, hardware, and standards. Dronecode’s blog has interviews with Max Tubman of Freefly Systems, who talks about how open standards are enabling interoperability of various payloads amongst partners in the Open Ecosystem, and with Bobby Watts of Watts Innovation, who explains the power of standardization and how it has streamlined their interoperability with other ecosystem partners like Gremsy and Drone Rescue Systems.

Check out both interviews here and read about what is next.

The story of open source driving innovation in the drone industry is just one of thousands of examples of how open source is driving global innovation. Whether you know it or not, you use open source software every minute of every hour of every day.



Training promo

Use promo code DRONE25 here to receive up to 25% off of Linux Foundation’s training, taken by millions of students around the world. Expires on June 30, 2022. View the whole catalog, from AI and blockchain to web and application development, we have something for you.  

The post Open Source Brings Good Things to Life appeared first on Linux Foundation.

Open Source Technology Careers Become More Lucrative as Open Source Software Becomes Dominant and Talent Gaps Persist

Wed, 06/22/2022 - 21:18
The tenth annual Open Source Jobs Report from the Linux Foundation and edX was released today, examining trends in open source hiring, retention, and training

SAN FRANCISCO – June 22, 2022 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and edX, a leading global online learning platform from 2U, Inc. (Nasdaq: TWOU), have released the 10th Annual Open Source Jobs Report, examining the demand for open source talent and trends among open source professionals.

The need for open source talent is strong in light of continuing cloud adoption and digital transformation across industries. As the COVID pandemic wanes, both retention and recruitment have become more difficult than ever, with 73% of professionals reporting it would be easy to find a new role and 93% of employers struggling to find enough skilled talent. Although the majority of open source professionals (63%) reported their employment did not change in the past year, one in three did report they either left or changed jobs, which puts additional pressure on employers trying to hold onto staff with the necessary skills. While this may not reach the levels of a “Great Resignation,” this turnover is putting more pressure on companies.

“Every business has struggled with recruiting and retaining talent this past year, and the open source industry has been no different,” said Linux Foundation Executive Director Jim Zemlin. “Organizations that want to ensure they have the talent to meet their business goals need to not only differentiate themselves to attract that talent, but also look at ways to close the skills gap by developing net new and existing talent. This report provides insights and actionable steps they can take to make that happen.”

“This year’s report found that certifications have become increasingly important as organizations continue to look for ways to close skills gaps. We see modular, stackable learning as the future of education and it’s promising to see employers continuing to recognize these alternative paths to gain the skills needed for today’s jobs,” said Anant Agarwal, edX Founder and 2U Chief Open Education Officer.

The tenth annual Open Source Jobs Report examines trends in open source careers, which skills are most in-demand, the motivation for open source professionals, and how employers attract and retain qualified talent. Key findings from the Open Source Jobs Report include: 

  • There remains a shortage of qualified open source talent: The vast majority of employers (93%) report difficulty finding sufficient talent with open source skills. This trend is not going away with nearly half (46%) of employers planning to increase their open source hiring in the next six months, and 73% of open source professionals stating it would be easy to find a new role should they choose to move on.
  • Compensation has become a greater differentiating factor: Financial incentives including salary and bonuses are the most common means of keeping talent, with two in three open source professionals saying a higher salary would deter them from leaving a job. With flex time and remote work becoming the industry standard, lifestyle benefits are becoming less of a consideration, making financial incentives a bigger differentiator.
  • Certifications hit new levels of importance: An overwhelming number of employers (90%) stated that they will pay for employees to obtain certifications, and 81% of professionals plan to add certifications this year, demonstrating the weight these credentials hold. The 69% of employers who are more likely to hire an open source professional with a certification also reinforces that in light of talent shortages, prior experience is becoming less of a requirement as long as someone can demonstrate they possess the skills to do the job.
  • Cloud’s continued dominance: Cloud and container technology skills remain the most in demand this year, with 69% of employers seeking hires with these skills, and 71% of open source professionals agreeing these skills are in high demand. This is unsurprising with 77% of companies surveyed reporting they grew their use of cloud in the past year. Linux skills remain in high demand as well (61% of hiring managers), which is unsurprising considering how much Linux underpins cloud computing.
  • Cybersecurity concerns are mounting: Cybersecurity skills have the fourth biggest impact on hiring decisions, reported by 40% of employers, trailing only cloud, Linux and DevOps. Amongst professionals, 77% state they would benefit from additional cybersecurity training, demonstrating that although the importance of security is being recognized more, there is work to be done to truly secure technology deployments.
  • Companies are willing to spend more to avoid delaying projects: The most common way to close skills gaps currently, according to hiring managers, is training (43%), followed by 41% who say they hire consultants to fill these gaps, an expensive alternative and an increase from the 37% reporting this last year. This aligns with the fact that only 16% are willing to delay projects, demonstrating digital transformation activities are being prioritized even if they require costly consultants.

This year’s report is based on survey responses from 1,672 open source professionals and 559 respondents with responsibility for hiring open source professionals. Surveys were fielded online during the month of March 2022.

The full 10th Annual Open Source Jobs Report is available to download here for free.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

# # #

Media Contact:
Dan Brown
The Linux Foundation
415-420-7880
dbrown@linuxfoundation.org

The post Open Source Technology Careers Become More Lucrative as Open Source Software Becomes Dominant and Talent Gaps Persist appeared first on Linux Foundation.

Learn the Principles of DevSecOps in New, Free Training Course

Wed, 06/22/2022 - 18:35

In recent years, DevOps, which aligns incentives and the flow of work across the organization, has become the standard way of building software. By focusing on improving the flow of value, the software development lifecycle has become much more efficient and effective, leading to positive outcomes for everyone involved. However, software development and IT operations aren’t the only teams involved in the software delivery process. With increasing cybersecurity threats, it has never been more important to unify cybersecurity and other stakeholders into an effective and united value stream aligned towards continuous delivery.

At the most basic level, there is nothing separating DevSecOps from the DevOps model. However, security, and a culture designed to put security at the forefront, has often been an afterthought for many organizations. In a modern world, as costs and concerns mount from increased security attacks, it must become more prominent. It is possible to provide continuous delivery in a secure fashion; in fact, CD enhances the security profile. Getting there takes a dedication to people, culture, process, and lastly technology, breaking down silos and unifying multi-disciplinary skill sets. Organizations can optimize and align their value streams towards continuous improvement across the entire organization. 

To help educate and inform program managers and software leaders on secure and continuous software delivery, the Linux Foundation is releasing a new, free online training course, Introduction to DevSecOps for Managers (LFS180x) on the edX platform. Pre-enrollment is now open, though the course material will not be available to learners until July 20. The course focuses on providing managers and leaders with an introduction to the foundational knowledge required to lead digital organizations through their DevSecOps journey and transformation.

LFS180x starts off by discussing what DevSecOps is and why it is important. It then provides an overview of DevSecOps technologies and principles using a simple-to-follow “Tech like I’m 10” approach. Next, the course covers topics such as value stream management, platform as product, and engineering organization improvement, all driving towards defining Continuous Delivery and explaining why it is so foundational for any organization. The course also focuses on culture, metrics, cybersecurity, and agile contracting. Upon completion, participants will understand the fundamentals required in order to successfully transform any software development organization into a digital leader.

The course was developed by Dr. Rob Slaughter and Bryan Finster. Rob is an Air Force veteran and the CEO of Defense Unicorns, a company focused on secure air-gap software delivery. He is the former co-founder and Director of the Department of Defense’s DevSecOps platform team, Platform One, co-founder of the United States Space Force Space CAMP software factory, and a current member of the Navy software factory Project Blue. Bryan is a software engineer and value stream architect with over 25 years of experience as a software engineer, leading development teams delivering highly available systems for large enterprises. He founded and led the Walmart DevOps Dojo, which focused on a hands-on, immersive learning approach to helping teams solve the problem of “why can’t we safely deliver today’s changes to production today?” He is the co-author of “Modern Cybersecurity: Tales from the Near-Distant Future”, the author of the “5 Minute DevOps” blog, and one of the maintainers of MinimumCD.org. He is currently a value stream architect at Defense Unicorns at Platform One. 

Enroll today to start your journey to mastering DevSecOps practices on July 20!

The post Learn the Principles of DevSecOps in New, Free Training Course appeared first on Linux Foundation.

Free Training Course Teaches How to Secure a Software Supply Chain with Sigstore

Wed, 06/22/2022 - 18:28

Many software projects are not prepared to build securely by default, which is why the Linux Foundation and Open Source Security Foundation (OpenSSF) partnered with technology industry leaders to create Sigstore, a set of tools and a standard for signing, verifying and protecting software. Sigstore is one of several innovative technologies that have emerged to improve the integrity of the software supply chain, reducing the friction developers face in implementing security within their daily work.

To make it easier to use Sigstore’s toolkit to its full potential, OpenSSF and Linux Foundation Training & Certification are releasing a free online training course, Securing Your Software Supply Chain with Sigstore (LFS182x). This course is designed with end users of Sigstore tooling in mind: software developers, DevOps engineers, security engineers, software maintainers, and related roles. To make the best use of this course, you will need to be familiar with Linux terminals and using command line tools. You will also need to have intermediate knowledge of cloud computing and DevOps concepts, such as using and building containers and CI/CD systems like GitHub Actions, many of which can be learned through other free Linux Foundation Training & Certification courses.
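
As a small taste of that workflow, here is a hedged Python sketch that shells out to the cosign CLI, one of the Sigstore tools. The image reference is hypothetical, and the sketch assumes cosign is installed and authenticated to a registry you control; the course itself covers these flows, including keyless signing backed by OIDC identities and the Rekor transparency log, in far more depth.

```python
# A minimal sketch of key-based signing and verification with the cosign
# CLI, driven from Python for illustration; the image reference below is
# hypothetical, and cosign must already be installed and authenticated.
import subprocess

IMAGE = "registry.example.com/myorg/myapp:1.0.0"  # hypothetical image

# Generate a key pair; writes cosign.key / cosign.pub and prompts for a
# password to encrypt the private key.
subprocess.run(["cosign", "generate-key-pair"], check=True)

# Sign the container image and push the signature to the registry.
subprocess.run(["cosign", "sign", "--key", "cosign.key", IMAGE], check=True)

# Verify the signature; check=True raises if verification fails.
subprocess.run(["cosign", "verify", "--key", "cosign.pub", IMAGE], check=True)
```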

Upon completing this course, participants will be able to inform their organization’s security strategy and build software more securely by default. The hope is this will help you address attacks and vulnerabilities that can emerge at any step of the software supply chain, from writing to packaging and distributing software to end users.

Enroll today and improve your organization’s software development cybersecurity best practices.

The post Free Training Course Teaches How to Secure a Software Supply Chain with Sigstore appeared first on Linux Foundation.

New Research from Snyk and The Linux Foundation Reveals Significant Security Concerns Resulting from Open Source Software Ubiquity

Wed, 06/22/2022 - 04:51
The State of Open Source Security Highlights Many Organizations Lacking Strategies to Address Application Vulnerabilities Arising from Code Reuse

BOSTON — June 21, 2022 — Snyk, the leader in developer security, and The Linux Foundation, a global nonprofit organization enabling innovation through open source, today announced the results of their first joint research report, The State of Open Source Security.

The results detail the significant security risks resulting from the widespread use of open source software within modern application development as well as how many organizations are currently ill-prepared to effectively manage these risks. Specifically, the report found:

  • Over four out of every ten (41%) organizations don’t have high confidence in their open source software security;
  • The average application development project has 49 vulnerabilities and 80 direct dependencies (open source code called by a project); and,
  • The time it takes to fix vulnerabilities in open source projects has steadily increased, more than doubling from 49 days in 2018 to 110 days in 2021.

“Software developers today have their own supply chains – instead of assembling car parts, they are assembling code by patching together existing open source components with their unique code. While this leads to increased productivity and innovation, it has also created significant security concerns,” said Matt Jarvis, Director, Developer Relations, Snyk. “This first-of-its-kind report found widespread evidence suggesting industry naivete about the state of open source security today. Together with The Linux Foundation, we plan to leverage these findings to further educate and equip the world’s developers, empowering them to continue building fast, while also staying secure.”

“While open source software undoubtedly makes developers more efficient and accelerates innovation, the way modern applications are assembled also makes them more challenging to secure,” said Brian Behlendorf, General Manager, Open Source Security Foundation (OpenSSF). “This research clearly shows the risk is real, and the industry must work even more closely together in order to move away from poor open source or software supply chain security practices.” (You can read the OpenSSF’s blog post about the report here)

Snyk and The Linux Foundation will be discussing the report’s full findings as well as recommended actions to improve the security of open source software development during a number of upcoming events:

41% of Organizations Don’t Have High Confidence in Open Source Software Security

Modern application development teams are leveraging code from all sorts of places. They reuse code from other applications they’ve built and search code repositories to find open source components that provide the functionality they need. The use of open source requires a new way of thinking about developer security that many organizations have not yet adopted.

The report further found:

  • Less than half (49%) of organizations have a security policy for OSS development or usage (and this number is a mere 27% for medium-to-large companies); and,
  • Three in ten (30%) organizations without an open source security policy openly acknowledge that no one on their team is directly addressing open source security.

Average Application Development Project: 49 Vulnerabilities Spanning 80 Direct Dependencies

When developers incorporate an open source component in their applications, they immediately become dependent on that component and are at risk if that component contains vulnerabilities. The report shows how real this risk is, with dozens of vulnerabilities discovered across many direct dependencies in each application evaluated.

This risk is also compounded by indirect, or transitive, dependencies, which are the dependencies of your dependencies. Many developers do not even know about these dependencies, making them even more challenging to track and secure.
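To make the distinction concrete, here is a toy sketch (with invented package names) of why transitive dependencies so often outnumber the direct ones: the full set a project pulls in is the reachable set of its dependency graph, not just the edges the developer wrote down.

```python
# Toy illustration of direct vs. transitive dependencies. Package names
# are invented; the point is the graph traversal, not the packages.
from collections import deque

# project -> its *direct* dependencies
GRAPH = {
    "my-app": ["web-framework", "json-parser"],
    "web-framework": ["http-lib", "template-engine"],
    "http-lib": ["tls-lib"],
    "template-engine": [],
    "json-parser": ["unicode-lib"],
    "tls-lib": [],
    "unicode-lib": [],
}

def all_dependencies(root: str) -> set:
    """Every package reachable from root: direct plus transitive."""
    seen = set()
    queue = deque(GRAPH.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg not in seen:
            seen.add(pkg)
            queue.extend(GRAPH.get(pkg, []))
    return seen

direct = set(GRAPH["my-app"])
full = all_dependencies("my-app")
print("direct:", sorted(direct))             # the 2 packages the developer chose
print("transitive:", sorted(full - direct))  # 4 more that came along silently
```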

That said, to some degree, survey respondents are aware of the security complexities created by open source in the software supply chain today:

  • Over one-quarter of survey respondents noted they are concerned about the security impact of their direct dependencies;
  • Only 18% of respondents said they are confident in the controls they have in place for their transitive dependencies; and,
  • Forty percent of all vulnerabilities were found in transitive dependencies.

Time to Fix: More Than Doubled from 49 Days in 2018 to 110 Days in 2021

As application development has grown more complex, so have the security challenges facing development teams. While reusing open source components makes development more efficient, it also adds to the remediation burden: the report found that fixing vulnerabilities in open source projects takes almost 20% longer (18.75%) than in proprietary projects.
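Note that two distinct comparisons are in play here, and they are easy to conflate. A quick check, using only the figures reported above:

```latex
% (1) Trend over time, within open source projects:
\frac{110 - 49}{49} \approx 1.24
\quad \text{(a 124\% increase from 2018 to 2021, i.e., more than doubled)}

% (2) Open source versus proprietary, for the same period:
t_{\text{open}} = 1.1875 \, t_{\text{proprietary}}
\quad \text{(open source fixes take 18.75\% longer)}
```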

About The Report

The State of Open Source Security is a partnership between Snyk and The Linux Foundation, with support from OpenSSF, the Cloud Native Security Foundation, the Continuous Delivery Foundation and the Eclipse Foundation. The report is based on a survey of over 550 respondents in the first quarter of 2022 as well as data from Snyk Open Source, which has scanned more than 1.3B open source projects.

About Snyk

Snyk is the leader in developer security. We empower the world’s developers to build secure applications and equip security teams to meet the demands of the digital world. Our developer-first approach ensures organizations can secure all of the critical components of their applications from code to cloud, leading to increased developer productivity, revenue growth, customer satisfaction, cost savings and an overall improved security posture. Snyk’s Developer Security Platform automatically integrates with a developer’s workflow and is purpose-built for security teams to collaborate with their development teams. Snyk is used by 1,500+ customers worldwide today, including industry leaders such as Asurion, Google, Intuit, MongoDB, New Relic, Revolut, and Salesforce.

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The post New Research from Snyk and The Linux Foundation Reveals Significant Security Concerns Resulting from Open Source Software Ubiquity appeared first on Linux Foundation.

Nephio Sees Rapid Growth as More Organizations Commit to Simplify Cloud Native Automation of Telecom Network Functions

Wed, 06/22/2022 - 00:00

SAN FRANCISCO—June 21, 2022—Project Nephio, an open source initiative of partners across the telecommunications industry working towards true cloud-native automation, today announced rapid community growth and momentum.

Since the project launched in April 2022 in partnership with Google Cloud, support has grown: 28 new organizations have joined (for more than 50 contributing organizations in total), Technical Steering Committee (TSC) formation is progressing, and the first Nephio Technical Summit takes place June 22-23 in Sunnyvale, Calif. New supporters include: A5G Networks, Alicon Sweden, Amdocs, ARGELA, Capgemini Technology, CIMI Corporation, Cohere Technologies, Coredge.io, CPQD, Deutsche Telekom, HPE, Keysight Technologies, KT, Kubermatic, Kydea, MantisNet, Matrixx, Minsait, Nabstract, Prodapt, Sandvine, SigScale, Spirent Communications, Telefónica, Tata Elxsi, Tech Mahindra, Verizon, Vodafone, Wind River, and Wipro.

Nephio’s goal is to deliver carrier-grade, simple, open, Kubernetes-based cloud-native intent automation, along with common automation templates that materially simplify the deployment and management of multi-vendor cloud infrastructure and network functions across large-scale edge deployments. Nephio enables faster onboarding of network functions to production, including provisioning of the underlying cloud infrastructure with a true cloud native approach, and reduces the cost of adopting cloud and network infrastructure.
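For readers unfamiliar with intent automation, the pattern is the declarative one Kubernetes already uses: an operator records desired state as a resource, and controllers reconcile the world to match it. The sketch below applies a hypothetical intent resource with the `kubernetes` Python client; the group, kind, and spec fields are invented stand-ins for illustration, not Nephio’s actual API.

```python
# Desired state as data: a controller, not a human, does the deploying.
# The group/version/kind and spec fields are hypothetical; this only
# illustrates the intent pattern. Requires the `kubernetes` package and
# a reachable cluster with the corresponding CRD installed.
from kubernetes import client, config

intent = {
    "apiVersion": "automation.example.org/v1alpha1",  # invented group/version
    "kind": "NetworkFunctionDeployment",              # invented kind
    "metadata": {"name": "upf-edge", "namespace": "default"},
    "spec": {
        "function": "upf",                 # which network function to run
        "siteSelector": {"tier": "edge"},  # where: all sites labeled "edge"
        "replicasPerSite": 1,              # how much: one instance per site
    },
}

config.load_kube_config()  # reads ~/.kube/config
client.CustomObjectsApi().create_namespaced_custom_object(
    group="automation.example.org",
    version="v1alpha1",
    namespace="default",
    plural="networkfunctiondeployments",
    body=intent,
)
```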

“We are pleased to see Nephio experience such rapid growth in such a short time,” said Arpit Joshipura, general manager, Networking, Edge, and IoT, the Linux Foundation. “This is testament to the market need for open, collaborative initiatives that simplify network functions and cloud infrastructure across edge deployments.”

“We are heartened by the robust engagement from our growing Nephio community, and look forward to continuing to work together to set a new open standard for cloud-native networks to advance automation, network function deployment, and the management of user journeys,” said Gabriele Di Piazza, Senior Director, Telecom Product Management, Google Cloud.

Developer collaboration is underway, with Technical Steering Committee formation in progress, and the Nephio technical community will gather in person and virtually for the first Nephio Technical Summit, June 22-23 in Sunnyvale, Calif., to discuss strategy, technology enhancements, the roadmap, and operational aspects of cloud native automation in the telecommunications world. More details, including how to register, are available here: https://nephio.org/events/

More information about Nephio is available at www.nephio.org

Support from contributing organizations

A5G Networks

“A5G Networks is a leader and innovator in autonomous and distributed mobile core network software over hybrid and multi-cloud. Our unique IP helps realize significant savings in capital and operating expenditures, reduces energy requirements, improves quality of user experience and catalyzes adoption of new business models. A5G Networks is excited to join the Nephio initiative for intent-based automation and unlock the true potential of 5G networks,” said Kaitki Agarwal, founder, president and CTO of A5G Networks, Inc.

Amdocs

“Amdocs is excited to join the Nephio community and accelerate the telecom industry’s journey towards cloud-native, Kubernetes-based automation and orchestration solutions. As a leader in telco automation and a founding member of the Linux Foundation’s ONAP and EMCO projects, Amdocs is thrilled to join this new community that will address the challenges coming with the era of 5G, edge and ORAN,” said Eyal Shaked, General Manager, Open Network PBU, Amdocs.

Capgemini

“Capgemini is excited to join the Nephio community and its working groups to facilitate telecom operators’ deployments, moving the telecom industry towards a cloud-native platform and providing automation and orchestration solutions with the help of Nephio. Capgemini is an expert in O-RAN standards and has FAPI-compliant O-CU and O-DU implementations. Capgemini is thrilled to join this new community that will address the challenges coming with the era of 5G, edge and ORAN,” said Sandip Sarkar, senior director, CTO Organization, Capgemini.

CIMI Corporation

“The Nephio project promises to provide an open-source implementation of network operator service lifecycle automation based on the cloud-standard Kubernetes orchestration platform.  That’s absolutely critical for the convergence of network and cloud software,” said Tom Nolle, president, CIMI Corporation. 

Coredge.io

Arif Khan, CEO, Coredge.io, said, “Bringing agility in delivering services, centrally managing the geographically distributed cloud, and keeping cost in control are the key focus areas for operators right now. The Nephio project is meant to achieve this with Kubernetes-based cloud-native intent automation and automation templates. We are glad to contribute to Nephio with our learnings in the management of multi-cloud and distributed edge using intent-driven automation inside Coredge.”

Deutsche Telekom

“Large-scale automation is pivotal on our Software Telco journey. It is important that we work together as an industry on standards that will enable and simplify the cloud native automation of network functions. And we believe the Nephio project can play a fundamental role to speed up this process,” said Jochen Appel, VP Network Automation, Deutsche Telekom.

KT

“Cloud native is a next step on the journey of telcos’ path to successful digital transformation. Also the automated management to enable multi-vendor support and reduce cost by efficiency and agility is a key factor for operation of the cloud based network systems. The project Nephio will help open, wide, and easy adoption of such infrastructure. By co-working with partners in the project, we look forward to solving the interworking issues among multi-vendors and building up the efficient and agile orchestrated management system easily,” said Jongsik Lee, senior vice president, head of Infrastructure DX R&D Center, KT.

MantisNet

“MantisNet supports the Nephio initiative, specifically realizing the vision of autonomous networks. The Nephio project is complementary to the kind of full-stack, end-to-end, programmable visibility, powered by an open, standards-based, event-driven, composable architecture, that we are developing for a broad range of new and emerging use cases to help ensure the secure and reliable operation of cloud-native 5G applications,” said Peter Dougherty, CEO, MantisNet.

Matrixx Software

“Continued advancements in the automation of distributed Cloud Native Network Functions will be critical to delivering on the promises of new differentiated 5G services, and key to new industry revenue models,” said Marc Price, CTO, Matrixx Software. 

Minsait

“As a company helping Telcos to onboard their 5G network functions, we are aware of the current challenges they are facing. Nephio is a key initiative to fulfill the promises of truly cloud native deployment and operation that specifically addresses the unique pain points of the Telco industry,” said Francisco Rodríguez, head of network virtualization at Minsait.

Nabstract.io

“Harmonization and availability of common practices that facilitate intent driven automation for deployment and management of infrastructure and cloud native Network Functions will boost the consumption of 5G connectivity capabilities across market verticals through abstracted open APIs,” said Vaibhav Mehta, Founder, Nabstract.io.

Prodapt

“Prodapt is the leading SI for the connectedness industry, with a laser focus on software-intensive networks. As a key contributor to Project Nephio, we will jointly accelerate telcos’ journey towards becoming techcos by co-innovating, co-building, co-deploying, and co-operating distributed multi-cloud network functions. We believe our collaboration will set the foundation for fully automated, intent-driven cloud-native networks supporting differentiated 5G and distributed edge experiences,” said Rajiv Papneja, SVP & global head, Cloud & Network Services, Prodapt.

Sandvine

“Sandvine Application and Network Intelligence solutions provide machine learning-based 5G analytics over hybrid cloud, multicloud, and edge deployments, empowering service-providers and enterprise customers to analyze, optimize, and monetize application experiences. Sandvine is proud to be a part of the Nephio initiative for intent-based automation, a prelude to Network-as-a-Service offerings that will scale autonomously, even when comprised of different vendors’ Infrastructure/Platform/Software-aaS components,” said Samir Marwaha, Chief Strategy Officer, Sandvine.

SigScale

“SigScale believes Nephio could be instrumental in achieving a management continuum across multi-cloud, multi-vendor networks,” said Vance Shipley, CEO, SigScale.

Vodafone

“Building, deploying, and operating Telco workloads across distributed cloud environments is complex, so it is important to adopt cloud native best practices as we evolve, to enable us to achieve our goals for agility, automation, and optimisation,” said Tom Kivlin, principal Cloud Architect, Vodafone. “Project Nephio presents a great opportunity to drive the cloud native orchestration of our networks.  We look forward to working with our partners and the Nephio community to further develop and accelerate the simplification of network function orchestration.” 

Wind River

“As active supporters and contributors of key telco cloud-native open source projects such as StarlingX and the O-RAN Alliance, Wind River is excited to join Nephio. Nephio’s mission of simplifying the deployment and management of multi-vendor cloud infrastructure across large scale deployments is directly aligned with our strategy,” said Gil Hellmann, vice president, Telecom Solutions Engineering, Wind River. 

About Nephio

More information can be found at www.nephio.org.

About the Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

#####

The post Nephio Sees Rapid Growth as More Organizations Commit to Simplify Cloud Native Automation of Telecom Network Functions appeared first on Linux Foundation.

TODO Group Announces 2022 OSPO Survey

Tue, 06/21/2022 - 21:00

The TODO Group, together with Linux Foundation Research, LF Training & Certification, api7.ai, Futurewei, Ovio, Salesforce, VMware, and X-Labs, is conducting a survey as part of a research project on the prevalence and outcomes of open source programs among different organizations across the globe. 

Open source program offices (OSPOs) help set open source strategies and improve an organization’s software development practices. Since 2018, the TODO Group has conducted surveys to assess the state of open source programs across the industry. Today, we are pleased to announce the launch of the 2022 edition featuring additional questions to add value to the community.

“The TODO Group was created to foster vendor-neutral best practices in open source usage and OSPO cultivation. Our annual OSPO survey is one of the best tools we have to understand how open source programs and initiatives are run at organizations worldwide, and to gain insight to inform existing and potential OSPO leaders of the nuances of fostering professional open source programs.”

Chris Aniszczyk, co-founder TODO Group and CTO, CNCF

“Thanks in part to the great community contributions received this year from open source folks engaged in OSPO-related topics, the 2022 OSPO Survey goes a step further to gather insights and inform based on the most current OSPO needs across regions.”

Ana Jimenez Santamaria, OSPO Program Manager, TODO Group

The survey will generate insights into areas including:

  • The extent of adoption of open source programs and initiatives 
  • Concerns around the hiring of open source developers 
  • Perceived benefits and challenges of open source programs
  • The impact of open source on organizational strategy

The survey will be available in English, Chinese, and Japanese. Please participate now; we intend to close the survey in mid-July. Privacy and confidentiality are important to us. Neither participant names, nor their company names, will be published in the final results.

To take the 2022 OSPO Survey, click the button below:

Take Survey [English] Take Survey [Chinese] Take Survey [Japanese]

The post TODO Group Announces 2022 OSPO Survey appeared first on Linux Foundation.

Linux Foundation Announces Open Programmable Infrastructure Project to Drive Open Standards for New Class of Cloud Native Infrastructure

Tue, 06/21/2022 - 18:00
Data Processing and Infrastructure Processing Units – DPU and IPU – are changing the way enterprises deploy and manage compute resources across their networks; OPI will nurture an ecosystem to enable easy adoption of these innovative technologies 

SAN FRANCISCO, Calif. – June 21, 2022 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the new Open Programmable Infrastructure (OPI) Project. OPI will foster a community-driven, standards-based open ecosystem for next-generation architectures and frameworks based on DPU and IPU technologies. OPI is designed to facilitate the simplification of network, storage and security APIs within applications to enable more portable and performant applications in the cloud and datacenter across DevOps, SecOps and NetOps.

Founding members of OPI include Dell Technologies, F5, Intel, Keysight Technologies, Marvell, NVIDIA and Red Hat, with a growing number of contributors representing a broad range of leading companies in their fields, from silicon and device manufacturers, ISVs, and test and measurement partners to OEMs and end users.

“When new technologies emerge, there is so much opportunity for both technical and business innovation but barriers often include a lack of open standards and a thriving community to support them,” said Mike Dolan, senior vice president of Projects at the Linux Foundation. “DPUs and IPUs are great examples of some of the most promising technologies emerging today for cloud and datacenter, and OPI is poised to accelerate adoption and opportunity by supporting an ecosystem for DPU and IPU technologies.”

DPUs and IPUs are increasingly being used to support high-speed network capabilities and packet processing for applications like 5G, AI/ML, Web3, crypto and more because of their flexibility in managing resources across networking, compute, security and storage domains. Instead of the servers being the infrastructure unit for cloud, edge or the data center, operators can now create pools of disaggregated networking, compute and storage resources supported by DPUs, IPUs, GPUs, and CPUs to meet their customers’ application workloads and scaling requirements.

OPI will help establish and nurture an open and creative software ecosystem for DPU and IPU-based infrastructures. As more DPUs and IPUs are offered by various vendors, the OPI Project seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor’s hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate. The project intends to:

  • Define DPU and IPU, 
  • Delineate vendor-agnostic frameworks and architectures for DPU- and IPU-based software stacks applicable to any hardware solutions, 
  • Enable the creation of a rich open source application ecosystem,
  • Integrate with existing open source projects aligned to the same vision such as the Linux kernel, and, 
  • Create new APIs for interaction with, and between, the elements of the DPU and IPU ecosystem, including hardware, hosted applications, host node, and the remote provisioning and orchestration of software
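To picture what the vendor-agnostic frameworks in the list above could mean in code, here is a deliberately simplified sketch: one common interface, with each vendor’s DPU or IPU plugged in behind it as a driver. Every name here is invented for illustration; it is not OPI’s actual API.

```python
# One common surface for offload devices; each vendor supplies a driver
# behind it. All names are hypothetical stand-ins, not OPI's actual API.
from abc import ABC, abstractmethod

class OffloadDevice(ABC):
    """Vendor-agnostic view of a DPU/IPU for network and security offload."""

    @abstractmethod
    def provision(self, image: str) -> None:
        """Load a software image onto the device."""

    @abstractmethod
    def add_flow_rule(self, match: dict, action: str) -> int:
        """Install a packet-processing rule; return its id."""

class VendorADpu(OffloadDevice):
    """A stand-in driver for one vendor's hardware."""

    def provision(self, image: str) -> None:
        print(f"[vendor-a] flashing {image} via its management channel")

    def add_flow_rule(self, match: dict, action: str) -> int:
        print(f"[vendor-a] offloading rule {match} -> {action}")
        return 42

def deploy_firewall(device: OffloadDevice) -> None:
    # Application code never names a vendor; swapping hardware means
    # swapping the driver, which is the portability OPI aims to enable.
    device.provision("firewall-offload:v1")
    device.add_flow_rule({"dst_port": 443}, "allow")

deploy_firewall(VendorADpu())
```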

With several working groups already active, the initial technology contributions will come in the form of the Infrastructure Programmer Development Kit (IPDK) that is now an official sub-project of OPI governed by the Linux Foundation. IPDK is an open source framework of drivers and APIs for infrastructure offload and management that runs on a CPU, IPU, DPU or switch. 

In addition, NVIDIA DOCA, an open source software development framework for NVIDIA’s BlueField DPU, will be contributed to OPI to help developers create applications that can be offloaded, accelerated, and isolated across DPUs, IPUs, and other hardware platforms.

For more information visit: https://opiproject.org; start contributing here: https://github.com/opiproject/opi.

Founding Member Comments

Geng Lin, EVP and Chief Technology Officer, F5

“The emerging DPU market is a golden opportunity to reimagine how infrastructure services can be deployed and managed. With collective collaboration across many vendors representing both the silicon devices and the entire DPU software stack, an ecosystem is emerging that will provide a low friction customer experience and achieve portability of services across a DPU enabled infrastructure layer of next generation data centers, private clouds, and edge deployments.”

Patricia Kummrow, CVP and GM, Ethernet Products Group, Intel

“Intel is committed to open software to advance collaborative and competitive ecosystems and is pleased to be a founding member of the Open Programmable Infrastructure project, as well as fully supportive of the Infrastructure Programmer Development Kit (IPDK) as part of OPI. We look forward to advancing these tools with the Linux Foundation, fulfilling the need for a programmable infrastructure across the cloud, data center, communication and enterprise industries and making it easier for developers to accelerate innovation and advance technological developments.”

Ram Periakaruppan, VP and General Manager, Network Test and Security Solutions Group, Keysight Technologies 

“Programmable infrastructure built with DPUs/IPUs enables significant innovation for networking, security, storage and other areas in disaggregated cloud environments. As a founding member of the Open Programmable Infrastructure Project, we are committed to providing our test and validation expertise as we collaboratively develop and foster a standards-based open ecosystem that furthers infrastructure development, enabling cloud providers to maximize their investment.”

Cary Ussery, Vice President, Software and Support, Processors, Marvell

“Data center operators across multiple industry segments are increasingly incorporating DPUs as an integral part of their infrastructure processing to offload complex workloads from general purpose to more robust compute platforms. Marvell strongly believes that software standardization in the ecosystem will significantly contribute to the success of workload acceleration solutions. As a founding member of the OPI Project, Marvell aims to address the need for standardization of software frameworks used in provisioning, lifecycle management, orchestration, virtualization and deployment of workloads.”

Kevin Deierling, vice president of Networking at NVIDIA 

“The fundamental architecture of data centers is evolving to meet the demands of private and hyperscale clouds and AI, which require extreme performance enabled by DPUs such as the NVIDIA BlueField and open frameworks such as NVIDIA DOCA. These will support OPI to provide BlueField users with extreme acceleration, enabled by common, multi-vendor management and applications. NVIDIA is a founding member of the Linux Foundation’s Open Programmable Infrastructure Project to continue pushing the boundaries of networking performance and accelerated data center infrastructure while championing open standards and ecosystems.”

Erin Boyd, director of emerging technologies, Red Hat

“As a founding member of the Open Programmable Infrastructure project, Red Hat is committed to helping promote, grow and collaborate on the emergent advantage that new hardware stacks can bring to the cloud-native community, and we believe that the formalization of OPI into the Linux Foundation is an important step toward achieving this in an open and transparent fashion. Establishing an open standards-based ecosystem will enable us to create fully programmable infrastructure, opening up new possibilities for better performance, consumption, and the ability to more easily manage unique hardware at scale.”

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 1,800 members. The Linux Foundation is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Its projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

 

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. Red Hat is a registered trademark of Red Hat, Inc. or its subsidiaries in the U.S. and other countries.

Marvell Disclaimer: This press release contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events or achievements. Actual events or results may differ materially from those contemplated in this press release. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.

Media Contact
Carolyn Lehman
The Linux Foundation
clehman@linuxfoundation.org


The post Linux Foundation Announces Open Programmable Infrastructure Project to Drive Open Standards for New Class of Cloud Native Infrastructure appeared first on Linux Foundation.
