Open-source News

Linux 5.19 Makes Its Signature Verification Code FIPS Compliant

Phoronix - Wed, 06/22/2022 - 17:00
Merged yesterday into Linux 5.19 as a post-merge-window change is work to make the kernel's signature verification code FIPS compliant...

AMDVLK 2022.Q2.3 Vulkan Driver Released With Some Performance Optimizations

Phoronix - Wed, 06/22/2022 - 16:22
AMD today released a new update to AMDVLK, its official open-source Radeon Vulkan driver for Linux systems, which is derived from its internal Vulkan driver sources and plumbed to use the open-source LLVM AMDGPU shader compiler back-end. Among Linux gamers this driver remains less popular than Mesa's RADV, but today's update does deliver some game performance optimizations...

8 Best MySQL/MariaDB GUI Tools for Linux Administrators

Tecmint - Wed, 06/22/2022 - 15:00

MySQL is one of the most widely used open-source relational database management systems (RDBMS) and has been around for a long time. It is an advanced, fast, reliable, scalable, and easy-to-use RDBMS intended for mission-critical...

The post 8 Best MySQL/MariaDB GUI Tools for Linux Administrators first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

Manage your Rust toolchain using rustup

opensource.com - Wed, 06/22/2022 - 15:00
By Gaurav Kamathe

The Rust programming language is becoming increasingly popular these days, used and loved by hobbyists and corporations alike. One of the reasons for its popularity is the amazing tooling Rust provides, which makes it a joy for developers to use. Rustup is the official tool for managing Rust tooling. Not only can it install Rust and keep it updated, it also allows you to seamlessly switch between the stable, beta, and nightly Rust compilers and tooling. This article introduces rustup and some common commands to use.

Default Rust installation method

If you want to install Rust on Linux, you can use your package manager. On Fedora or CentOS Stream you can use this, for example:

$ sudo dnf install rust cargo

This provides a stable version of the Rust toolchain, and it works great if you are a beginner to Rust and want to try compiling and running simple programs. However, because Rust is a relatively new programming language, it changes fast, and a lot of new features are added frequently. These features are part of the nightly and, later, the beta version of the Rust toolchain. To try out these features, you need to install these newer versions of the toolchain without affecting the stable version on the system. Unfortunately, your distro's package manager can't help you here.

Installing Rust toolchain using rustup

To get around the above issues, you can download an install script:

$ curl --proto '=https' --tlsv1.2 \
-sSf https://sh.rustup.rs > sh.rustup.rs

Inspect it, and then run it. It doesn't require root privileges and installs Rust according to your local user privileges:

$ file sh.rustup.rs
sh.rustup.rs: POSIX shell script, ASCII text executable
$ less sh.rustup.rs
$ bash sh.rustup.rs

Select option 1 when prompted:

1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
> 1

After installation, you must source the environment file to ensure that the rustup command is immediately available for you to use:

$ source $HOME/.cargo/env

Verify that the Rust compiler (rustc) and Rust package manager (cargo) are installed:

$ rustc --version
$ cargo --version

See installed and active toolchains

You can view the different toolchains that were installed and which one is the active one using the following command:

$ rustup show

Switch between toolchains

You can view the default toolchain and change it as required. If you're currently on a stable toolchain and wish to try out a newly introduced feature available in the nightly version, you can easily switch to the nightly toolchain:

$ rustup default
$ rustup default nightly
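
If you prefer to pin a toolchain for a single project rather than changing the global default, rustup also supports per-directory overrides. A brief example (the project directory here is just a placeholder):

$ rustup toolchain install nightly
$ cd ~/projects/my-crate
$ rustup override set nightly

With the override set, rustc and cargo invoked from inside that directory use the nightly toolchain, while the rest of the system keeps the default.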

To see the exact path of the compiler and package manager of Rust:

$ rustup which rustc
$ rustup which cargo

Checking and updating the toolchain

To check whether a new Rust toolchain is available:

$ rustup check

Suppose a new version of Rust is released with some interesting features, and you want to get the latest version of Rust. You can do that with the update subcommand:

$ rustup update

Help and documentation

The above commands are more than sufficient for day-to-day use. Nonetheless, rustup has a variety of commands, and you can refer to the help section for additional details:

$ rustup --help

Rustup has an entire book on GitHub that you can use as a reference. All the Rust documentation is installed on your local system, so you do not need to be connected to the Internet. You can access the local documentation, which includes the book, standard library, and so on:

$ rustup doc
$ rustup doc --book
$ rustup doc --std
$ rustup doc --cargo

Rust is an exciting language under active development. If you’re interested in where programming is headed, keep up with Rust!

Rustup can be used to install Rust and keep it updated. It also allows you to seamlessly switch between the stable, beta, and nightly Rust compilers and tooling.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

A site reliability engineer's guide to change management

opensource.com - Wed, 06/22/2022 - 15:00
By Robert Kimani

In my previous article, I wrote about incident management (IM), an important component of site reliability engineering. In this article, I focus on change management (CM). Why is there a need to manage change? Why not simply have a free-for-all where anyone can make a change at any time?

There are three tenets of effective CM. Together they give you a framework for your CM strategy:

  • Rolling out changes progressively: There's a difference between progressive rollouts, in which you deploy changes in stages, and doing it all at once. Even though progressive change may look good on paper, there are pitfalls to avoid.
  • Detecting problems with changes: Monitoring is critical for your CM to work. I discuss and look at examples of how to set up effective monitoring to ensure that you can detect problems and act on them as quickly as possible.
  • Rollback procedures: How can you effectively roll back when things go wrong?
Why manage change?

It's estimated that 75% of production outages are due to changes: scheduled, approved changes that we all perform. This number is staggering, and it should push you to get on top of CM to ensure that everything is in order before a change is attempted. The primary reason for these staggering numbers is that there are inherent problems with changes.

Infrastructure and platforms are rapidly evolving. Not so long ago, infrastructure was not as complex, and it was easy to manage. For example, an organization might have had a few servers running an application server, web servers, and database servers. But lately, infrastructure and platforms are as complex as ever.

With the numerous sub-systems involved, it is impossible to analyze every interconnection and dependency after the fact. For instance, an application owner may not even know about a dependency on an external service until it actually breaks. Even if the application team is aware of the dependency, they may not know all of the intricacies and all the different ways the remote service will respond to their change.

You cannot possibly test for unknown scenarios. This goes back to the complexity of current infrastructure and platforms. It would be cost-prohibitive, in terms of time, to test each and every scenario before you actually apply a change. Whenever you make a change in your existing production environment, whether it's a configuration change or a code change, the truth is that you are at high risk of creating an outage. So how do we handle this problem? Let's take a peek at the three tenets of an effective CM system.

3 tenets of an effective change management system for SREs

Automation is the foundation of effective CM, and it flows across the entire CM process. This involves a few things:

  • Progressive rollouts: Instead of making one big change, the progressive rollout mechanism allows you to implement change in stages, thereby reducing the impact on the user base if something goes wrong. This is especially critical if your user base is large, as it is for web-scale companies.
  • Monitoring: You need to quickly and accurately detect any issue with changes. Your monitoring system should be able to reveal the current state of your application and service without any considerable lag.
  • Safe rollback: The CM system should roll back quickly and safely when needed. Do not attempt any change in your environment without having a bulletproof rollback plan.
Role of automation

Many of you are aware of the concept of automation, yet a lot of organizations lack it. To increase the velocity of releases, which is an important part of running an Agile organization, manual operations must be eliminated. This can be accomplished with Continuous Integration and Continuous Delivery, but they are only effective when most operations are fully automated. Automation naturally eliminates human errors due to fatigue and carelessness. Likewise, auto-scaling, an important function of cloud-based applications, requires no manual intervention; this process needs to be completely automated.

Progressive rollouts for SREs: deploying changes progressively

Changes to configuration files and binaries have serious consequences: when you make a change to an existing production system, you are at serious risk of impacting the end-user experience.

For this reason, deploying changes progressively instead of all at once reduces the impact when things go wrong, and if you need to roll back, the effort is generally smaller. The idea is that you start your change with a smaller set of clients. If you find an issue with the change, you can roll it back immediately because the size of the impact is small at that point.

There is an exception to progressive rollout: you can roll out a change globally all at once if it is an emergency fix and doing so is warranted.

Pitfalls to progressive rollouts

Rollout and rollback can get complex because you are dealing with multiple stages of a release. A lack of traffic can undermine the effectiveness of a release, especially if the initial stages target a smaller set of clients; the danger is that you may prematurely sign off on a release based on that small sample. It also requires a release pipeline in which you run one script with multiple stages.

Releases can take much longer compared to one single (big) change. In a truly web-scale application that is scattered across the globe, a change can take several days to fully roll out, which can be a problem in some instances.

Documentation is important, especially when a stage takes a long time and multiple teams must be involved to manage the change. Everything must be documented in detail in case a rollback or a roll forward is warranted.

Due to these pitfalls, it is advisable to take a deeper look into your organization's change rollout strategy. While progressive rollout is efficient and recommended, if your application is small enough and does not require frequent changes, a change all at once is the way to go. By doing it all at once, you have a clean way to roll back if the need arises.

High level overview of progressive rollout

Once the code is committed and merged, you start a "Canary release," where canaries are the test subjects. Keep in mind that they are not a replacement for complete automated testing. The name "canary" comes from the early days of mining, when a canary bird was used to detect whether a mine contained poisonous gas before humans entered.

After the test, a small set of clients is used to roll out the changes and see how things go. Once the "canaries" are signed off, move to the next stage, the "Early Adopters release," a slightly bigger set of clients you use for the rollout. Finally, once the "Early Adopters" are signed off, move to the biggest pack of the bunch: "All users."

[Image: Progressive rollout stages and their blast radius (Robert Kimani, CC BY-SA 4.0)]

"Blast radius" refers to the size of the impact if something goes wrong. It is the smallest when we do the canary rollout and actually the biggest when we rollout to all users.

Options for progressive rollouts

How you structure a progressive rollout depends on the application and the organization. For global applications, a geography-based method is an option: for instance, you can choose to release to the Americas first, followed by Europe and regions of Asia. When your rollout is dependent on departments within an organization, you can use the classic progressive rollout model used by many web-scale companies: for instance, you could start off with "Canaries," then HR, then Marketing, and then customers.

It's common to choose internal departments as the first clients for progressive rollouts, and then gradually move on to the external users.

You can also choose a size-based progressive rollout. Suppose you have one thousand servers running your application. You could start off with 10% in the beginning, then pump up the rollout to 25%, 50%, 75%, and finally 100%. In this way, only a small set of servers is affected at each step as you advance through your progressive rollout.
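
A minimal sketch of that size-based staging, assuming a hypothetical servers.txt inventory file (one host per line) and a hypothetical push-update.sh script:

#!/usr/bin/env bash
# Sketch: update a fleet in 10/25/50/75/100% stages.
set -euo pipefail

total=$(wc -l < servers.txt)
updated=0

for pct in 10 25 50 75 100; do
    target=$(( total * pct / 100 ))
    if [ "${target}" -le "${updated}" ]; then
        continue    # this stage adds no new servers (very small fleets)
    fi
    echo "Stage ${pct}%: updating servers $((updated + 1)) through ${target}"
    sed -n "$((updated + 1)),${target}p" servers.txt | xargs -r -n1 ./push-update.sh
    updated=${target}
    # In practice you would verify health here before moving to the next stage.
done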

There are periods where an application must run 2 different versions simultaneously. This is something you cannot avoid in progressive rollout situations.

Binary and configuration packages

There are three major components of a system: binary (software), data (for instance, a database), and configuration (the parameters that govern the behavior of an application).

It's considered best practice to keep binary and configuration files separate from one another. You want to use version-controlled configuration. Your configuration must be "hermetic": whenever the application derives its configuration, the result is the same regardless of when and where it is derived. This is achieved by treating configuration as code.
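
A minimal sketch of what this looks like in practice, with an illustrative configuration file and a plain git workflow (the file name and keys are made up):

$ cat app.conf
max_connections = 200
request_timeout_ms = 1500
feature_new_checkout = false

$ git add app.conf
$ git commit -m "Raise request timeout to 1500 ms"
$ git tag config-release-2022-06-22

Because the deployment pipeline checks out a specific tag, every host derives exactly the same configuration no matter when or where it deploys, which is what makes the configuration hermetic.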

Monitoring for SREs

Monitoring is a foundational capability of an SRE organization. You need to know when something is wrong with your application and affecting the end-user experience. In addition, your monitoring should help you identify the root cause.

The primary functions of monitoring are:

  • Provides visibility into service health.
  • Allows you to create alerts based on custom thresholds.
  • Analyzes trends and plans capacity.
  • Provides detailed insight into the various subsystems that make up your application or service.
  • Provides code-level metrics to understand behavior.
  • Makes use of visualization and reports.
Data Sources for Monitoring

You can monitor several aspects of your environment. These include:

  • Raw logs: Generally unstructured data generated by your application, servers, or network devices.
  • Structured event logs: Easy-to-consume information, for example Windows Event Viewer logs.
  • Metrics: A numeric measurement of a component.
  • Distributed tracing: Trace events are generally either created automatically by frameworks such as OpenTelemetry, or manually using your own code.
  • Event introspection: Helps to examine properties at runtime at a detailed level.

When choosing a monitoring tool for your SRE organization, you must consider what's most important.

Speed

How fast can you retrieve and send data into the monitoring system?

  • How fresh should the data be? The fresher the better. You don't want to be looking at data that's 2 hours old; you want the data to be as close to real time as possible.
  • Ingesting and alerting on real-time data can be expensive. You may have to invest in a platform like Splunk, InfluxDB, or Elasticsearch to fully implement this.
  • Consider your service level objective (SLO) to determine how fast the monitoring system should be. For instance, if your SLO is 2 hours, you do not have to invest in systems that process machine data in real time.
  • Querying vast amounts of data can be inefficient. You may have to invest in enterprise platforms if you need very fast retrieval of data.
Resolution check

What is the granularity of the monitoring data?

  • Do you really need to record data every second? The recommended way is to use aggregation wherever possible.
  • Use sampling if it makes sense for your data.
  • Metrics are suited for high-resolution monitoring instead of raw log files.
Alerting

What alert capabilities can the monitoring tool provide?

Ensure the monitoring system can be integrated with other event processing or third-party tools. For instance, can your monitoring system page someone in case of an emergency? Can your monitoring system integrate with a ticketing system?

You should also classify alerts with different severity levels. You may want to choose a severity level of three for a slow application versus a severity level of one for an application that is not available. Make sure alerts can be easily suppressed to avoid alert flooding; email or page flooding can be very distracting to the on-call experience, so there must be an efficient way to suppress alerts.
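
As a rough illustration of both ideas, here is a small shell sketch that maps a health probe to the two severities mentioned above and suppresses repeat pages. The URL, thresholds, and cooldown are all arbitrary assumptions, and the echo stands in for a real pager or ticketing integration:

#!/usr/bin/env bash
# Sketch: classify a probe result into a severity and suppress duplicate pages.
# Requires GNU date for millisecond timestamps.
URL="https://example.com/healthz"
STATE_DIR="/tmp/alert-state"
mkdir -p "${STATE_DIR}"

notify() {    # args: severity, message
    local sev="$1" msg="$2" stamp="${STATE_DIR}/sev${1}.last"
    # Suppression: page at most once per severity every 15 minutes.
    if [ -f "${stamp}" ] && [ $(( $(date +%s) - $(cat "${stamp}") )) -lt 900 ]; then
        return
    fi
    date +%s > "${stamp}"
    echo "SEV${sev}: ${msg}"    # stand-in for a pager or ticketing call
}

start=$(date +%s%3N)
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "${URL}")
elapsed=$(( $(date +%s%3N) - start ))

if [ "${code}" != "200" ]; then
    notify 1 "application unavailable (HTTP ${code})"
elif [ "${elapsed}" -gt 2000 ]; then
    notify 3 "application slow (${elapsed} ms)"
fi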


User interface check

How versatile is it?

  • Does your monitoring tool provide feature-rich visualization tools?
  • Can it show time series data as well as custom charts effectively?
  • Can it be easily shared? This is important because you may want to share what you found not only with other team members but also with leadership.
  • Can it be managed using code? You don't want to be a full-time monitoring administrator. You need to be able to manage your monitoring system through code.
Metrics


  • Numerical measurement of a property.
  • A counter accompanied by attributes.
  • Efficient to ingest.
  • Efficient to query.
  • May not be efficient in identifying the root cause: metrics can tell you what's going on in the system, but not why it's happening.
  • Suitable for low-cardinality data, when you do not have millions of unique values in your data.
Logs


  • Arbitrary text, usually filled with debug data.
  • Parsing is generally required.
  • Generally slower than metrics, both to ingest and to retrieve.
  • Most of the time you will need raw logs to determine the root cause.
  • No strict requirements in terms of the cardinality of data.

You should use metrics for alerting because they can be ingested, indexed, and retrieved quickly compared to logs, so analysis and alerting are fast. Logs, in contrast, are required for root cause analysis (RCA).

4 signals to monitor

There's a lot you can monitor, and at some point you have to decide what's important.

  • Latency: What end users experience in terms of responsiveness from your application.
  • Errors: These can be hard errors, such as an HTTP 500 internal server error, or soft errors, which could refer to a functionality error or a slow response time from a particular component within your application.
  • Traffic: The total number of requests coming in.
  • Saturation: Generally occurs when a component or a resource cannot handle the load anymore.
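
The first three signals can often be derived straight from an access log. A minimal sketch, assuming a combined-format web server log in which the HTTP status code is field 9 and a per-request time in seconds is the last field; saturation usually comes from resource metrics (CPU, memory, queue depth) rather than logs:

LOG=/var/log/nginx/access.log

# Traffic: total number of requests in the log
wc -l < "${LOG}"

# Errors: number of HTTP 5xx responses
awk '$9 ~ /^5/ { err++ } END { print err + 0 }' "${LOG}"

# Latency: rough 95th percentile of the per-request time in the last field
awk '{ print $NF }' "${LOG}" | sort -n | awk '{ v[NR] = $1 } END { if (NR) print v[int(NR * 0.95) + 1] }'
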
Monitoring resources

Data has to be derived from somewhere. Here are common resources used in building a monitoring system:

  • CPU: In some cases, CPU utilization can indicate an underlying problem.
  • Memory: Application and system memory. Application memory could be the Java heap size in a Java application.
  • Disk I/O: Many applications are heavily I/O dependent, so it's important to monitor disk performance.
  • Disk volume: Monitor the sizes of all your file systems.
  • Network bandwidth: It's critical to monitor the network bandwidth utilized by your application. This can provide insight for eliminating performance bottlenecks.
3 best practices for monitoring for SREs

Above all else, remember the three best practices for an effective monitoring system in your SRE organization:

  1. Configuration as code: Makes it easy to deploy monitoring to new environments.
  2. Unified dashboards: Converge to a unified pattern that enables reuse of the dashboards.
  3. Consistency: Whatever monitoring tool you use, the components that you create within the monitoring tool should follow a consistent naming convention.
Rolling back changes

To minimize user impact when a change does not go as expected, you need to be able to roll back and buy time to fix bugs. With fine-grained rollback, you can roll back only the portion of your change that was impacted, minimizing overall user impact.

If things don't go well during your "canary" release, you may want to roll back your changes. When combined with progressive rollouts, it's possible to completely eliminate user impact when you have a solid rollback mechanism in place.

Roll back fast and roll back often. Your rollback process will become bulletproof over time!

Mechanics of rollback

Automation is key. You need to have scripts and processes in place before you attempt a rollback. One way application developers roll back a change is to simply toggle flags as part of the configuration: a new feature in your application can be turned on and off by switching a flag.
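
A minimal sketch of that flag-based rollback, reusing the illustrative app.conf from earlier and assuming the flag had been turned on in the release being rolled back (the flag name is made up):

$ sed -i 's/^feature_new_checkout = true/feature_new_checkout = false/' app.conf
$ git commit -am "Rollback: disable new checkout flow"

The change then ships through the normal release pipeline, and the application picks up the flag without a new binary being deployed.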

The entire rollback could be a configuration file release. In general, a rollback of the entire release is preferred over a partial rollback. Use a package management system with version numbers and labels that are clearly documented.

Technically speaking, a rollback is still a change: you have already made a change and you are reverting it. Most rollbacks entail a scenario that was not tested before, so you have to be cautious.

Roll forward

With roll forward, instead of rolling back your changes, you release a quick "hot fix": upgraded software that includes the fixes. Rolling forward may not always be possible, and you might have to run the system in a degraded state until an upgrade is available and the roll forward is fully complete. In some cases, rolling forward may be safer than a rollback, especially when the change involves multiple sub-systems.

Change is good

Automation is key. Your builds, tests, and releases should all be automated.

Use "canaries" for catching issues early, but remember that "canaries" are not a replacement for automated testing.

Monitoring should be designed to meet your service level objectives. Choose your monitoring tools carefully. You may have to deploy more than one monitoring system.

Finally, there are three tenets of an effective CM system:

  1. Progressive rollout: Strive to do your changes in a progressive manner.
  2. Monitoring: A foundational capability for your SRE teams.
  3. Safe and fast rollbacks: Have processes and automation in place to roll back safely and quickly; this increases confidence in your SRE organization.

In the next article, the third part of this series, I will cover some important technical topics related to SRE best practices, including the circuit breaker pattern, self-healing systems, distributed consensus, effective load balancing, autoscaling, and effective health checks.

The three core tenets of effective change management for SREs are progressive rollouts, monitoring, and safe and fast rollbacks.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

How to Install Software Packages via YUM/DNF Using RHEL 9 DVD

Tecmint - Wed, 06/22/2022 - 12:00

Linux has always been known for its flexibility, and installing packages from an ISO is one example. There are many use cases in which a user wants to use an ISO/DVD to install packages. In this...

The post How to Install Software Packages via YUM/DNF Using RHEL 9 DVD first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

Ericsson and Red Hat empower service providers to build multi-vendor networks

Red Hat News - Wed, 06/22/2022 - 12:00

Service providers are uniquely positioned to serve up a new generation of immersive, personalized, ultra-reliable experiences, applications and solutions. Ericsson and Red Hat are expanding their collaboration for validation of network functions and platforms to enable service providers to bring their next generation services to market faster in a multi-vendor scenario.

Chrome 103 Released With Deflate-Raw Compression Format, Local Font Access

Phoronix - Wed, 06/22/2022 - 07:07
Google today released Chrome 103 as the newest monthly feature update to its cross-platform web browser...

New Research from Snyk and The Linux Foundation Reveals Significant Security Concerns Resulting from Open Source Software Ubiquity

The Linux Foundation - Wed, 06/22/2022 - 04:51
The State of Open Source Security Highlights Many Organizations Lacking Strategies to Address Application Vulnerabilities Arising from Code Reuse

BOSTON — June 21, 2022 — Snyk, the leader in developer security, and The Linux Foundation, a global nonprofit organization enabling innovation through open source, today announced the results of their first joint research report, The State of Open Source Security.

The results detail the significant security risks resulting from the widespread use of open source software within modern application development as well as how many organizations are currently ill-prepared to effectively manage these risks. Specifically, the report found:

  • Over four out of every ten (41%) organizations don’t have high confidence in their open source software security;
  • The average application development project has 49 vulnerabilities and 80 direct dependencies (open source code called by a project); and,
  • The time it takes to fix vulnerabilities in open source projects has steadily increased, more than doubling from 49 days in 2018 to 110 days in 2021.

“Software developers today have their own supply chains – instead of assembling car parts,  they are assembling code by patching together existing open source components with their unique code. While this leads to increased productivity and innovation, it has also created significant security concerns,” said Matt Jarvis, Director, Developer Relations, Snyk. “This first-of-its-kind report found widespread evidence suggesting industry naivete about the state of open source security today. Together with The Linux Foundation, we plan to leverage these findings to further educate and equip the world’s developers, empowering them to continue building fast, while also staying secure.”

“While open source software undoubtedly makes developers more efficient and accelerates innovation, the way modern applications are assembled also makes them more challenging to secure,” said Brian Behlendorf, General Manager, Open Source Security Foundation (OpenSSF). “This research clearly shows the risk is real, and the industry must work even more closely together in order to move away from poor open source or software supply chain security practices.” (You can read the OpenSSF’s blog post about the report here)

Snyk and The Linux Foundation will be discussing the report’s full findings as well as recommended actions to improve the security of open source software development during a number of upcoming events:

41% of Organizations Don’t Have High Confidence in Open Source Software Security

Modern application development teams are leveraging code from all sorts of places. They reuse code from other applications they’ve built and search code repositories to find open source components that provide the functionality they need. The use of open source requires a new way of thinking about developer security that many organizations have not yet adopted.

Further consider:

  • Less than half (49%) of organizations have a security policy for OSS development or usage (and this number is a mere 27% for medium-to-large companies); and,
  • Three in ten (30%) organizations without an open source security policy openly recognize that no one on their team is currently directly addressing open source security.
Average Application Development Project: 49 Vulnerabilities Spanning 80 Direct Dependencies

When developers incorporate an open source component in their applications, they immediately become dependent on that component and are at risk if that component contains vulnerabilities. The report shows how real this risk is, with dozens of vulnerabilities discovered across many direct dependencies in each application evaluated.

This risk is also compounded by indirect, or transitive, dependencies, which are the dependencies of your dependencies. Many developers do not even know about these dependencies, making them even more challenging to track and secure.

That said, to some degree, survey respondents are aware of the security complexities created by open source in the software supply chain today:

  • Over one-quarter of survey respondents noted they are concerned about the security impact of their direct dependencies;
  • Only 18% of respondents said they are confident of the controls they have in place for their transitive dependencies; and,
  • Forty percent of all vulnerabilities were found in transitive dependencies.
Time to Fix: More Than Doubled from 49 Days in 2018 to 110 Days in 2021

As application development has increased in complexity, the security challenges faced by development teams have also become increasingly complex. While the use of open source software makes development more efficient, it also adds to the remediation burden: the report found that fixing vulnerabilities in open source projects takes almost 20% longer (18.75%) than in proprietary projects.

About The Report

The State of Open Source Security is a partnership between Snyk and The Linux Foundation, with support from OpenSSF, the Cloud Native Security Foundation, the Continuous Delivery Foundation and the Eclipse Foundation. The report is based on a survey of over 550 respondents in the first quarter of 2022 as well as data from Snyk Open Source, which has scanned more than 1.3B open source projects.

About Snyk

Snyk is the leader in developer security. We empower the world’s developers to build secure applications and equip security teams to meet the demands of the digital world. Our developer-first approach ensures organizations can secure all of the critical components of their applications from code to cloud, leading to increased developer productivity, revenue growth, customer satisfaction, cost savings and an overall improved security posture. Snyk’s Developer Security Platform automatically integrates with a developer’s workflow and is purpose-built for security teams to collaborate with their development teams. Snyk is used by 1,500+ customers worldwide today, including industry leaders such as Asurion, Google, Intuit, MongoDB, New Relic, Revolut, and Salesforce.

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The post New Research from Snyk and The Linux Foundation Reveals Significant Security Concerns Resulting from Open Source Software Ubiquity appeared first on Linux Foundation.

PCI Express 7.0 Specification Announced - Hitting 128 GT/s In 2025

Phoronix - Wed, 06/22/2022 - 02:30
The PCI SIG today announced the PCI Express 7.0 specification that doubles the data rate to 128 GT/s and should be released to members in 2025...
