Open-source News

EROFS Gets Low-Latency Decompression For Much Better Performance

Phoronix - Mon, 02/20/2023 - 19:39
The EROFS file-system updates for Linux 6.3 include introducing a new option for per-CPU KThreads to provide low-latency decompression for speeding up use of compressed EROFS file-systems on Android devices...

Qt Safe Renderer 2.0 Released To Enhance Functional Safety UIs

Phoronix - Mon, 02/20/2023 - 19:29
The Qt Group has released Qt Safe Renderer 2.0 as the newest version of their Qt renderer focused on functional safety for rendering user interface elements of utmost importance such as critical interfaces within automobiles and airplanes...

Linux 6.3 RAS/EDAC Changes Bring New Features For Intel & AMD

Phoronix - Mon, 02/20/2023 - 19:06
Among the early pull requests for the now-open Linux 6.3 merge window are the RAS (Reliability, Availability and Serviceability) and EDAC (Error Detection And Correction) updates...

4 questions open source engineers should ask to mitigate risk at scale

opensource.com - Mon, 02/20/2023 - 16:00

At Shopify, we use and maintain a lot of open source projects, and every year we prepare for Black Friday Cyber Monday (BFCM) and other high-traffic events to make sure our merchants can sell to their buyers. To do this, we built a large-scale infrastructure platform that is highly complex, interconnected, and globally distributed, and that requires thoughtful technology investments from a network of teams. We’re changing how the internet works, and at our scale no single person can oversee the full design and every detail.

Over BFCM 2022, we served 75.98M requests per minute to our commerce platform at peak. That’s 1.27M requests per second. At this massive scale, in a complex and interdependent system, it would be impossible to identify and mitigate every possible risk. This article breaks down a high-level risk mitigation process into four questions that can be applied to nearly any scenario to help you make the best use of the time and resources available.

1. What are the risks?

To inform mitigation decisions, you must first understand the current state of affairs. We expand our breadth of knowledge by learning from people from all corners of the platform. We run “what could go wrong” (WCGW) exercises where anyone building or interested in infrastructure can highlight a risk. These can be technology risks, operational risks, or something else. Having this unfiltered list is a great way to get a broad understanding of what could happen.

The goal here is visibility.

2. What is worth mitigating?

Great brainstorming leaves us with a large and daunting list of risks. With limited time to fix things, the key is to prioritize what is most important to our business. To do this, we vote on the risks, then gather technical experts to discuss the highest-ranked risks in more detail, including their likelihood and severity. We make decisions about what to mitigate and how, and which team will own each action item.

The goal here is to optimize how we spend our time.

3. Who makes what decisions?

In any organization, there are times when waiting for perfect consensus is not possible or not effective. Shopify moves tremendously fast because we make sure to identify decision makers, then empower them to gather input, weigh risks and rewards, and come to a decision. Often the decision is best made by the subject matter expert or by whoever bears the most benefit or repercussions of whatever direction we choose.

The goal here is to align incentives and accountability.

4. How do you communicate?

We move fast but still need to keep stakeholders and close collaborators informed. We summarize key findings and risks from our WCGW exercises so that we all land on the same page about our risk profile. This may include key risks or single points of failure. We over-communicate so that we’re aligned and aware and stakeholders have opportunities to interject.

The goal here is alignment and awareness.

Solving the right things when there is uncertainty

Underlying all these questions is the uncertainty in our working environment. You never have all the facts or know exactly which components will fail, when, and how. The best way to deal with that uncertainty is to think in terms of probability.

Expert poker players know that great bets don’t always yield great outcomes, and bad bets don’t always yield bad outcomes. What matters is betting on the probability of outcomes: over enough rounds, your results converge to expectation. The same applies in engineering, where we constantly make bets and learn from them. Great bets require clearly distinguishing the quality of your decisions from the quality of your outcomes. It means not over-indexing on bad decisions that led to lucky outcomes, or on great decisions that happened to run into very unlucky scenarios.
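
To make the convergence point concrete, here is a small, hypothetical simulation in TypeScript: a bet that wins 60% of the time still loses plenty of individual rounds, but over many rounds the average payoff approaches its expected value, so judging a single decision by a single outcome is misleading. The win probability and payoffs are made up purely for illustration.

```typescript
// A hypothetical positive-expected-value bet: 60% chance to win 1 unit,
// 40% chance to lose 1 unit. Expected value per round = 0.6*1 + 0.4*(-1) = 0.2.
const pWin = 0.6;
const winPayoff = 1;
const lossPayoff = -1;
const expectedValue = pWin * winPayoff + (1 - pWin) * lossPayoff;

// Average payoff over a given number of simulated rounds.
function averagePayoff(rounds: number): number {
  let total = 0;
  for (let i = 0; i < rounds; i++) {
    total += Math.random() < pWin ? winPayoff : lossPayoff;
  }
  return total / rounds;
}

// Short runs are noisy (good bets can look bad); long runs converge to expectation.
for (const rounds of [10, 100, 10_000, 1_000_000]) {
  console.log(
    `${rounds} rounds: average ${averagePayoff(rounds).toFixed(3)} (expected ${expectedValue.toFixed(3)})`
  );
}
```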

Knowing that we can’t control everything also helps us stay calm, which is vital for us to practice good judgment in high-pressure situations.

When it comes to BFCM (and life in general), no one can predict the future or fully protect against all risks. The question is, what would you change looking back? In hindsight, would you feel confident that you prioritized the most important things and made thoughtful bets using the information available? Did you facilitate meaningful discussions with the right people? Could you justify your actions to your customers and their customers?

This article originally appeared on Planning in Bets: Risk Mitigation at Scale and is republished with permission.

Kubernetes policy engines: OPA vs. Kyverno vs. jsPolicy

opensource.com - Mon, 02/20/2023 - 16:00

A Kubernetes policy engine is essential for keeping your cluster safe and ensuring policies are set correctly from the outset. For example, you probably need a policy to control who has the authority to create a privileged pod. These engines define what end users can do on the cluster and ensure that clusters can communicate. Any time a Kubernetes object is created, a policy evaluates the request and validates or mutates it. Policies can apply across a namespace or to pods with a specific label in the cluster.

Kubernetes policy engines block objects that could harm or affect the cluster if they don't meet the policy's requirements. Using policies enables users to build complex configurations that other tools, such as Terraform or Ansible, cannot achieve.

The policy landscape has evolved in recent years, and the number of policy engines available continues to increase. Newer products compete against well-established tools.

This article compares three popular open source policy engines: Open Policy Agent (OPA), Kyverno, and jsPolicy. It highlights some features you should look for in a policy engine and where each of these three excels or underperforms.

Policy engine features

I'll begin by listing various features so you can compare the policy engines:

  • Supported language: A policy engine must use a language supported by Kubernetes for easy management of policy resources.
  • Validation: Validation rules decide the properties with which a resource can be created. Resources are validated when they are checked against the rules and accepted.
  • Mutation: Mutation rules can modify specific resources in the cluster. These rules modify a particular object in a given way.
  • Tooling for development and testing: These tools test a set of resources against one or more policies to compare the resources against your desired results (declared in a separate file).
  • Package management: Package management handles where your rules are stored and how they are managed in the cluster.
  • Image verification: The use of policies to verify and sign container images.
  • Extensions: Custom-built functions and plugins that extend and implement functionality, like support for new protocols.
  • Metrics: Monitoring applied policy changes, activity related to incoming requests, and the resulting decisions.
Open Policy Agent (OPA)

Open Policy Agent (OPA) is an easy-to-use policy engine that can be colocated with your service and incorporated as a sidecar, host-level daemon, or library. OPA is a general-purpose engine that manages policies across several stacks, and you can utilize it for other tasks like data filtering and CI/CD pipelines.

OPA allows you to decouple your policies from your infrastructure, service, or application so that people responsible for policy management can control the policy separately from the service. You can also decouple policies from any software service you like and write content-aware policies using any context you want. Decoupling policies will help you build services at scale, improve the capacity to locate violations and conflicts, and reduce the risk of human errors.

OPA policies use a language called Rego. Rego is a query language that extends Datalog to support structured data models such as JSON. OPA provides a framework to write tests for your policies. This framework speeds up the development of new rules and reduces the time to modify existing ones. OPA can also report performance metrics at runtime. These metrics can be requested on individual API calls and are returned inline with the API response.

OPA works by making decisions on policies, not necessarily enforcing them. When a query is sent into the system, it gets passed to OPA, which then validates the query against the policies in place and makes a decision. OPA makes policy decisions by comparing the query input against policies and data. OPA and Rego are domain-agnostic, meaning you can describe any invariant in the policies. Also, policy decisions are not restricted to yes/no or allow/deny answers. Like query inputs, your policies can create structured data as an output.
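
As a rough illustration of that request/decision flow, the sketch below (TypeScript) sends an input document to OPA's Data API and reads back a structured decision. The policy package path (`kubernetes/admission`), the port, and the shape of the decision object are assumptions for illustration only; in practice OPA returns whatever structure your Rego policy produces.

```typescript
// Minimal sketch: ask a locally running OPA for a decision over some input.
// Assumes OPA listens on localhost:8181 and that a policy is loaded under the
// hypothetical package "kubernetes/admission" returning {allow, reasons}.
interface Decision {
  allow?: boolean;
  reasons?: string[];
}

async function queryOpa(input: unknown): Promise<Decision> {
  const res = await fetch("http://localhost:8181/v1/data/kubernetes/admission", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input }),
  });
  const body = (await res.json()) as { result?: Decision };
  // OPA wraps the policy's output in a "result" field; an undefined result
  // usually means no policy was found at that path.
  return body.result ?? {};
}

// Example: evaluate a pod-like object. The decision is structured data,
// not just allow/deny, exactly as the policy author defines it.
queryOpa({ kind: "Pod", spec: { containers: [{ securityContext: { privileged: true } }] } })
  .then((decision) => console.log(decision));
```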

Kyverno

Kyverno is a Kubernetes policy engine that employs a declarative management paradigm to construct policies for mutating, validating, or generating resources or configurations. In contrast to OPA's policy-as-code approach, you declare what you want rather than writing code, and Kyverno figures out how to execute it. Kyverno policies are written in YAML and handled as ordinary Kubernetes resources, so they can be written without learning a new language, and policy results are easy to view and process. Kyverno outshines OPA here, as developing code in the Rego language can be difficult, especially without in-depth knowledge.

Kyverno works well with other developer tools like Git and kubectl. Validation rules are the primary use case for admission controllers like Kyverno, and they make it easy to confirm that resources respect the policy rules when they are created. Kyverno uses Cosign to verify and sign images: if an image is not found in the OCI registry or was not signed with the specified key, the policy rule will not validate it. Kyverno also uses Grafana to expose and collect metrics from the cluster, and it simplifies and consolidates policy distribution by using a container (OCI) registry.

Kyverno works as a dynamic admission controller, receiving HTTP callbacks from the Kubernetes API server and applying matching policies to them. Policies match resources using selectors such as name, kind, and label. Kyverno's webhook handles the admission review requests from the API server, and a webhook monitor creates and manages the required configurations. A generator controller manages generate requests and the lifespan of generated resources, while a policy controller creates, updates, deletes, and watches policy resources, running background scans at intervals to decide what course of action to take.
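
To picture what those admission callbacks look like, here is a trimmed sketch (TypeScript types only) of the AdmissionReview round trip that any dynamic admission controller, Kyverno included, participates in: the API server sends the object in a request, and the webhook answers with an allowed flag (and, for mutation, a patch). Field names follow the admission.k8s.io/v1 API, but the types are heavily abbreviated and the deny rule is just an example of what a Kyverno validation policy would express declaratively.

```typescript
// Trimmed admission.k8s.io/v1 shapes; real AdmissionReview objects carry many more fields.
interface AdmissionRequest {
  uid: string;
  kind: { group: string; version: string; kind: string };
  object: any; // the resource being created or updated, e.g. a Pod
}

interface AdmissionResponse {
  uid: string;
  allowed: boolean;
  status?: { message: string };
}

// Example decision: reject privileged pods, mirroring the kind of rule a
// Kyverno validation policy would state in YAML rather than in code.
function review(req: AdmissionRequest): AdmissionResponse {
  const containers: any[] = req.object?.spec?.containers ?? [];
  const privileged = containers.some((c) => c.securityContext?.privileged === true);
  if (req.kind.kind === "Pod" && privileged) {
    return { uid: req.uid, allowed: false, status: { message: "privileged pods are not allowed" } };
  }
  return { uid: req.uid, allowed: true };
}
```

Kyverno implements this round trip for you; policy authors only write the declarative matching and validation rules.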

jsPolicy

jsPolicy is an open source policy engine for Kubernetes that lets users build policies using JavaScript or TypeScript. Managing policies with JavaScript is less complex and more straightforward. Due to the widespread use of the JavaScript programming language, frameworks, and numerous libraries and modules, jsPolicy is a natural choice as a policy engine tool. Kyverno and OPA are more difficult to alter and validate than jsPolicy. Its distribution uses npm packages and features a built-in JavaScript SDK for creating and packaging policies.

jsPolicy is the first policy engine to have controller policies (policies that respond to Kubernetes events). This feature lets you react to Kubernetes events and validate or mutate the objects involved using jsPolicy. Like OPA, jsPolicy is a policy-as-code platform. However, it was created to avoid the difficulties of the Rego language and to provide functionality not available in Kyverno.

You can use kubectl and the Kubernetes API with jsPolicy, and every incoming request is persisted in etcd. Before a request is persisted, a webhook manager executes the relevant policies inside your cluster. jsPolicy uses prebuilt JavaScript sandboxes in the cluster to aid policy execution, increasing efficiency and speed. A policy compiler reads the jsPolicy code and compiles it into a policy bundle, which is placed and run in the sandbox to report policy violations. Violations can then be queried for alerting and for auditing the objects that violate the policy code. Since jsPolicy lets you work with JavaScript, you can use the entire JavaScript ecosystem, with its strong dev tools and testing frameworks, to write, test, and maintain policies.
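
Because jsPolicy policies are ordinary JavaScript or TypeScript, the validation logic itself is just a function over the incoming object. The sketch below shows that logic in plain TypeScript; the jsPolicy-specific wiring (its SDK helpers for reading the request and reporting deny messages) is deliberately omitted here and would follow the project's documentation.

```typescript
// Plain TypeScript validation logic of the kind a jsPolicy policy bundle would run.
// The surrounding jsPolicy SDK calls (reading the admission request, denying with
// a message) are intentionally left out of this sketch.
interface PodLike {
  metadata?: { labels?: Record<string, string> };
  spec?: { containers?: { name: string; image: string }[] };
}

function validatePod(pod: PodLike): { allowed: boolean; message?: string } {
  const containers = pod.spec?.containers ?? [];
  for (const c of containers) {
    // Example rule: require images to be pinned to a tag or digest.
    if (!c.image.includes(":") && !c.image.includes("@")) {
      return { allowed: false, message: `container ${c.name} must pin an image tag or digest` };
    }
  }
  return { allowed: true };
}

// One advantage of staying in the JavaScript ecosystem: this function can be
// unit tested with any ordinary test runner before being packaged for the cluster.
console.log(validatePod({ spec: { containers: [{ name: "app", image: "nginx" }] } }));
```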

Compare Kubernetes policy engines

The features of each policy engine differ. While they can all validate and mutate resources, they differ in other specific functions. For instance, OPA and Kyverno support extensions, but jsPolicy does not. The summary below compares the features of these three policy engines:

  • OPA
    • Language: Rego
    • Validation: Yes
    • Mutation: Alpha
    • Development/testing: Limited
    • Package management: NA
    • Image validation: Yes
    • Extensions: Yes
    • Metrics: Prometheus
  • Kyverno
    • Language: YAML
    • Validation: Yes
    • Mutation: Yes
    • Development/testing: Limited
    • Package management: NA
    • Image validation: Yes
    • Extensions: Yes
    • Metrics: Grafana
  • jsPolicy
    • Language: JavaScript
    • Validation: Yes
    • Mutation: Yes
    • Development/testing: Extensive
    • Package management: npm
    • Image validation: No
    • Extensions: No
    • Metrics: Prometheus
Wrap up

This article discussed the concepts surrounding Kubernetes policy engines and compared three different Kubernetes policy engines: OPA, Kyverno, and jsPolicy.

Deciding which engine to use comes down to your preferences. If you'd like a more direct and simple approach, or if you're well-versed in JavaScript and TypeScript, jsPolicy is a good fit. But if you prefer YAML and want to keep working directly with Kubernetes resources, Kyverno is a good option, too.

How to Use ‘tee’ Command in Linux [8 Useful Examples]

Tecmint - Mon, 02/20/2023 - 14:08

Almost all power users prefer to use the command-line interface while interacting with Linux systems. By default, all Linux commands display their output on the standard output stream. However, sometimes we need to...

The post How to Use ‘tee’ Command in Linux [8 Useful Examples] first appeared on Tecmint: Linux Howtos, Tutorials & Guides.
