Open-source News

SteamOS 3.2 Beta Brings Improved Fan Control, Experimental Refresh Rate Switching

Phoronix - Thu, 04/28/2022 - 17:03
Valve overnight released a beta of SteamOS 3.2 with some notable improvements for Steam Deck users...

Etnaviv Open-Source Driver Adds GC7000 r6204 GPU Support For The NXP i.MX 8M Plus

Phoronix - Thu, 04/28/2022 - 16:43
One of Mesa's smaller drivers that continues advancing but not receiving as much attention as the big names is Etnaviv for providing open-source, reverse-engineered graphics support for Vivante graphics IP used across different SoCs...

Create a blog post series with navigation in Jekyll

opensource.com - Thu, 04/28/2022 - 15:00
By Ayush Sharma

Blogging about individual self-contained ideas is great. However, some ideas require a more structured approach. Combining simple concepts into one big whole is a wonderful journey for both the writer and the reader, so I wanted to add a series feature to my Jekyll blog. As you may have guessed already, Jekyll's high degree of customization makes this a breeze.

Goal

I want to achieve the following goals:

  1. Each article should list the other articles in the same series.
  2. To simplify content discovery, the home page should display all series in a category.
  3. Moving articles into different series should be easy since they may evolve over time.
Step 1: Add series metadata to posts

Given Jekyll's high customizability, there are several ways to handle a series. I can leverage Jekyll variables in the config to keep a series list, use collections, or define a Liquid list somewhere in a global template and iterate over it.
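
For contrast, here is a minimal sketch of what the config-variable alternative might look like; the series name mirrors the one used below, but the post slugs are hypothetical placeholders. The downside is obvious: every new post means hand-editing the config.

# Hypothetical sketch of a series list kept in _config.yml (not the approach used in this article)
series:
  - title: "Jekyll"
    posts:
      - "a-first-jekyll-post"     # placeholder slugs, for illustration only
      - "a-second-jekyll-post"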

The cleanest way is to list the series and the posts contained in that series. For example, for all the posts in the Jekyll series, I've added the following two variables in the post front matter:

is_series: true
series_title: "Jekyll"

The first variable, is_series, is a simple boolean which says whether this post is part of a series. Booleans work great with Liquid filters and allow me to filter only those posts which are part of a series. This comes in handy later on when I'm trying to list all the series in one go.

The second variable, series_title, is the title of this series. In this case, it is Jekyll. It's important that posts in the same series contain the same title. I'll use this title to match posts to a series. If it contains extra spaces or special characters, it won't match the series.
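
As a quick sketch of how these two keys work together (the same filters reappear in the full snippet in Step 2; the hard-coded "Jekyll" title is just for illustration), a Liquid assignment like this collects every post in a given series:

{% comment %} Collect all posts flagged as part of the "Jekyll" series (illustrative title). {% endcomment %}
{% assign series_posts = site.posts | where: "is_series", true | where: "series_title", "Jekyll" %}
This series has {{ series_posts | size }} posts.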

You can view the source code here.

Step 2: Add links to posts

With the series defined, I now need to show other articles in the series. If I see a post in the Jekyll series, there should be a list of other articles in the same series. A series won't make sense without this essential navigation.

My blog uses the posts layout to display posts. To show other posts in the same series as the currently viewed post, I use the code below:

{% if page.is_series == true %}
  <div class="text-success p-3 pb-0">{{ page.series_title | upcase }} series</div>
  {% assign posts = site.posts | where: "is_series", true | where: "series_title", page.series_title | sort: 'date' %}
  {% for post in posts %}
    {% if post.title == page.title %}
      <span class="nav-link bullet-pointer mb-0">{{ post.title }}</span>
    {% else %}
      <a class="nav-link bullet-hash" href="{{ post.url }}">{{ post.title }}</a>
    {% endif %}
  {% endfor %}
{% endif %}

The logic above is as follows:

  1. Check if the is_series boolean of the current page is true, meaning the post is part of a series.
  2. Fetch posts where is_series is true and series_title is the current series_title. Sort these in ascending date order.
  3. Display links to other posts in the series or show a non-clickable span if the list item is the current post.

I've stripped some HTML out for clarity, but you can view the complete source code here.

Step 3: Add links to each series to the home page

I now have the post pages showing links to other posts in the same series. Next, I want to add a navigation option to all series under a category on my home page.

For example, the Technology section should show all series in the Technology category on the home page. The same goes for the Life Stuff, Video Games, and META categories. This makes it easier for users to find and read a complete series.

{% comment %} This runs inside the home page's existing loop over categories, where `cat` is the current category; the wrapper markup is simplified. {% endcomment %}
{% assign series = "" | split: "," %}
{% assign series_post = "" | split: "," %}
{% assign posts = site.posts | where: "Category", cat.title | where: "is_series", true | sort: 'date' %}

{% for post in posts %}
  {% unless series contains post.series_title %}
    {% assign series = series | push: post.series_title %}
    {% assign series_post = series_post | push: post %}
  {% endunless %}
{% endfor %}

{% if series.size > 0 %}
  <div class="row m-1 row-cols-1 row-cols-md-4 g-3 align-items-center">
    <div class="col">
      <span class="h3 text-success">Article series →</span>
    </div>
    {% for post in series_post %}
      {% include card-link.html url=post.url title=post.series_title %}
    {% endfor %}
  </div>
{% endif %}

To identify all series for a particular category, I use the code above, which accomplishes the following:

  1. Initializes two variables: one for series names and another for the first post of each series.
  2. Fetches all posts that have is_series set to true and belong to the current category.
  3. Adds the series_title to the series names array and the first post to the series post array.
  4. Displays the name of the series, which links to the first post in that series.

You can find the full source code here.

Why I love using Jekyll for blogging

Jekyll's high degree of customization is why I enjoy working with it so much. It's also why my blog's underlying Jekyll engine has survived redesigns and refactors. Jekyll makes it easy to add dynamic logic to your otherwise static website. And while my website remains static, the logic that renders it doesn't have to be.

You can make many improvements to what I've shown you today.

One improvement I'm thinking of is handling series post ordering. For example, the posts in a series are currently shown in ascending order of their publish date. I've published several posts belonging to a series at different times, so I can add a series_order key and use it to order articles by topic rather than by publish date. This is one of the many ways you can build your own series feature.
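
As a rough sketch of that idea (the series_order value and the modified sort are my own illustration, not part of the original code), each post's front matter would carry its position in the series:

is_series: true
series_title: "Jekyll"
series_order: 2

and the assignment from Step 2 would sort on that key instead of the date:

{% assign posts = site.posts | where: "is_series", true | where: "series_title", page.series_title | sort: "series_order" %}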

Happy coding :)

This article originally appeared on the author's blog and has been republished with permission.


Why use Apache Druid for your open source analytics database

opensource.com - Thu, 04/28/2022 - 15:00
By David Wang

Analytics isn't just for internal stakeholders anymore. If you're building an analytics application for customers, you're probably wondering what the right database backend is for you.

Your natural instinct might be to use what you know, like PostgreSQL or MySQL. You might even think to extend a data warehouse beyond its core BI dashboards and reports. Analytics for external users is an important feature, though, so you need the right tool for the job.

The key to answering this comes down to user experience. Here are some key technical considerations for delivering a good experience to the users of your external analytics apps.

Avoid delays with Apache Druid

The waiting game of processing queries in a queue can be annoying. The root cause of delays comes down to the amount of data you're analyzing, the processing power of the database, and the number of users and API calls, along with the ability for the database to keep up with the application.

There are a few ways to build an interactive data experience with any generic Online Analytical Processing (OLAP) database when there's a lot of data, but they come at a cost. Pre-computing queries makes architecture very expensive and rigid. Aggregating the data first can minimize insight. Limiting the data analyzed to only recent events doesn't give your users the complete picture.

The "no compromise" answer is an optimized architecture and data format built for interactivity at scale, which is precisely what Apache Druid, a real-time database designed to power modern analytics applications, provides.

  • First, Druid has a unique distributed and elastic architecture that pre-fetches data from a shared data layer into a near-infinite cluster of data servers. This architecture enables faster performance than a decoupled query engine like a cloud data warehouse, because there's no data to move, and more scalability than a scale-up database like PostgreSQL or MySQL.
  • Second, Druid employs automatic (sometimes called "automagic") multi-level indexing built right into the data format to drive more queries per core. This is beyond the typical OLAP columnar format with the addition of a global index, data dictionary, and bitmap index. This maximizes CPU cycles for faster crunching.
High Availability can't be a "nice to have"

If you and your dev team build a backend for internal reporting, does it really matter if it goes down for a few minutes or even longer? Not really. That's why there's always been tolerance for unplanned downtime and maintenance windows in classical OLAP databases and data warehouses.

But now your team is building an external analytics application for customers. They notice outages, and it can impact customer satisfaction, revenue, and definitely your weekend. It's why resiliency, both high availability and data durability, needs to be a top consideration in the database for external analytics applications.

Rethinking resiliency requires thinking about the design criteria. Can you protect from a node or a cluster-wide failure? How bad would it be to lose data, and what work is involved to protect your app and your data?

Servers fail. The default way to build resiliency is to replicate nodes and remember to make backups. But if you're building apps for customers, the sensitivity to data loss is much higher. The occasional backup is just not going to cut it.

The easiest answer is built right into Apache Druid's core architecture. Designed to withstand anything without losing data (even recent events), Apache Druid features a capable and simple approach to resiliency.

Druid implements High Availability (HA) and durability based on automatic, multi-level replication with shared data in object storage. It enables the HA properties you expect, and what you can think of as continuous backup to automatically protect and restore the latest state of the database even if you lose your entire cluster.

More users should be a good thing

The best applications have the most active users and the most engaging experience, and for those reasons architecting your back end for high concurrency is important. The last thing you want is frustrated customers stuck waiting on a hung application. Architecting for internal reporting is different because the concurrent user count is much smaller and finite. The reality is that the database you use for internal reporting probably just isn't the right fit for highly concurrent applications.

Architecting a database for high concurrency comes down to striking the right balance between CPU usage, scalability, and cost. The default answer for addressing concurrency is to throw more hardware at it. Logic says that if you increase the number of CPUs, you'll be able to run more queries. While true, this can also be a costly approach.

A better approach is to look at a database like Apache Druid with an optimized storage and query engine that drives down CPU usage. The operative word is "optimized." A database shouldn't read data that it doesn't have to. Use something that lets your infrastructure serve more queries in the same time span.

Saving money is a big reason why developers turn to Apache Druid for their external analytics applications. Apache Druid has a highly optimized data format that uses a combination of multi-level indexing, borrowed from the search engine world, along with data reduction algorithms to minimize the amount of processing required.

The net result is that Apache Druid delivers far more efficient processing than anything else out there. It can support from tens to thousands of queries per second at terabyte or even petabyte scale.

Build what you need today but future-proof it

Your external analytics applications are critical for your users. It's important to build the right data architecture.

The last thing you want is to start with the wrong database, and then deal with the headaches as you scale. Thankfully, Apache Druid can start small and easily scale to support any app imaginable. Apache Druid has excellent documentation, and of course it's open source, so you can try it and get up to speed quickly.


Red Hat’s The State of Enterprise Open Source report: Telecommunications industry highlights

Red Hat News - Thu, 04/28/2022 - 12:00

Red Hat’s fourth annual The State of Enterprise Open Source report highlights how organizations have adapted to new open source tools and technologies, whether due to external events or through proactively choosing methods and implementations that can provide a competitive advantage.

Mesa 22.1-rc3 Released With Backports For Intel Raptor Lake P, Zink/Kopper On Windows

Phoronix - Thu, 04/28/2022 - 07:38
Mesa 22.1 is gearing up for release in early-to-mid May, and out today is the third weekly release candidate. Mesa 22.1-rc3 continues back-porting many fixes and improvements from the feature code building up for next quarter's Mesa 22.2...

LFPH Completes the Proof-of-Concept of its GCCN Trust Registry Network

The Linux Foundation - Thu, 04/28/2022 - 04:26

This article originally appeared on the LF Public Health project’s blog. We republished it here to help spread the word about another impactful project made possible through open source. 

Linux Foundation Public Health (LFPH) launched the Global COVID Certificate Network (GCCN) project in June 2021 to facilitate the safe and free movement of individuals globally during the COVID pandemic. After nine months of dedicated work, LFPH completed the proof-of-concept (POC) of the GCCN Trust Registry Network in partnership with the Fraunhofer Institute for Industrial Engineering (Fraunhofer IAO), Symsoft Solutions, and Finema in March 2022.

With the ambition to provide a complete suite of technology to address the many challenges for COVID certificates, such as interoperability, data security and privacy protection, LFPH began the GCCN project focusing on one of the challenges not being addressed—a global trust architecture that allows seamless integration of the disparate COVID credential types. At the time, many small and large centralized trust ecosystems that implemented different technical standards and policies, such as the EU Digital COVID Certificate, emerged and began to gain traction. However, without a platform that allows these ecosystems to discover and establish trust with each other, there wouldn’t be interoperability at the global level. The GCCN Trust Registry Network was created to solve exactly this problem.

“We started the GCCN work in response to COVID, but everything we do has a vision for solving the challenge of people needing multiple credentials and constant verifications. The GCCN Trust Registry Network makes possible a new, decentralized way of trust management, which helps revolutionize how identities are shared in a privacy-preserving way. At LFPH, we are dedicated to open source innovation for public health and patient identity. We look forward to working with our members, community and stakeholders to advance the GCCN work both in the US and internationally.” – Jim St.Clair, Executive Director of LFPH

Building on the open source TRAIN Trust Management Infrastructure funded by the European Self-Sovereign Identity Framework (ESSIF) Lab, the GCCN Trust Registry Network allows different COVID certificate ecosystems, which can be a political and economic union (e.g. the EU), a nation state (e.g. India), a jurisdiction (e.g. the State of California), an industry organization (e.g. ICAO) or a company (e.g. a COVID test administrator), to join and find each other on a multi-stakeholder network, and validate each other’s COVID certificate policies. This interaction is known as a discovery mechanism. Then based on the discovery, verifiers will decide whose certificates they accept and use the Trust Registry Network to build a customized trust list based on their entry rules and check the source of incoming certificates against their known list to determine if it’s from a trusted source. If the certificate is from a trusted source, the verifiers will be able to use the public key to decrypt and decode a COVID certificate. For more information about the technical mechanism behind the GCCN Trust Registry Network and how it works, please see our two recent articles, “How does a border control officer know if a COVID certificate is valid?” and “How does a border control officer know if a traveler meets entry rules?”.


The GCCN Trust Registry Network POC is composed of two parts: onboarding to the Network, and verification of COVID certificates using the Network. The POC wouldn’t have been a success without the contributions of these partners and the ongoing support of the LFPH community. Fraunhofer IAO, the German research organization that developed the TRAIN Infrastructure, supported the effort throughout. Symsoft Solutions, a US-based enterprise web solutions provider, built the initial demo web application of the Network and the web interface for the onboarding process of the POC. Savita Farooqui, the founder of Symsoft Solutions, has been co-leading the design and technical development of GCCN with LFPH staff. Finema, a Thai company specializing in decentralized identity solutions, developed the verifier app for the POC that demonstrates how a verifier can leverage the Network for verifications.

“By working with the LFPH team on the GCCN Trust Registry Network initiative, we had the opportunity to explore and extend the TRAIN Infrastructure for COVID certificate trust management. Prior to this work, TRAIN was already implemented for a variety of use cases, such as IoT/Industry 4.0 and the verification of refugee educational documents. We believe that TRAIN will be able to provide lightweight solutions pertaining to trust management on a global scale for a wide range of public health scenarios. We are looking forward to working on the further developments of the GCCN Trust Registry Network based on the stakeholders’ needs for COVID and beyond.” – Isaac Henderson, Technical Architect, Fraunhofer IAO.

“The GCCN Trust Registry Network provides a model for managing global, distributed trust registries/authorities. The Network enrolls trust registries/authorities as entries and supports the structure and meta-data for a variety of trust registries, along with a mechanism to access and update the entries using machine and human accessible formats. We worked with the LFPH team to define the meta-data and workflows for enrollment, and developed the demo application to validate these requirements and the POC interface to integrate with the TRAIN infrastructure. We look forward to continuing to work with LFPH and other partners to further develop the GCCN Trust Registry Network and create a reusable trust management solution for use cases beyond COVID.” – Savita Farooqui, Founder, Symsoft Solutions

“Finema’s solution plays a big part in the verification of different digital vaccine credentials for the Thailand Pass portal that has been a major factor in reopening Thailand’s borders and encouraging global travel. Through that work, we saw and experienced a clear need for a highly secure global trust network that promotes greater interconnectivity and interoperability between various COVID vaccination credentials from different nations, organizations and individuals throughout the world. Finema was happy to support the POC development of the GCCN Trust Registry Network through our solutions, and we look forward to building further on this work for border reopening and other use cases.” – Pakorn Leesakul, CEO, Finema Co. Ltd.

LFPH will host two webinars about the POC, each featuring a live demo and a Q&A session: on May 10, 2022 at 8 am ET / 2 pm CEST, and on May 11, 2022 at 7 pm PT / (+1d) 10 am HKT.

In the meantime, if you have any questions about the GCCN Trust Registry Network and the POC, please email the LFPH team at info@lfph.io.

