Open-source News

Linux 5.19 Heavy On Intel Power Management & Thermal Improvements

Phoronix - Wed, 05/25/2022 - 21:30
The power management, ACPI, and thermal control updates are ready for Linux 5.19. This cycle there is a lot of PM/thermal work on the Arm side as usual, while Intel also continues with many changes, from new hardware support to improving overheat and S0ix handling on laptops...

Migrate databases to Kubernetes using Konveyor

opensource.com - Wed, 05/25/2022 - 20:21
By Yasu Katsuno

A Kubernetes database operator is useful for building scalable database servers as a database (DB) cluster. But because you have to create new artifacts expressed as YAML files, migrating existing databases to Kubernetes requires a lot of manual effort. This article introduces a new open source tool named Konveyor Tackle-DiVA-DOA (Data-intensive Validity Analyzer-Database Operator Adaptation), which automatically generates deployment-ready artifacts for database operator migration through data-centric code analysis.

What is Tackle-DiVA-DOA?

Tackle-DiVA-DOA (DOA, for short) is an open source, data-centric database configuration analytics tool in Konveyor Tackle. It imports target database configuration files (such as SQL and XML) and generates a set of Kubernetes artifacts for database migration to operators such as the Zalando Postgres Operator.

(Image: Yasuharu Katsuno and Shin Saito, CC BY-SA 4.0)

DOA finds and analyzes the settings of an existing system that uses a database management system (DBMS). Then it generates manifests (YAML files) of Kubernetes and the Postgres operator for deploying an equivalent DB cluster.

(Image: Yasuharu Katsuno and Shin Saito, CC BY-SA 4.0)


Database settings of an application consist of DBMS configurations, SQL files, DB initialization scripts, and program code that accesses the DB.

  • DBMS configurations include DBMS parameters, the cluster configuration, and credentials. DOA stores the configuration in postgres.yaml and, if you need custom credentials, the secrets in secret-db.yaml.
     
  • SQL files define and initialize tables, views, and other entities in the database. These are stored in the Kubernetes ConfigMap definition cm-sqls.yaml.
     
  • Database initialization scripts typically create databases and schemas and grant users access to the DB entities so that the SQL files work correctly. DOA tries to find initialization requirements in scripts and documents, or makes a best guess if it can't. The result is also stored in a ConfigMap named cm-init-db.yaml.
     
  • Connection details, such as the host and database name, are in some cases embedded in program code. These are rewritten to work with the migrated DB cluster.
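As an illustration of the second item, a generated cm-sqls.yaml might look roughly like the following sketch. The ConfigMap name matches the resource created later in this tutorial, but the key name and the SQL content are hypothetical placeholders, not DOA's actual output:

```yaml
# Hypothetical sketch of a cm-sqls.yaml generated by DOA.
# The data key and SQL statement are illustrative placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: trading-app-cm-sqls
data:
  create.sql: |
    CREATE TABLE account (
      id   SERIAL PRIMARY KEY,
      name VARCHAR(100)
    );
```

A Job (defined in job-init.yaml) would then mount this ConfigMap and feed the SQL files to the new cluster during initialization.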
Tutorial

DOA is expected to run within a container and comes with a script to build its image. Make sure Docker and Bash are installed on your environment, and then run the build script as follows:

$ cd /tmp
$ git clone https://github.com/konveyor/tackle-diva.git
$ cd tackle-diva/doa
$ bash util/build.sh

$ docker image ls diva-doa
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
diva-doa     2.2.0     5f9dd8f9f0eb   14 hours ago   1.27GB
diva-doa     latest    5f9dd8f9f0eb   14 hours ago   1.27GB

This builds DOA and packages it as a container image. Now DOA is ready to use.

Next, execute the bundled run-doa.sh wrapper script, which runs the DOA container. Specify the Git repository of the target database application. This example uses a Postgres database in the TradeApp application. You can use the -o option for the location of output files and the -i option for the name of the database initialization script:

$ cd /tmp/tackle-diva/doa
$ bash run-doa.sh -o /tmp/out -i start_up.sh \
      https://github.com/saud-aslam/trading-app
[OK] successfully completed.

DOA creates the /tmp/out/ directory and, under it, a directory named after the target application (in this example, trading-app, the GitHub repository name). The generated artifacts (the YAML files) are placed in that application-name directory:

$ ls -FR /tmp/out/trading-app/
/tmp/out/trading-app/:
cm-init-db.yaml  cm-sqls.yaml  create.sh*  delete.sh*  job-init.yaml  postgres.yaml  test/

/tmp/out/trading-app/test:
pod-test.yaml

The prefix of each YAML file denotes the kind of resource that the file defines. For instance, each cm-*.yaml file defines a ConfigMap, and job-init.yaml defines a Job resource. At this point, secret-db.yaml is not created, and DOA uses credentials that the Postgres operator automatically generates.
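Based on the cluster that appears later in this tutorial (four pods, PostgreSQL 13, a 1Gi volume, team trading-app), the generated postgres.yaml is roughly of the following shape. This is a sketch of a Zalando Postgres Operator manifest, not DOA's verbatim output; the exact fields DOA emits may differ:

```yaml
# Illustrative Zalando Postgres Operator manifest; values taken
# from the cluster shown later in this tutorial's kubectl output.
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: trading-app-db
spec:
  teamId: trading-app
  numberOfInstances: 4   # yields pods trading-app-db-0 .. trading-app-db-3
  postgresql:
    version: "13"
  volume:
    size: 1Gi
```

Changing numberOfInstances here is how you would scale the number of database pods before deploying.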

Now you have the resource definitions required to deploy a PostgreSQL cluster on a Kubernetes instance. You can deploy them using the utility script create.sh. Alternatively, you can use the kubectl apply command:

$ cd /tmp/out/trading-app
$ bash create.sh  # or simply "kubectl apply -f ."

configmap/trading-app-cm-init-db created
configmap/trading-app-cm-sqls created
job.batch/trading-app-init created
postgresql.acid.zalan.do/diva-trading-app-db created

The Kubernetes resources are created, including postgresql (a resource of the database cluster created by the Postgres operator), service, rs, pod, job, cm, secret, pv, and pvc. For example, you can see four database pods named trading-app-db-*, because the number of database instances is defined as four in postgres.yaml.

$ kubectl get all,postgresql,cm,secret,pv,pvc
NAME                                        READY   STATUS      RESTARTS   AGE

pod/trading-app-db-0                        1/1     Running     0          7m11s
pod/trading-app-db-1                        1/1     Running     0          5m
pod/trading-app-db-2                        1/1     Running     0          4m14s
pod/trading-app-db-3                        1/1     Running     0          4m

NAME                                      TEAM          VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE   STATUS
postgresql.acid.zalan.do/trading-app-db   trading-app   13        4      1Gi                                     15m   Running

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/trading-app-db          ClusterIP   10.97.59.252    <none>        5432/TCP   15m
service/trading-app-db-repl     ClusterIP   10.108.49.133   <none>        5432/TCP   15m

NAME                         COMPLETIONS   DURATION   AGE
job.batch/trading-app-init   1/1           2m39s      15m

Note that the Postgres operator comes with a user interface (UI), where you can find the created cluster. To open the UI in a browser, you need to export the endpoint URL. If you use minikube, run:

$ minikube service postgres-operator-ui

Then a browser window automatically opens that shows the UI.

(Image: Yasuharu Katsuno and Shin Saito, CC BY-SA 4.0)

Now you can get access to the database instances using a test pod. DOA also generated a pod definition for testing.
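The generated test/pod-test.yaml is not shown in this article, but a minimal pod of the following general shape would serve the same purpose. The pod and service names come from this tutorial's output; the image and the secret reference are assumptions based on the Zalando operator's conventions, so DOA's actual file may differ:

```yaml
# Hypothetical minimal test pod; DOA's actual pod-test.yaml may differ.
# DB_HOST and PGPASSWORD are injected so psql can reach the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: trading-app-test
spec:
  containers:
    - name: test
      image: postgres:13            # assumed image providing the psql client
      command: ["sleep", "infinity"]
      env:
        - name: DB_HOST
          value: trading-app-db     # service name of the DB cluster
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              # assumed secret name, following the Zalando operator's
              # postgres.<cluster>.credentials.* naming convention
              name: postgres.trading-app-db.credentials.postgresql.acid.zalan.do
              key: password
```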

$ kubectl apply -f /tmp/out/trading-app/test/pod-test.yaml # creates a test Pod
pod/trading-app-test created
$ kubectl exec trading-app-test -it -- bash  # log in to the pod

The database hostname and the credentials to access the DB are injected into the pod, so you can use them to access the database. Execute the psql meta-commands to show all tables and views in the database:

# printenv DB_HOST; printenv PGPASSWORD
(values of the variable are shown)

# psql -h ${DB_HOST} -U postgres -d jrvstrading -c '\dt'
             List of relations
 Schema |      Name      | Type  |  Owner  
--------+----------------+-------+----------
 public | account        | table | postgres
 public | quote          | table | postgres
 public | security_order | table | postgres
 public | trader         | table | postgres
(4 rows)

# psql -h ${DB_HOST} -U postgres -d jrvstrading -c '\dv'
                List of relations
 Schema |         Name          | Type |  Owner  
--------+-----------------------+------+----------
 public | pg_stat_kcache        | view | postgres
 public | pg_stat_kcache_detail | view | postgres
 public | pg_stat_statements    | view | postgres
 public | position              | view | postgres
(4 rows)

After the test is done, log out from the pod and remove the test pod:

# exit
$ kubectl delete -f /tmp/out/trading-app/test/pod-test.yaml

Finally, delete the created cluster using a script:

$ bash delete.sh

Welcome to Konveyor Tackle world!

To learn more about application refactoring, you can check out the Konveyor Tackle site, join the community, and access the source code on GitHub.

Konveyor Tackle-DiVA-DOA helps database engineers easily migrate database servers to Kubernetes.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

TUXEDO Aura 15 Gen2 - AMD Ryzen 5000 Series Powered, Linux Laptop

Phoronix - Wed, 05/25/2022 - 19:00
Bavarian PC vendor TUXEDO Computers that specializes in various Linux pre-loaded notebooks and desktop computers recently launched their Aura 15 Gen2 laptop focused on being an "affordable business allrounder" and powered by AMD Ryzen 5000 series processors with integrated Vega graphics to make for a nice open-source driver experience. TUXEDO sent over the Aura 15 Gen2 for a round of testing and here's a look at this Ubuntu Linux laptop's performance and capabilities.

GCC 13 Compiler Finally Adds Support For AMD GFX90A "Aldebaran"

Phoronix - Wed, 05/25/2022 - 18:28
It was over a year ago that AMD initially added the "GFX90A" target to their LLVM AMDGPU compiler back-end. This week, GFX90A support for the GNU toolchain was added to the GNU Compiler Collection for the GCC 13 release, which is not due out until next year...

Nearly Half A Million Lines Of New Graphics Driver Code Sent In For Linux 5.19

Phoronix - Wed, 05/25/2022 - 18:02
David Airlie this morning sent in the Direct Rendering Manager (DRM) subsystem updates for the Linux 5.19 merge window. Most notable with the DRM display/graphics driver updates for this next kernel version is a lot of work on getting Intel Arc Graphics DG2/Alchemist support ready, plus initial Raptor Lake enablement, as well as AMD preparing for next-generation CDNA Instinct products and RDNA3 Radeon RX 7000 series graphics cards...

Stratis 3.1 Released For Red Hat's Linux Storage Management Solution

Phoronix - Wed, 05/25/2022 - 17:20
It's been five years already since Red Hat started Stratis as a configuration daemon built atop LVM and XFS in aiming to provide advanced storage functionality in user-space akin to what is offered by the advanced Btrfs and ZFS file-systems...

ARMv9 Scalable Matrix Extension Support Lands In Linux 5.19

Phoronix - Wed, 05/25/2022 - 16:40
The 64-bit Arm (AArch64) architecture changes have been merged into the in-development Linux 5.19 kernel...

Linux's RNG Code Continues Modernization Effort With v5.19

Phoronix - Wed, 05/25/2022 - 16:35
Security researcher Jason Donenfeld known as the founder of the WireGuard project has recently been focused on modernizing the Linux kernel's random number generator (RNG/random) code. With the Linux 5.19 kernel there is yet more work landing...
