Open-source News

Packaging Job scripts in Kubernetes operators

opensource.com - Wed, 09/14/2022 - 15:00
By Bobby Gryzynger

When using a complex Kubernetes operator, you often have to orchestrate Jobs to perform workload tasks. Examples of Job implementations typically provide trivial scripts written directly in the manifest. In any reasonably complex application, however, handling anything beyond a trivial script can be challenging.

In the past, I've tackled this problem by including my scripts in an application image. This approach works well enough, but it has a drawback: any time a change is required, I'm forced to rebuild the application image to include the revision. That wastes a lot of time, especially when the application image takes a significant amount of time to build. It also means I'm maintaining both an application image and an operator image, and if the operator repository doesn't include the application image, I'm making related changes across repositories. Ultimately, I'm multiplying the number of commits I make and complicating my workflow, because every change means managing and synchronizing commits and image references between repositories.


Given these challenges, I wanted to find a way to keep my Job scripts within my operator's code base so that I could revise them in tandem with my operator's reconciliation logic. My goal was a workflow that only requires rebuilding the operator's image when my scripts change. Fortunately, I use the Go programming language, which provides the immensely helpful go:embed feature for packaging text files into an application's binary at compile time. By leveraging this feature, I can maintain my Job scripts within my operator's image.

Embed Job script

For demonstration purposes, my task script doesn't include any actual business logic. However, using an embedded script rather than writing the script directly into the Job manifest keeps complex scripts both well organized and abstracted from the Job definition itself.

Here's my simple example script:

$ cat embeds/task.sh
#!/bin/sh
echo "Starting task script."
# Something complicated...
echo "Task complete."
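Because the script is an ordinary file in the repository, it can be sanity-checked locally before it's ever embedded. For example (recreating the sample script under a hypothetical embeds/ directory in the operator repo):

```shell
# Recreate the example script locally (hypothetical layout: the
# operator repo keeps Job scripts under embeds/).
mkdir -p embeds
cat > embeds/task.sh <<'EOF'
#!/bin/sh
echo "Starting task script."
# Something complicated...
echo "Task complete."
EOF

# Run it directly: no image rebuild or cluster needed.
sh embeds/task.sh
```

This quick feedback loop is the point of the whole approach: the script is just a file until the operator binary is built.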

Now to work on the operator's logic.

Operator logic

Here's the process within my operator's reconciliation:

  1. Retrieve the script's contents
  2. Add the script's contents to a ConfigMap
  3. Run the ConfigMap's script within the Job by:
    a. Defining a volume that refers to the ConfigMap
    b. Making the volume's contents executable
    c. Mounting the volume to the Job 

Here's the code:

// STEP 1: retrieve the script content from the codebase.
//go:embed embeds/task.sh
var taskScript string

func (r *MyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        ctxlog := ctrllog.FromContext(ctx)
        myresource := &myresourcev1alpha.MyResource{}
        if err := r.Get(ctx, req.NamespacedName, myresource); err != nil {
                return ctrl.Result{}, err
        }

        // STEP 2: create the ConfigMap with the script's content.
        configmap := &corev1.ConfigMap{}
        err := r.Get(ctx, types.NamespacedName{Name: "my-configmap", Namespace: myresource.Namespace}, configmap)
        if err != nil && apierrors.IsNotFound(err) {

                ctxlog.Info("Creating new ConfigMap")
                configmap := &corev1.ConfigMap{
                        ObjectMeta: metav1.ObjectMeta{
                                Name:      "my-configmap",
                                Namespace: myresource.Namespace,
                        },
                        Data: map[string]string{
                                "task.sh": taskScript,
                        },
                }

                err = ctrl.SetControllerReference(myresource, configmap, r.Scheme)
                if err != nil {
                        return ctrl.Result{}, err
                }
                err = r.Create(ctx, configmap)
                if err != nil {
                        ctxlog.Error(err, "Failed to create ConfigMap")
                        return ctrl.Result{}, err
                }
                return ctrl.Result{Requeue: true}, nil
        }

        // STEP 3: create the Job with the ConfigMap attached as a volume.
        job := &batchv1.Job{}
        err = r.Get(ctx, types.NamespacedName{Name: "my-job", Namespace: myresource.Namespace}, job)
        if err != nil && apierrors.IsNotFound(err) {

                ctxlog.Info("Creating new Job")
                configmapMode := int32(0554)
                job := &batchv1.Job{
                        ObjectMeta: metav1.ObjectMeta{
                                Name:      "my-job",
                                Namespace: myresource.Namespace,
                        },
                        Spec: batchv1.JobSpec{
                                Template: corev1.PodTemplateSpec{
                                        Spec: corev1.PodSpec{
                                                RestartPolicy: corev1.RestartPolicyNever,
                                                // STEP 3a: define the ConfigMap as a volume.
                                                Volumes: []corev1.Volume{{
                                                        Name: "task-script-volume",
                                                        VolumeSource: corev1.VolumeSource{
                                                                ConfigMap: &corev1.ConfigMapVolumeSource{
                                                                        LocalObjectReference: corev1.LocalObjectReference{
                                                                                Name: "my-configmap",
                                                                        },
                                                                        DefaultMode: &configmapMode,
                                                                },
                                                        },
                                                }},
                                                Containers: []corev1.Container{
                                                        {
                                                                Name:  "task",
                                                                Image: "busybox",
                                                                Resources: corev1.ResourceRequirements{
                                                                        Requests: corev1.ResourceList{
                                                                                corev1.ResourceCPU:    *resource.NewMilliQuantity(int64(50), resource.DecimalSI),
                                                                                corev1.ResourceMemory: *resource.NewScaledQuantity(int64(250), resource.Mega),
                                                                        },
                                                                        Limits: corev1.ResourceList{
                                                                                corev1.ResourceCPU:    *resource.NewMilliQuantity(int64(100), resource.DecimalSI),
                                                                                corev1.ResourceMemory: *resource.NewScaledQuantity(int64(500), resource.Mega),
                                                                        },
                                                                },
                                                                // STEP 3b: mount the ConfigMap volume.
                                                                VolumeMounts: []corev1.VolumeMount{{
                                                                        Name:      "task-script-volume",
                                                                        MountPath: "/scripts",
                                                                        ReadOnly:  true,
                                                                }},
                                                                // STEP 3c: run the volume-mounted script.
                                                                Command: []string{"/scripts/task.sh"},
                                                        },
                                                },
                                        },
                                },
                        },
                }

                err = ctrl.SetControllerReference(myresource, job, r.Scheme)
                if err != nil {
                        return ctrl.Result{}, err
                }
                err = r.Create(ctx, job)
                if err != nil {
                        ctxlog.Error(err, "Failed to create Job")
                        return ctrl.Result{}, err
                }
                return ctrl.Result{Requeue: true}, nil
        }

        // Requeue until the Job reports at least one successfully completed Pod.
        if job.Status.Succeeded == 0 {
                ctxlog.Info("Requeuing to wait for Job to complete")
                return ctrl.Result{RequeueAfter: time.Second * 15}, nil
        }

        ctxlog.Info("All done")
        return ctrl.Result{}, nil
}

After my operator defines the Job, all that's left to do is wait for the Job to complete. Looking at my operator's logs, I can see each step in the process recorded until the reconciliation is complete:

2022-08-07T18:25:11.739Z  INFO  controller.myresource   Creating new ConfigMap  {"reconciler group": "myoperator.myorg.com", "reconciler kind": "MyResource", "name": "myresource-example", "namespace": "default"}
2022-08-07T18:25:11.765Z  INFO  controller.myresource   Creating new Job        {"reconciler group": "myoperator.myorg.com", "reconciler kind": "MyResource", "name": "myresource-example", "namespace": "default"}
2022-08-07T18:25:11.780Z  INFO  controller.myresource   All done        {"reconciler group": "myoperator.myorg.com", "reconciler kind": "MyResource", "name": "myresource-example", "namespace": "default"}

Go for Kubernetes

When it comes to managing scripts within operator-managed workloads and applications, go:embed provides a useful mechanism for simplifying the development workflow and abstracting business logic. As your operator and its scripts become more complex, this kind of abstraction and separation of concerns becomes increasingly important for the maintainability and clarity of your operator.

Embed scripts into your Kubernetes operators with Go.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Suricata – An Intrusion Detection, Prevention, and Security Tool

Tecmint - Wed, 09/14/2022 - 14:07
The post Suricata – An Intrusion Detection, Prevention, and Security Tool first appeared on Tecmint: Linux Howtos, Tutorials & Guides.

Suricata is a powerful, versatile, and open-source threat detection engine that provides functionalities for intrusion detection (IDS), intrusion prevention (IPS), and network security monitoring. It performs deep packet inspection along with pattern matching...


MGLRU Looks Like One Of The Best Linux Kernel Innovations Of The Year

Phoronix - Wed, 09/14/2022 - 07:35
Hopefully being mainlined next cycle with Linux 6.1 is the Multi-Gen LRU, or better known as MGLRU, as a superior alternative to the kernel's existing page reclamation code. Assuming it lands for Linux 6.1 as the last complete kernel cycle of 2022, this would make it one of the most exciting innovations to make it into the kernel this year...

Mesa Driver Improvement Will Yield Quicker Startup For Counter-Strike: Global Offensive

Phoronix - Wed, 09/14/2022 - 04:15
AMD Linux graphics driver engineer Pierre-Eric Pelloux-Prayer has made an improvement to Mesa's common code that should yield much faster start-up times for Valve's Counter-Strike: Global Offensive...

Open-Source NVIDIA Outlook Brighter Due To GSP Firmware, But Major Challenges Remain

Phoronix - Wed, 09/14/2022 - 03:11
Longtime open-source Linux graphics driver developer and DRM subsystem maintainer, David Airlie of Red Hat, took the stage at Linux Plumbers Conference today to talk about Nouveau and the state of the open-source NVIDIA Linux driver...

Samba 4.17 Released With Some Performance Enhancements

Phoronix - Wed, 09/14/2022 - 02:51
Samba as the open-source re-implementation of the SMB networking protocol for better file and print server interoperability with Microsoft Windows platforms is out with a new release. In the nearly six months since Samba 4.16 was introduced, Samba 4.17 has built up performance improvements/fixes and other enhancements for this widely-used open-source project...
