Open-source News

Extend Kubernetes service discovery with Stork and Quarkus

opensource.com - Mon, 04/04/2022 - 15:00
Daniel Oh - Mon, 04/04/2022 - 03:00

In a traditional monolithic architecture, applications knew where backend services lived through static hostnames, IP addresses, and ports. IT operations teams maintained these static configurations for service reliability and system stability. This Day 2 operation changed significantly once microservices began running in distributed networking systems, because microservices must communicate with multiple backend services to improve load balancing and service resiliency.


The microservices topology became much more complex once service applications were containerized and placed on Kubernetes. Because Kubernetes can terminate and recreate application containers at any time, applications can't know static endpoint information in advance. Microservices don't need to be configured with static information about backend applications, because Kubernetes handles service discovery, load balancing, and self-healing dynamically and automatically.

However, Kubernetes doesn't support programmatic service discovery or client-side load balancing through integrated application configurations. SmallRye Stork is an open source project that solves this problem, providing the following benefits and features:

  • Augment service discovery capabilities
  • Support for Consul and Kubernetes
  • Custom client load-balancing features
  • Manageable and programmatic APIs

Nevertheless, Java developers need some time to adapt to the Stork project and integrate it with an existing Java framework. Luckily, Quarkus lets developers plug Stork's features into Java applications, as this article demonstrates.

Create a new Quarkus project using Quarkus CLI

Using the Quarkus command-line tool (CLI), create a new Maven project. The following command will scaffold a new reactive RESTful API application:

$ quarkus create app quarkus-stork-example -x rest-client-reactive,resteasy-reactive  

The output should look like this:

...
[SUCCESS] ✅  quarkus project has been successfully generated in:
--> /Users/danieloh/Downloads/demo/quarkus-stork-example
...

Open the pom.xml file and add the following Stork dependencies: stork-service-discovery-consul and smallrye-mutiny-vertx-consul-client.


<dependency>
  <groupId>io.smallrye.stork</groupId>
  <artifactId>stork-service-discovery-consul</artifactId>
</dependency>
<dependency>
  <groupId>io.smallrye.reactive</groupId>
  <artifactId>smallrye-mutiny-vertx-consul-client</artifactId>
</dependency>
Create new services for discovery

Create two services (hero and villain) that the Stork load balancer will discover. Create a new services directory in src/main/java/org/acme. Then create a new HeroService.java file in src/main/java/org/acme/services.

Add the following code to the HeroService.java file that creates a new HTTP server based on the Vert.x reactive engine:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

import org.eclipse.microprofile.config.inject.ConfigProperty;

import io.quarkus.runtime.StartupEvent;
import io.vertx.mutiny.core.Vertx;

@ApplicationScoped
public class HeroService {

    @ConfigProperty(name = "hero-service-port", defaultValue = "9000") int port;

    // Start a bare Vert.x HTTP server on startup that answers every request
    public void init(@Observes StartupEvent ev, Vertx vertx) {
        vertx.createHttpServer()
                .requestHandler(req -> req.response().endAndForget("Super Hero!"))
                .listenAndAwait(port);
    }
}

Next, create another service by creating a VillainService.java file. The only difference is that you need to set a different name, port, and return message in the init() method as below:

@ConfigProperty(name = "villain-service-port", defaultValue = "9001") int port;

public void init(@Observes StartupEvent ev, Vertx vertx) {
        vertx.createHttpServer()
                .requestHandler(req -> req.response().endAndForget("Super Villain!"))
                .listenAndAwait(port);
}

Register the services to Consul

As mentioned earlier, Stork supports Consul, via the Vert.x Consul Client, for service registration. Create a new ConsulRegistration.java file in src/main/java/org/acme/services to register the two services under the same name (my-rest-service). Add the following ConfigProperty fields and init() method:

@ApplicationScoped
public class ConsulRegistration {

    @ConfigProperty(name = "consul.host") String host;
    @ConfigProperty(name = "consul.port") int port;

    @ConfigProperty(name = "hero-service-port", defaultValue = "9000") int hero;
    @ConfigProperty(name = "villain-service-port", defaultValue = "9001") int villain;

    public void init(@Observes StartupEvent ev, Vertx vertx) {
        ConsulClient client = ConsulClient.create(vertx, new ConsulClientOptions().setHost(host).setPort(port));

        client.registerServiceAndAwait(
                new ServiceOptions().setPort(hero).setAddress("localhost").setName("my-rest-service").setId("hero"));
        client.registerServiceAndAwait(
                new ServiceOptions().setPort(villain).setAddress("localhost").setName("my-rest-service").setId("villain"));

    }

}

Delegate the reactive REST client to Stork

The hero and villain services are normal reactive RESTful services that can be accessed directly through their exposed APIs. You need to delegate those services to Stork for service discovery, selection, and invocation.

Create a new MyRestClient.java interface in the src/main/java directory. Then add the following code:

@RegisterRestClient(baseUri = "stork://my-rest-service")
public interface MyRestClient {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    String get();
}

The baseUri starting with stork:// enables Stork to discover the services and select one according to the configured load-balancing strategy. Next, modify the existing resource file or create a new one (MyRestClientResource) that injects the REST client (MyRestClient) and exposes the /api endpoint, as seen below:

@Path("/api")
public class MyRestClientResource {
   
    @RestClient MyRestClient myRestClient;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String invoke() {
        return myRestClient.get();
    }

}

Before you run the application, configure Stork to use the Consul server in the application.properties as shown below:

consul.host=localhost
consul.port=8500

stork.my-rest-service.service-discovery=consul
stork.my-rest-service.service-discovery.consul-host=localhost
stork.my-rest-service.service-discovery.consul-port=8500
stork.my-rest-service.load-balancer=round-robin

Test your application

You have several ways to run a local Consul server. For this example, run the server in a container, which is simpler than installing Consul locally or relying on an external server.

$ docker run --rm --name consul -p 8500:8500 -p 8501:8501 consul:1.7 agent -dev -ui -client=0.0.0.0 -bind=0.0.0.0 --https-port=8501

Run your Quarkus application using Dev mode:

$ cd quarkus-stork-example
$ quarkus dev

The output looks like this:

...
INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, jaxrs-client-reactive, rest-client-reactive, resteasy-reactive, smallrye-context-propagation, vertx]

--
Tests paused
Press [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>

Access the RESTful API (/api) to see responses from the available services, selected by the round-robin load-balancing strategy. Execute the following curl command in your local terminal:

$ while true; do curl localhost:8080/api ; echo ''; sleep 1; done

The output should look like this:

Super Villain!
Super Hero!
Super Villain!
Super Hero!
Super Villain!
...

Wrap up

You learned how Quarkus enables developers to integrate client-side load balancing using Stork and Consul in reactive Java applications. Developers also get a better experience through live coding while building reactive applications in Quarkus. For more information about Quarkus, visit the Quarkus guides and practices.


How I use the Git for-each-ref command for DevOps

opensource.com - Mon, 04/04/2022 - 15:00
Evan "Hippy" Slatis - Mon, 04/04/2022 - 03:00

For most of today's developers, using Git is akin to breathing: you can't live without it. Beyond version control, Git's use has even expanded in recent years into GitOps, or managing and versioning configurations through Git. What many users don't realize is that Git tracks not only file changes for each commit but also a lot of metadata around commits and branches. Your DevOps processes can leverage this data to automate IT operations using software development best practices, such as CI/CD.


In my case, I use an automated process (DevOps) whereby a new branch is created every time I promote an image into a downstream CI/CD environment (namespace) in Kubernetes (here is a shameless plug of my Opensource.com article describing the process). This allows me to modify the deployment descriptors for a particular deployment in a downstream CI/CD environment independent of the other environments and enables me to version those changes (GitOps).

I will discuss a typical scenario where a breaking bug is discovered in QA, and no one is sure which build introduced the bug. I can't and don't want to rely on image meta-data to find the branch in Git that holds the proper deployment descriptors for several reasons, especially considering I may need to search one local repository or multiple remote images. So how can I easily leverage the information in a Git repository to find what I am looking for?

Use the for-each-ref command

This scenario is where the for-each-ref command is of some real use. It allows me to search all my Git repository's branches filtered by the naming convention I use (a good reason to enforce naming conventions when creating branches) and returns the most recently modified branches in descending sort order. For example:

$ git clone git@github.com:elcicd/Test-CICD1.git
$ cd Test-CICD1
$ git for-each-ref --format='%(refname:short) (%(committerdate))' \
                   --sort='-committerdate' \
                   'refs/remotes/**/deployment-qa-*'
origin/deployment-qa-c6e94a5 (Wed May 12 19:40:46 2021 -0500)
origin/deployment-qa-b70b438 (Fri Apr 23 15:42:30 2021 -0500)
origin/deployment-qa-347fc1d (Thu Apr 15 17:11:25 2021 -0500)
origin/deployment-qa-c1df9dd (Wed Apr 7 11:10:32 2021 -0500)
origin/deployment-qa-260f8f1 (Tue Apr 6 15:50:05 2021 -0500)

The commands above clone a repository I often use to test Kubernetes deployments. I then use git for-each-ref to search the branches by the date of the last commit, restrict the search to the branches that match the deployment branch naming convention for the QA environment, and return the most recent five. These roughly (i.e., not necessarily, but close enough) correspond to the last five versions of the component/microservice I want to redeploy.

deployment-qa-* is based on the naming convention:

--

The information returned can be used by developers or QA personnel when running the CI/CD redeployment pipeline to decide what version to roll back/forward to in a Kubernetes namespace and thus eventually return to a known good state. This process narrows down when and what introduced the breaking bug in the contrived scenario.

While the naming convention and scenario above are particular to needs and automated CI/CD processes, there are other, more generally useful ways to use for-each-ref. Many organizations have branch naming conventions similar to the following:

--

The ID value refers to the ID describing the feature or bug in a project management system like Rally or Jira; e.g.:

v1.23-feature-12345

This ID allows users to easily and quickly gain added visibility into the broader development history of the repository and project (using refs/remotes/**/v1.23-feature-*), depending on the development process and branch naming convention policies. The process works on tags, too, so listing the latest pre-prod, prod, or other specific versions can be done almost as easily (note that not all tags are pulled by default).
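As a sketch of the tag case, the commands below build a throwaway repository, tag it, and list matching tags newest first; the prod-* tag names are hypothetical, so adjust the pattern to your own tagging convention:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first release"
git tag prod-1.0
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "second release"
git tag prod-1.1

# List the most recent prod-* tags, newest first by creation date
git for-each-ref --format='%(refname:short) (%(creatordate))' \
                 --sort='-creatordate' --count=5 \
                 'refs/tags/prod-*'
```

Using %(creatordate) rather than %(committerdate) makes the same sort work for both lightweight and annotated tags.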

Wrap up

These are only narrow examples of using the for-each-ref command. From authors to commit messages, the official documentation provides insight into the many details that can be searched, filtered, and reported on.
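For instance, the same command can report per-branch commit metadata; the %(authorname) and %(subject) placeholders below come from the for-each-ref documentation, and the repository and branch names are hypothetical:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name="Alice" -c user.email=alice@example.com \
    commit -q --allow-empty -m "initial commit"
git branch feature-123

# One line per local branch: name, last commit author, and subject
git for-each-ref --format='%(refname:short) | %(authorname) | %(subject)' \
                 'refs/heads/*'
```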


Top 8 YUM/DNF ThirdParty Repositories for RHEL-Based Linux

Tecmint - Mon, 04/04/2022 - 11:56
The post Top 8 YUM/DNF ThirdParty Repositories for RHEL-Based Linux first appeared on Tecmint: Linux Howtos, Tutorials & Guides .

YUM (Yellowdog Updater Modified) is an open-source, widely used command-line and graphical-based package management tool for RPM (RedHat Package Manager) based Linux systems, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS,


Linux 5.18-rc1 Released - Many Line Additions Due To Big Chunks From AMD & Intel

Phoronix - Mon, 04/04/2022 - 06:34
Linus Torvalds just released Linux 5.18-rc1 to cap off the two week merge window for Linux 5.18 as the next major version of the Linux kernel...

Linux 5.18 To Try Again For x86/x86_64 "WERROR" Default

Phoronix - Sun, 04/03/2022 - 19:46
The Linux 5.18 merge window is ending today while sent in this morning were a batch of "x86/urgent" updates that include enabling the CONFIG_WERROR knob by default for Linux x86/x86_64 default configuration "defconfig" kernel builds...

Qt 6.3 To Boast Improved Wayland Integration, Easily Allows Custom Shell Extensions

Phoronix - Sun, 04/03/2022 - 18:54
Qt 6.3 is expected for release in the coming weeks and with it comes enhanced Wayland support along with the ability for developers to easily create custom shell extensions...

Uutils 0.0.13 Released For GNU Coreutils Replacement In Rust

Phoronix - Sun, 04/03/2022 - 18:16
Coming together over the past year has been uutils as a Rust-based Coreutils implementation to replace the long-used GNU Components. Since last year Uutils has been good enough to yield a working Debian Linux system at least for the basics while out this weekend is a new version of uutils...
