High Availability Drools Stateless Service in Openshift Origin

Hi everyone! In this blog post I wanted to cover a simple example showing how easy it is to scale our Drools Stateless services using Openshift 3 (Docker and Kubernetes). I will show how we can scale our service by provisioning new instances on demand, and how these instances are load balanced by Kubernetes using a round robin strategy.

Our Drools Stateless Service

First of all we need a stateless Kie Session to play around with. In this simple example I’ve created a food recommendation service to demonstrate what kind of scenarios you can build using this approach. All the source code can be found in the Drools Workshop repository hosted on GitHub: https://github.com/Salaboy/drools-workshop/tree/master/drools-openshift-example

In this project you will find 4 modules:

  • drools-food-model: our business model, including the domain classes such as Ingredient, Sandwich, Salad, etc.
  • drools-food-kjar: our business knowledge; here we have our set of rules describing how the food recommendations will be made.
  • drools-food-services: using WildFly Swarm, I’m exposing a domain-specific service that encapsulates the rule engine. Here a set of REST services is exposed so our clients can interact with it.
  • drools-controller: by using the Kubernetes Java API we can programmatically provision new instances of our Food Recommendation Service on demand in the Openshift environment.

Our unit of work will be the Drools-Food-Services project, which exposes the REST endpoints to interact with our stateless sessions.

You can take a look at the service endpoint which is quite simple: https://github.com/Salaboy/drools-workshop/blob/master/drools-openshift-example/drools-food-services/src/main/java/org/drools/workshop/food/endpoint/api/FoodRecommendationService.java

Also notice that there is another Service that gives us very basic information about where our Service is running: https://github.com/Salaboy/drools-workshop/blob/master/drools-openshift-example/drools-food-services/src/main/java/org/drools/workshop/food/endpoint/api/NodeStatsService.java

Later on, we will call this service to find out exactly which instance of the service is answering our clients.

The rules for this example are simple and don’t do much. If you are looking to learn Drools, I recommend creating more meaningful rules and sharing them with me so we can improve the example 😉 You can take a look at the rules here:

https://github.com/Salaboy/drools-workshop/blob/master/drools-openshift-example/drools-food-kjar/src/main/resources/rules.drl

As you might expect: Sandwiches for boys and Salads for girls 🙂
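To give a flavor of what such a rule looks like, here is a minimal DRL sketch. The Person and SandwichRecommendation fact types below are hypothetical illustrations and may not match the exact classes in drools-food-model:

```drl
rule "Recommend a Sandwich"
when
    // hypothetical fact type: the real domain classes live in drools-food-model
    $p : Person( gender == "male" )
then
    // insert a new fact so downstream rules (or the caller) can pick it up
    insert( new SandwichRecommendation( $p ) );
end
```

Because the session is stateless, each request fires the rules against just the facts inserted for that request and then discards everything.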

One last important thing for you to see is how the rules are picked up by the service endpoint. I’m using the Drools CDI extension to @Inject a KieContainer, which is resolved using the KIE-CI module, explained in some of my previous posts.

https://github.com/Salaboy/drools-workshop/blob/master/drools-openshift-example/drools-food-services/src/main/java/org/drools/workshop/food/endpoint/impl/FoodRecommendationServiceImpl.java#L33

We will bundle this project into a Docker image that can be started as many times as we want/need. If you have a Docker client installed in your local environment, you can start this food recommendation service by pulling the salaboy/drools-food-services image, which is hosted at hub.docker.com/salaboy

By starting the Docker image, without even knowing what is running inside, we immediately notice the following advantages:

  • We don’t need to install Java or any other tool besides Docker
  • We don’t need to configure anything to run our REST service
  • We don’t even need to build anything locally, because the image is hosted on hub.docker.com
  • We can run it on top of any operating system

At the same time, we notice the following disadvantages:

  • We need to know which IP and port our service is exposed on by Docker
  • If we run more than one instance of the image, we need to keep track of all the IPs and ports and notify all our clients about them
  • There is no built-in way to load balance between different instances of the same Docker image

To solve these disadvantages, Openshift, and more specifically Kubernetes, comes to our rescue!

Provisioning our Service inside Openshift

As I mentioned before, if we just start creating new Docker image instances of our service, we soon find out that our clients will need to know how many instances we have running and how to contact each of them. This is obviously no good, and for that reason we need an intermediate layer to deal with this problem. Kubernetes provides us with this layer of abstraction and provisioning, which allows us to create multiple instances of our Pods (an abstraction on top of the Docker image) and configure Replication Controllers and Services for them.

The concept of a Replication Controller provides a way to define how many instances of our service should be running at a given time. Replication Controllers are in charge of guaranteeing that if we need at least 3 instances running, those instances are running all the time. If one of these instances dies, the Replication Controller will automatically spawn a new one for us.
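The guarantee described above boils down to a reconciliation loop: compare the desired replica count against what is actually running, and spawn or remove Pods to converge. The following is a conceptual toy in Java, not real Kubernetes code, just to make the idea concrete:

```java
import java.util.List;

// Conceptual sketch of a Replication Controller's core decision:
// how many pods to spawn (positive) or remove (negative) to reach
// the desired replica count.
public class ReplicationControllerSketch {

    public static int reconcile(int desiredReplicas, List<String> runningPods) {
        return desiredReplicas - runningPods.size();
    }

    public static void main(String[] args) {
        // We asked for 3 replicas but one pod died: the controller spawns 1 more.
        List<String> running = List.of("pod-a", "pod-b");
        System.out.println(reconcile(3, running)); // prints 1
    }
}
```

The real controller runs this comparison continuously, which is why a killed Pod reappears without any manual intervention.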

Services in Kubernetes solve the problem of having to know the details of each and every Docker instance. Services allow us to provide a facade that our clients can use to interact with our Pod instances. The Service layer also allows us to define a strategy (called session affinity) for how to load balance across the Pod instances behind the Service. There are two built-in strategies: ClientIP and Round Robin.
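The two strategies can be sketched in a few lines of Java. This is purely illustrative of the routing behavior, not Kubernetes internals; the pod names are made up:

```java
import java.util.List;

// Illustrative sketch of the two session-affinity strategies a Service
// can use to pick a backing Pod for each incoming request.
public class ServiceBalancerSketch {
    private final List<String> pods;
    private int next = 0;

    public ServiceBalancerSketch(List<String> pods) {
        this.pods = pods;
    }

    // Round Robin: each request goes to the next pod in turn.
    public String roundRobin() {
        String pod = pods.get(next);
        next = (next + 1) % pods.size();
        return pod;
    }

    // ClientIP affinity: the same client IP always lands on the same pod.
    public String clientIp(String clientIp) {
        return pods.get(Math.floorMod(clientIp.hashCode(), pods.size()));
    }

    public static void main(String[] args) {
        ServiceBalancerSketch lb =
                new ServiceBalancerSketch(List.of("pod-1", "pod-2", "pod-3"));
        System.out.println(lb.roundRobin()); // pod-1
        System.out.println(lb.roundRobin()); // pod-2
        System.out.println(lb.roundRobin()); // pod-3
        System.out.println(lb.roundRobin()); // back to pod-1
    }
}
```

With Round Robin, repeated requests cycle through the Pods, which is exactly the effect we will observe later with repeated wget calls against the Service port.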

So we need two things now: an installation of Openshift Origin (v3), and our Drools Controller project, which will interact with the Kubernetes REST endpoints to provision our Pods, Replication Controllers and Services.

For the Openshift installation, I recommend following the steps described here: https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc

On my laptop, I’m running the Vagrant option (the second option) described in the previous link.

Finally, you can find an ultra-simple example of how to use the Kubernetes API to provision our drools-food-services into Openshift.

Notice that we are defining everything at runtime, which is really cool, because we can start from scratch or modify existing Services, Replication Controllers and Pods.

You can take a look at the drools-food-controller project, which shows how we can create a Replication Controller that points to our Docker image and defines 1 replica (one replica is created by default).

https://github.com/Salaboy/drools-workshop/blob/master/drools-openshift-example/drools-food-controller/src/main/java/org/drools/workshop/drools/food/controller/Main.java

If you log into the Openshift console, you will be able to see the newly created Service with the Replication Controller and just one replica of our Pod. By using the UI (or the APIs, changing the Main class) we can provision more replicas, as many as we need. The Kubernetes Service will make sure to load balance between the different Pod instances.

Voila! Our Services Replicas are up and running!

Now, if you access the NodeStat service by doing a GET to the mapped Kubernetes Service port, you will get back the Pod that is answering that request. If you execute the request multiple times, you should be able to see the Round Robin strategy kicking in.

wget http://localhost:9999/api/node
{"node":"drools-controller-8tmby","version":"version 1"}

wget http://localhost:9999/api/node
{"node":"drools-controller-k9gym","version":"version 1"}

wget http://localhost:9999/api/node
{"node":"drools-controller-pzqlu","version":"version 1"}

wget http://localhost:9999/api/node
{"node":"drools-controller-8tmby","version":"version 1"}

In the same way, you can interact with the Stateless Sessions in each of these 3 Pods. In that case, you don’t really need to know which Pod is answering your request; you just need to get the job done by any of them.

 

Update (4/4/2016):

I’ve switched to using the Openshift All-In-One VM from here: https://www.openshift.org/vm/, which definitely reduces the amount of configuration needed to get started. First of all, it reduces the security configuration required to work with it compared to installing from the master repository. Secondly, the previous link shows how to keep the installation updated. Finally, the only trick I need to do in order to communicate with my service instance is the following:

  • Here is the console: https://10.2.2.2:8443
  • You need to download the oc client tools from https://www.openshift.org/vm/ and then do oc login
  • Then in order to expose the drools-food-service to your host machine you need to:
    • vagrant ssh
    • oc get services (look for something like this) 

      droolsservice     172.30.232.92    <none>        80/TCP                    app=drools                59m

    • exit (to get out of the vagrant VM)
    • vagrant ssh -- -L 9999:172.30.232.92:80, where you take the IP from the oc get services output; this will expose on port 9999 the service that is on port 80 of your VM

Summing up

By leveraging the Openshift Origin infrastructure, we manage to simplify our architecture by not reinventing mechanisms that already exist in tools such as Kubernetes & Docker. In following posts I will be writing about some other nice advantages of using this infrastructure, such as rolling upgrades of our service versions, and adding security and API Management to the mix.

If you have questions about this approach please share your thoughts.


35 thoughts on “High Availability Drools Stateless Service in Openshift Origin”

  1. I am trying to use the Kubernetes API to provision the drools-food-services into Openshift and am getting this exception:

    Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/replicationcontrollers/drools-controller. Cause: kubernetes.default.svc: unknown error
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestException(OperationSupport.java:283)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:207)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:198)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:506)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:114)
    at org.drools.workshop.drools.food.controller.Main.main(Main.java:39)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
    Caused by: java.net.UnknownHostException: kubernetes.default.svc: unknown error
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
    at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
    at java.net.InetAddress.getAllByName(InetAddress.java:1192)
    at java.net.InetAddress.getAllByName(InetAddress.java:1126)
    at com.squareup.okhttp.Dns$1.lookup(Dns.java:39)
    at com.squareup.okhttp.internal.http.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:175)
    at com.squareup.okhttp.internal.http.RouteSelector.nextProxy(RouteSelector.java:141)
    at com.squareup.okhttp.internal.http.RouteSelector.next(RouteSelector.java:83)
    at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:174)
    at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
    at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
    at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
    at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
    at com.squareup.okhttp.Call.getResponse(Call.java:286)
    at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
    at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
    at com.squareup.okhttp.Call.execute(Call.java:80)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:205)
    … 9 more

    What am I missing here…


  2. Adding the following to the client:

    String master = "https://localhost:8443/";
    Config config = new ConfigBuilder().withMasterUrl(master).build();
    DefaultKubernetesClient kube = new DefaultKubernetesClient(config);

    Now I have an SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target


    1. In detail:

      {
        "kind": "Status",
        "apiVersion": "v1",
        "metadata": {},
        "status": "Failure",
        "message": "User \"system:anonymous\" cannot get replicationcontrollers in project \"default\"",
        "reason": "Forbidden",
        "details": {
          "name": "drools-controller",
          "kind": "replicationcontrollers"
        },
        "code": 403
      }


      1. Ok, perfect: you are hitting a security issue connecting to Openshift from your host machine. You will need to do the following:
        1) download the openshift client (oc) on your host machine and connect to openshift by doing oc login
        2) If that works, then (in a new terminal) use vagrant ssh to log into the vagrant image that is running openshift, and run this:
        oadm policy add-cluster-role-to-user cluster-admin admin --config=openshift.local.config/master/admin.kubeconfig

        As far as I know, all these configurations are needed mostly because of the security mechanisms in openshift and kubernetes. In my todo list I have an entry to test this openshift image (https://www.openshift.org/vm/), which seems to have all these configurations already taken care of for us.


  3. OC Client lets me bypass the certificate check:

    Server [https://localhost:8443]:
    The server uses a certificate signed by an unknown authority.
    You can bypass the certificate check, but any data you send to the server could be intercepted by others.
    Use insecure connections? (y/n): y

    I had to do:
    chmod 666 openshift.local.config/master/admin.kubeconfig
    Before:
    oadm policy add-cluster-role-to-user cluster-admin admin --config=openshift.local.config/master/admin.kubeconfig

    But now: Message: Forbidden! User null doesn’t have permission..


  4. Added:

    Config config = new ConfigBuilder().withMasterUrl(master).withPassword("admin").withUsername("admin").build();

    Same result


    1. Typo:

      Config config = new ConfigBuilder().withMasterUrl(master).withPassword("cluster-admin").withUsername("cluster-admin").build();


  5. Just try to send a user and password with the Kubernetes client config and see if they end up as NULL in the final request.


      That’s because kubernetes takes the environment variables to auto-configure the client. By logging into openshift with oc login on your host machine you are giving it access to that configuration, so that’s probably why your API configuration is not taken into account.


      1. I did a completely new install of my system using the new developer RHEL 7.2 edition, and now I can deploy your service on the 1.1.4 All-In-One Virtual Machine after an oc login from the host.
        But I cannot do a wget to see the round robin effect. In the Openshift overview there is a Kubernetes service registered with no pods. Is this correct, and how do I get the mapped Kubernetes Service port?


    1. Yeah, that would be great. We are open to contributions, so feel free to get in touch if you can contribute those extensions for different cloud providers


  6. @Maurice,
    can you describe the steps you are executing? Because I did a fresh installation here and I didn’t hit any problem with the openshift VM. When you open the console, what do you see exactly? When you do vagrant ssh and then oc get services, what do you get?


    1. [mbetzel@localhost openshift]$ vagrant ssh
      /home/mbetzel/.vagrant.d/boxes/thesteve0-VAGRANTSLASH-openshift-origin/1.1.4/virtualbox/include/_Vagrantfile:5: warning: already initialized constant VAGRANTFILE_API_VERSION
      /home/mbetzel/Development/openshift/Vagrantfile:5: warning: previous definition of VAGRANTFILE_API_VERSION was here
      Last login: Mon Apr 4 15:05:42 2016 from 10.0.2.2
      [vagrant@origin ~]$ oc login
      Authentication required for https://10.2.2.2:8443 (openshift)
      Username: admin
      Password:
      Login successful.

      You have access to the following projects and can switch between them with ‘oc project ‘:

      * default (current)
      * openshift
      * openshift-infra
      * sample

      Using project “default”.
      [vagrant@origin ~]$ oc get .
      ./ .bash_history .bash_profile .kube/ .ssh/
      ../ .bash_logout .bashrc .pki/
      [vagrant@origin ~]$ oc get services
      NAME CLUSTER-IP EXTERNAL-IP PORT(S) SELECTOR AGE
      docker-registry 172.30.210.155 5000/TCP docker-registry=default 20d
      droolsservice 172.30.130.7 80/TCP app=drools 1h
      kubernetes 172.30.0.1 443/TCP,53/UDP,53/TCP 20d
      router 172.30.168.56 80/TCP,443/TCP,1936/TCP router=router 20d


    1. That should be it, to be honest; I don’t see any other issue happening. Creating other services, pods and RCs should be fairly easy, as is changing the Docker images containing the services. 🙂 Enjoy
      PS: I’m working on expanding the example a little bit further to have a stateful session as well


    1. Most of the time you have a set of public-facing services that need to be exposed, but internally you can connect pods directly by using system (env) variables to link the pods together.


  7. A lot of reading to do. I will expand this example to take the whole KIE Server WAR with my Thrift Extension. Thanks for your help.

