At Deloitte Platform Engineering, we have been building containerised platforms for a few years now. Red Hat OpenShift Container Platform has been our weapon of choice in many of these instances, complementing a solid Kubernetes foundation.
A significant part of this work also involves building and deploying microservices to these platforms. For system integration-heavy use cases, we have found that MuleSoft's Anypoint runtime adds mature capabilities to our arsenal.
It was an obvious progression for us to run and operate Anypoint applications on OpenShift. In this series of blog posts, we will discuss some of the lessons we have learnt over the years and across multiple projects as we embarked on this endeavour.
First, we need to discuss the anatomy of a Mule application. We will focus on Mule 3.x and 4.x enterprise runtimes for this exercise.
Mule uses a classic application server architecture, where application packages are deployed to the server. If you have ever used Tomcat or JBoss Application Server, you know what I am talking about.
When you start a Mule runtime, you are actually starting a Tanuki wrapper process, which in turn initialises the Mule Container. The Mule Container manages Core Extensions, which provide functionality beyond what the base Mule Server is capable of.
Core Extensions include, for example, the API Gateway capabilities we will touch on later in this post.
There are other actors involved in a Mule runtime startup, such as the Mule Context and Domains, but they are not relevant to our OpenShift deployment discussion.
This architecture makes the Mule runtime a natural fit for deployment to, say, a virtual machine. Our target, however, is a containerised platform: OpenShift.
When containerising applications, there are some tenets that should be followed to maximise the benefit of such deployment models:
With these in mind, some constraints emerge from the current Mule runtime architecture:
One thing we all agreed on early was that each container should run only a single Mule application. If this were not the case, there would be little benefit to containerising, and we should instead explore other (non-containerised) deployment options.
Next, we needed to consider how we would go about making Mule more container-friendly. Considering the challenges above, three deployment models presented themselves.
Probably the closest approach to running Mule runtime outside a container is to create a container image that consists of the runtime as-is and a single Mule application.
Advantages:
Disadvantages:
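As a concrete illustration of this first model, here is a minimal Dockerfile sketch. The base image, runtime version, paths and application name are all illustrative assumptions, not a prescribed layout.

```dockerfile
# Sketch only: image tag, MULE_HOME path and app name are assumptions.
FROM eclipse-temurin:8-jdk

# Bake the full standalone runtime into the image as-is.
COPY mule-enterprise-standalone /opt/mule

# Deploy exactly one application into the runtime's apps directory.
COPY target/my-mule-app.jar /opt/mule/apps/my-mule-app.jar

# The stock startup script boots the Tanuki wrapper, which in turn
# starts the Mule Container; the orchestrator only sees the wrapper
# process, not the Mule Container itself.
ENTRYPOINT ["/opt/mule/bin/mule"]
```

Note that the orchestrator's process supervision stops at the Tanuki wrapper here, which is one of the drawbacks discussed above.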
We can overcome one of the disadvantages of the previous approach by removing the Tanuki wrapper and running the Mule Container directly.
To achieve this, the container executes a Java process directly. Here are the configurations we used with this option:
This approach shares the advantages and disadvantages of the previous one, with the obvious exception that the container orchestrator can now at least manage the Mule Container process directly.
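A sketch of this "unwrapped" model might look like the following. The launcher class shown is the Mule 4 container entry point (Mule 3 uses org.mule.module.launcher.MuleContainer); the classpath, system properties and paths are illustrative assumptions.

```dockerfile
# Sketch only: paths, classpath and launcher class are assumptions;
# adjust to the runtime version in use.
FROM eclipse-temurin:8-jdk

COPY mule-enterprise-standalone /opt/mule
COPY target/my-mule-app.jar /opt/mule/apps/my-mule-app.jar

ENV MULE_HOME=/opt/mule

# Skip the Tanuki wrapper: run the Mule Container as the container's
# main process, so the orchestrator manages the JVM directly.
ENTRYPOINT ["java", \
  "-cp", "/opt/mule/conf:/opt/mule/lib/boot/*", \
  "-Dmule.home=/opt/mule", \
  "-Dmule.base=/opt/mule", \
  "org.mule.runtime.module.launcher.MuleContainer"]
```

Because the JVM is now PID 1, signals from the orchestrator (for example on pod termination) reach the Mule Container without the wrapper as an intermediary.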
For the final approach, we mused about what would happen if we removed the runtime altogether and created a container image that simply ran the application.
The org.mule.MuleServer class instantiates the Mule Context directly, so using it as the main class allowed us to make this a reality. As in the previous approach, the container was configured with the classpath and system properties, although we now needed to include mule-deploy.properties as well.
To make this container image truly "lite", we also used Maven to gather only the dependencies required by the application during the build, rather than baking the full standalone runtime into the image.
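The build described above could be sketched as a multi-stage Dockerfile along these lines. Image tags, directory names and the exact Maven invocation are assumptions; the only elements taken from the text are the use of dependency gathering at build time and org.mule.MuleServer as the main class.

```dockerfile
# Sketch only: a multi-stage build for the "lite" model.
FROM maven:3-eclipse-temurin-8 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
# Gather only the dependencies the application actually needs,
# instead of shipping the full standalone runtime.
RUN mvn -q package dependency:copy-dependencies -DoutputDirectory=target/lib

FROM eclipse-temurin:8-jre
WORKDIR /app
COPY --from=build /build/target/lib ./lib
COPY --from=build /build/target/classes ./classes

# mule-deploy.properties must be on the classpath so MuleServer can
# locate the application's configuration files.
ENTRYPOINT ["java", \
  "-cp", "/app/classes:/app/lib/*", \
  "org.mule.MuleServer"]
```

The resulting image carries only the application and its direct dependencies, which is what earns this model its "lite" name.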
Advantages:
Disadvantages:
Above we introduced a sliding scale of deployment models for running containerised Mule applications. Each approach has its own merits and downsides, so the choice really comes down to individual project requirements.
For example, if the API Gateway is deployed as a separate container or another product altogether, then Mule:Lite could be an option. However, if the application architecture requires co-location of API Gateway capabilities with the applications themselves, then Mule:Unwrapped may prove a happy medium.
Having read this far, you may notice that very little of this content is OpenShift-specific. Indeed, much of what we discussed here applies equally to plain Kubernetes, or to another container orchestrator entirely.
In Part 2, we will turn this around and discuss the technical details of how we applied these deployment models to run Mule on OpenShift.