

Mule on OpenShift: Part 1 - Deployment Models

Posted by Sohrab Hosseini on 28 February 2019

kubernetes, tech, mule, platform, openshift, container, anypoint


At Deloitte Platform Engineering, we have been building containerised platforms for a few years now. Red Hat OpenShift Container Platform has been our weapon of choice in many of these instances, complementing a solid Kubernetes foundation.

A significant part of this work also consists of building and deploying microservices to these platforms. When dealing with system integration-heavy use cases, we found MuleSoft's Anypoint runtime adds mature capabilities to our arsenal.

It was an obvious progression for us to run and operate Anypoint applications on OpenShift. In this series of blog posts, we will discuss some of the lessons we have learnt over the years, and across multiple projects, as we embarked on this endeavour.

Mule Architecture

First, we need to discuss the anatomy of a Mule application. We will focus on Mule 3.x and 4.x enterprise runtimes for this exercise.

Mule uses a classic application server architecture, where application packages are deployed to the server. If you have ever used Tomcat or JBoss Application Server, you know what I am talking about.

When you start a Mule runtime, you are actually initialising a Tanuki wrapper, which in turn initialises the Mule Container. The Mule Container manages Core Extensions, which provide functionality beyond what the Mule server is capable of on its own.

Here are some examples of Core Extensions:

  • Deployment Service manages the lifecycle of Mule applications and watches the deployment directory to trigger (un)deployments
  • API Gateway interacts with Anypoint SaaS platform and provides API Gateway capabilities, such as policies, consumer management, analytics, etc. to the application

There are other actors involved in a Mule runtime startup, such as the Mule Context and Domains, but they are not relevant to our OpenShift deployment discussion.


The Mule runtime architecture makes it a perfect fit for deployment to, say, a virtual machine; however, our target runtime is a containerised platform, i.e. OpenShift.

When containerising applications, there are some tenets that are followed to maximise the benefit of such deployment models:

  • A container should contain a single functional application
  • A container should run a single process
  • The application should already be baked into the container image
  • The containerised application's lifecycle should be managed by the container daemon and, subsequently, by the container orchestrator

With these tenets in mind, some constraints emerge from the current Mule runtime architecture:

  • Mule runtime is designed to run multiple applications
  • Mule runtime controls the lifecycle of the application
  • Tanuki wrapper controls the lifecycle of the runtime itself
  • Some features such as API Gateway are built into the runtime, not the applications

Deployment Models

One thing we all agreed on early was that each container should only run a single Mule application. If this were not the case, there would be little benefit to be gained from containerising and, at that point, we should explore other (non-containerised) deployment options.

Next, we needed to consider how we would go about making Mule more container-friendly. Considering the challenges above, three deployment models presented themselves.


Probably the closest approach to running the Mule runtime outside a container is to create a container image that consists of the runtime as-is and a single Mule application.


Pros:

  • Each container represents a single application
  • No modification to Mule runtime is required
  • The application is already baked into the image


Cons:

  • Tanuki wrapper is managing the runtime lifecycle, not the container orchestrator
  • Mule Container is managing the application lifecycle, not the container orchestrator
  • There is an overhead of running a whole Mule server just to run a single application
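As a concrete sketch of this model, the image build might look something like the following Dockerfile, generated here as a dry run. The base image, runtime version, and application name are all hypothetical; only the overall shape (full runtime plus a single baked-in application) is the point.

```shell
#!/bin/sh
# Sketch of a "runtime as-is" container image.
# Base image, runtime version, and app name are illustrative assumptions.
cat > Dockerfile <<'EOF'
FROM openjdk:8-jre

# Bake the full standalone runtime into the image, unmodified
COPY mule-enterprise-standalone-3.9.0 /opt/mule

# Bake exactly one application into the deployment directory;
# the Deployment Service will pick it up at startup
COPY target/my-app.zip /opt/mule/apps/

# Run the Tanuki wrapper in console (foreground) mode so the
# container has a foreground process to track
CMD ["/opt/mule/bin/mule"]
EOF
cat Dockerfile
```

Running the wrapper in console mode, rather than as a daemon, matters here: a container exits when its foreground process does, so a backgrounded runtime would kill the container immediately.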


Mule:Unwrapped

We can overcome one of the disadvantages of the previous approach by removing the Tanuki wrapper and running the Mule Container directly.

To achieve this, the container executes a Java process directly. Here are the configurations we used with this option:

  • Main class: org.mule.module.launcher.MuleContainer
  • Classpath, in order:
    • The directory where the Mule licence resides
    • One or more optional directories that contain your application configuration and secrets
    • mule/conf
    • mule/lib/boot/*
    • mule/lib/opt/*
    • mule/lib/mule/*
    • mule/lib/user/*
  • System properties: translated from the Tanuki wrapper configuration file

This approach bears similar advantages and disadvantages to the previous one, with the obvious exception that the container orchestrator is now at least able to manage the Mule Container directly.


Mule:Lite

For the final approach, we mused over what would happen if we removed the runtime altogether and created a container image that simply ran the application.

The org.mule.MuleServer class is responsible for instantiating the Mule Context directly, so using it as the main class allowed us to make this a reality. As in the previous approach, the container was configured with the classpath and system properties, although now the application itself needed to be included as well.

To make this container image truly "lite", we also used Maven to gather only the dependencies required by the application during the build process, rather than having the full standalone runtime baked into the image.
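The build and launch steps might look like the following dry-run sketch. The directory layout, config file name, and Maven output directory are assumptions for illustration, not prescriptions:

```shell
#!/bin/sh
# Sketch of a Mule:Lite build and launch. Directory layout and config
# file name are hypothetical.

# Build time: gather only the dependencies the application declares,
# rather than baking in the full standalone runtime
BUILD_CMD="mvn dependency:copy-dependencies -DoutputDirectory=target/lib"

# Run time: org.mule.MuleServer bootstraps the Mule Context directly,
# so no Tanuki wrapper or Mule Container is involved
RUN_CMD="java -cp target/lib/*:target/classes org.mule.MuleServer -config mule-config.xml"

# Echoed as a dry run; a real image would run the build during
# 'docker build' and 'exec' the run command as its entrypoint
echo "$BUILD_CMD"
echo "$RUN_CMD"
```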


Pros:

  • Mule application lifecycle is directly managed by the container orchestrator
  • No overhead of running the Mule runtime and its services like the deployment service (not particularly useful in a containerised model)
  • Minimal image where only the required libraries are present


Cons:

  • Features, such as API Gateway, that are built into the runtime and not the application, will not natively work


Above, we introduced a sliding scale of deployment models for running containerised Mule applications. Each approach has its own merits and downsides, so which one to choose really comes down to individual project requirements.

For example, if the API Gateway is deployed as a separate container, or another product is used altogether, then Mule:Lite could be an option. However, if the application architecture requires co-location of API Gateway capabilities with the applications themselves, then Mule:Unwrapped may prove a happy medium.

Having read this far, you may have noticed that very little of this content is OpenShift-specific. In actuality, much of what we discussed here can be applied to plain Kubernetes, or even an entirely different container orchestrator.

In Part 2, we will turn this around and discuss the technical details of how we applied these deployment models to run Mule on OpenShift.


If you like what you read, join our team as we seek to solve wicked problems within Complex Programs, Process Engineering, Integration, Cloud Platforms, DevOps & more!


Have a look at our open positions at Deloitte. You can search and see which ones we have in Cloud & Engineering.


Have more enquiries? Reach out to our Talent Team directly and they will be able to support you best.
