The ability to control how Mule creates and manages Spring application contexts is a very useful feature. On a recent client engagement, I had to perform an identical integration operation against a variable number of sources (different for each environment).
In Mule, this can be accomplished by:
- Adding duplicate, system-specific flows at build time, or
- Creating a template flow, parameterising its configuration, and creating an application context instance for each configuration.
I prefer option 2 as it is DRY and lets you easily control the number of flows from configuration, not code.
As a use case, let us assume we have business partners placing files in S3 buckets, where we pick them up and move them to a separate S3 location. Let us now say that we have a variable number of partners and therefore a variable number of source S3 buckets.
On-boarding new partners would mean that source and destination buckets would differ, and probably the polling frequency as well, but the rest should remain the same. This can be achieved by:
- adding a partner-specific configuration, and then
- programmatically creating an application context from a template flow.
Building our application
A template flow is required so we can parameterise it and change endpoints. Let's use the following flow and parameterise the source/destination buckets, destination folder, and polling frequency.
<flow name="template-flow">
    <poll doc:name="Poll">
        <fixed-frequency-scheduler frequency="20" timeUnit="SECONDS"/>
        <logger message="Flow Start" level="INFO" doc:name="Log Start"/>
    </poll>
    <s3:list-objects config-ref="Amazon_S3_GlobalConnector" bucketName="test-partner-1" doc:name="List objects"/>
    <foreach collection="#[payload]" doc:name="For Each">
        <enricher doc:name="Message Enricher" target="#[flowVars['copyObjectResult']]">
            <s3:copy-object config-ref="Amazon_S3_GlobalConnector" destinationBucketName="test-system-1" destinationKey="#[payload.getKey()]" sourceBucketName="#[payload.getBucketName()]" sourceKey="#[payload.getKey()]" doc:name="Copy object"/>
        </enricher>
        <s3:delete-object config-ref="Amazon_S3_GlobalConnector" bucketName="test-partner-1" key="#[payload.getKey()]" doc:name="Delete object"/>
    </foreach>
    <logger message="Flow End" level="INFO" doc:name="Log End"/>
</flow>
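To turn this into an actual template, the partner-specific values can be swapped for tokens that get substituted at start-up. The {{...}} token syntax is an arbitrary choice, and prefixing destinationKey with a destination-folder token is one plausible way to use that property; neither is a Mule convention:

<flow name="{{flow-name}}">
    <poll doc:name="Poll">
        <fixed-frequency-scheduler frequency="{{polling-frequency}}" timeUnit="SECONDS"/>
        <logger message="Flow Start" level="INFO" doc:name="Log Start"/>
    </poll>
    <s3:list-objects config-ref="Amazon_S3_GlobalConnector" bucketName="{{source-bucket}}" doc:name="List objects"/>
    <foreach collection="#[payload]" doc:name="For Each">
        <enricher doc:name="Message Enricher" target="#[flowVars['copyObjectResult']]">
            <s3:copy-object config-ref="Amazon_S3_GlobalConnector" destinationBucketName="{{destination-bucket}}" destinationKey="{{destination-folder}}#[payload.getKey()]" sourceBucketName="#[payload.getBucketName()]" sourceKey="#[payload.getKey()]" doc:name="Copy object"/>
        </enricher>
        <s3:delete-object config-ref="Amazon_S3_GlobalConnector" bucketName="{{source-bucket}}" key="#[payload.getKey()]" doc:name="Delete object"/>
    </foreach>
    <logger message="Flow End" level="INFO" doc:name="Log End"/>
</flow>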
Tapping into the Mule application lifecycle phases gives us the opportunity to read a properties file and start an application context with partner-specific details. A Java class that implements org.mule.api.lifecycle.Initialisable and org.mule.api.lifecycle.Disposable lets us do just that.
- During the initialise phase, we substitute the partner-specific details into our template flow and start an application context.
- In the dispose phase, we stop and dispose our application context(s).
public class RouteFactory implements Initialisable, Disposable {

    public void initialise() throws InitialisationException {
        // read the template flow
        // replace endpoints with properties from the configuration file
        // create and start an application context
    }

    public void dispose() {
        // stop and dispose the application context(s)
    }
}
And finally, a main flow that gives us access to the properties file and kick-starts our application.
<context:property-placeholder location="mule-app.properties"/>
<flow name="main-flow">
    <component doc:name="RouteFactory">
        <singleton-object class="demo.RouteFactory">
            <property key="partnerCount" value="${partner-count}"/>
        </singleton-object>
    </component>
</flow>
Depending on the number of partners and parameters, the properties file may end up looking like the following:
partner-count = 2

partner-1.flow-name = partner-one-flow
partner-1.polling-frequency = 10
partner-1.source-bucket = test-partner-1
partner-1.destination-folder = partner1/
partner-1.destination-bucket = test-system-1

partner-2.flow-name = partner-two-flow
partner-2.polling-frequency = 20
partner-2.source-bucket = test-partner-2
partner-2.destination-folder =
partner-2.destination-bucket = test-system-2
RouteFactory.java
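A minimal sketch of what such a factory could look like, assuming the tokenised template above sits on the classpath as template-flow.xml and that each partner's flow is built as a Spring GenericXmlApplicationContext. The property names match the configuration file, but the class structure and helper details are illustrative, and the wiring needed for the new context to resolve the Mule namespaces against the running MuleContext is elided:

package demo;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.commons.io.IOUtils;
import org.mule.api.lifecycle.Disposable;
import org.mule.api.lifecycle.Initialisable;
import org.mule.api.lifecycle.InitialisationException;
import org.springframework.context.support.GenericXmlApplicationContext;
import org.springframework.core.io.ByteArrayResource;

public class RouteFactory implements Initialisable, Disposable {

    private int partnerCount; // injected via <property key="partnerCount" .../>
    private final List<GenericXmlApplicationContext> contexts = new ArrayList<GenericXmlApplicationContext>();

    public void setPartnerCount(int partnerCount) {
        this.partnerCount = partnerCount;
    }

    public void initialise() throws InitialisationException {
        try {
            // read the template flow and the partner configuration from the classpath
            String template = IOUtils.toString(getClass().getResourceAsStream("/template-flow.xml"), "UTF-8");
            Properties config = new Properties();
            config.load(getClass().getResourceAsStream("/mule-app.properties"));

            for (int i = 1; i <= partnerCount; i++) {
                String prefix = "partner-" + i + ".";
                // replace the {{...}} tokens with this partner's values
                String flowXml = template
                        .replace("{{flow-name}}", config.getProperty(prefix + "flow-name"))
                        .replace("{{polling-frequency}}", config.getProperty(prefix + "polling-frequency"))
                        .replace("{{source-bucket}}", config.getProperty(prefix + "source-bucket"))
                        .replace("{{destination-folder}}", config.getProperty(prefix + "destination-folder"))
                        .replace("{{destination-bucket}}", config.getProperty(prefix + "destination-bucket"));

                // create and start an application context for this partner's flow;
                // for the flow to actually run, the context must also be able to
                // resolve the Mule namespaces against the running MuleContext
                GenericXmlApplicationContext context = new GenericXmlApplicationContext();
                context.load(new ByteArrayResource(flowXml.getBytes("UTF-8")));
                context.refresh();
                contexts.add(context);
            }
        } catch (IOException e) {
            throw new InitialisationException(e, this);
        }
    }

    public void dispose() {
        // stop and dispose the application context(s)
        for (GenericXmlApplicationContext context : contexts) {
            context.close();
        }
        contexts.clear();
    }
}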
Conclusion
The example we just considered is rather simple. For the daring amongst us, we could even experiment with adding an HTTP API that creates and destroys flows on demand, without needing to restart the application.
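As a rough sketch of that idea (hypothetical: the routeFactory bean and a createPartnerFlow(Map) method on it are not part of the example above), a Mule 3 HTTP listener could hand request parameters straight to the factory:

<http:listener-config name="partner-api" host="0.0.0.0" port="8081"/>
<flow name="create-partner-flow">
    <http:listener config-ref="partner-api" path="/partners" allowedMethods="POST" doc:name="HTTP"/>
    <invoke object-ref="routeFactory" method="createPartnerFlow" methodArguments="#[message.inboundProperties['http.query.params']]" doc:name="Create partner flow"/>
    <set-payload value="partner flow created" doc:name="Set Payload"/>
</flow>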