In the previous post of the series, I described how messages can be transmitted from a sender application to intended receiver applications through Messaging Channels, and how the Azure Integration Services can be leveraged to implement these Enterprise Integration Patterns. Messaging Endpoints are the application touch points that abstract the application internals and can be used to either receive/extract messages from a source application or send messages to a target application when building integration solutions. In this post, I describe the Messaging Endpoint patterns and how we can utilise the Azure Integration Services to implement them.
This post is a part of a series describing how to implement the Enterprise Integration Patterns using the Azure Integration Services:
- Introduction
- Message Construction
- Messaging Channels
- Messaging Endpoints (this)
- Message Routing
- Message Transformation
- Platform Management
The remaining posts will be published in the following weeks/months. Those patterns highlighted with an asterisk (*) are those that are not described in the original Enterprise Integration Patterns book, but that I suggest considering.
The patterns covered in this article are listed below.
- Application Adapter (*)
- Messaging Mapper
- Messaging Gateway
- Transactional Client
- Polling Consumer (Message Pull)
- Event-Driven Consumer (Message Push)
- Competing Consumers
- Throttled Consumer (*)
- Singleton Consumer (*)
- Selective Consumer
- Message Dispatcher
- Durable Subscriber
- Idempotent Receiver
- Stale Message (*)
- Service Window (*)
- Service Activator
Application Adapter (*)
In my previous post, we explored the Channel Adapter, which abstracts the complexities of a Messaging Channel so that applications can connect to it more easily. The Azure Integration Services offer Application Adapters (*), or connectors, which abstract applications from the integration solutions so that messages can be received from, pulled from, or delivered to different applications with no code required. This pattern is not described in the original Enterprise Integration Patterns book; however, it is common in modern integration platforms.
Implementation
Logic Apps provide a growing list of connectors that allow you to connect to many different applications without writing code. Some of these connectors allow applications to trigger Logic Apps workflows, others enable pulling messages from the application, and many allow you to deliver messages to the receiver application. Most of these connectors interact with the application's exposed APIs. However, in other cases, we might need to bridge a messaging channel into a file-based protocol, due to the lack of a connector or API on the application side.
Messaging Mapper
A Messaging Mapper is in charge of serialising an object on the sender application into a message that can be sent to the Messaging Channel, and vice versa on the receiver application. This message could be structured as a JSON object, an XML message, or even a proprietary format, such as an IDoc in SAP. A Messaging Mapper is required when the application data is not stored in a way that can be directly sent through a Messaging Channel; for instance, when the data is stored in a relational database or in a hierarchy of objects. This data must be translated into a message that can be transmitted through a channel. In some cases, particularly in cloud-native applications where data is stored in document-based databases, this mapping might not be required.
Implementation
A Messaging Mapper is usually implemented at the application API. The application API is aware of the Business Objects and can expose them as domain-specific, self-contained messages. Additionally, as mentioned in the section above, a Logic App connector (Application Adapter (*)) can abstract the complexities of the application API, so that a Messaging Mapper becomes very easy to implement in an Enterprise Application Integration solution.
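As an illustration only (the `Order` and `OrderLine` types below are hypothetical, not part of any Azure SDK), a minimal Messaging Mapper in Python could serialise a hierarchy of domain objects into a self-contained JSON message and back:

```python
import json
from dataclasses import dataclass

# Hypothetical domain objects: an Order stored as a hierarchy of objects.
@dataclass
class OrderLine:
    sku: str
    quantity: int

@dataclass
class Order:
    order_id: str
    customer_id: str
    lines: list  # list of OrderLine

def to_message(order: Order) -> str:
    """Serialise the hierarchical Order into a self-contained JSON message."""
    return json.dumps({
        "orderId": order.order_id,
        "customerId": order.customer_id,
        "lines": [{"sku": l.sku, "quantity": l.quantity} for l in order.lines],
    })

def from_message(payload: str) -> Order:
    """Deserialise a JSON message back into the domain object on the receiver side."""
    data = json.loads(payload)
    return Order(
        order_id=data["orderId"],
        customer_id=data["customerId"],
        lines=[OrderLine(sku=l["sku"], quantity=l["quantity"]) for l in data["lines"]],
    )
```

The same round trip would apply to XML or a proprietary format; only the serialisation functions change.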
Messaging Gateway
A Messaging Gateway encapsulates internal objects and exposes domain-specific methods or messages. It usually incorporates the Messaging Mapper described above, exposing it through different methods.
Implementation
Usually, this abstraction lives in the application's API, on either the sender or the receiver side. This layer exposes the application data and functionality in an abstracted way (e.g. the Salesforce APIs).
Transactional Client
A Transactional Client implements the boundaries of an atomic transaction so that messages are not lost in the process of being sent or received. The transaction scope can also go further, for instance:
- Database to Message: so that the object is not marked as processed or deleted on the sender application’s database until it has been successfully sent to the Messaging Channel.
- Message to Database: so that the message is not deleted from the Messaging Channel until it has been committed to the target Database in the receiver application.
- Message Groups: in this case, a transaction won't be completed until the whole set of messages is received; otherwise, it should not be committed. This approach could be required when implementing the Message Sequence pattern.
- Receive-Send Message Pairs: in this approach, a message being received cannot be completed until a correlated message is sent. This approach might be relevant when implementing the Request-Reply pattern on the receiver application.
- Send-Receive Message Pairs: in this scenario, a message being sent cannot be completed until a correlated message is received. This approach could be required when implementing the Request-Reply pattern on the sender application.
In some cases, the sender or receiver application does not support transactions. When transactions are not supported, a Process Manager can be implemented so that a compensating operation is performed when a failure occurs. You can find more information on this pattern in the Microsoft docs.
Implementation
As described above, there are different types of transactions that can be implemented on a message-based enterprise integration solution. Depending on the type, a different implementation would be required.
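To make the Message-to-Database scope concrete, here is a minimal Python sketch using an in-memory stand-in for a peek-lock channel (the `InMemoryChannel` class is invented for illustration; a real implementation would rely on, for example, Service Bus peek-lock semantics). The message is only completed after the database commit succeeds, and abandoned otherwise:

```python
class InMemoryChannel:
    """Toy channel with peek-lock semantics: receive() locks a message,
    complete() removes it, abandon() makes it visible again."""
    def __init__(self, messages):
        self._visible = list(messages)
        self._locked = {}

    def receive(self):
        if not self._visible:
            return None
        msg = self._visible.pop(0)
        self._locked[id(msg)] = msg
        return msg

    def complete(self, msg):
        self._locked.pop(id(msg))

    def abandon(self, msg):
        self._visible.append(self._locked.pop(id(msg)))

def consume_transactionally(channel, database: list):
    """Message-to-Database: only complete the message after the database commit."""
    msg = channel.receive()
    if msg is None:
        return
    try:
        database.append(msg)   # the 'commit' to the target store
        channel.complete(msg)  # remove from the channel only after success
    except Exception:
        channel.abandon(msg)   # make it available again for another attempt
        raise
```

The Database-to-Message scope is the mirror image: the record is only marked as processed on the source once the send to the channel succeeds.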
Polling Consumer (Message Pull)
The Polling Consumer pattern is applied when messages are pulled to be processed. This pattern can be implemented on both the sender and the receiver application sides.
- Polling Consumer on the Receiver Application. In this pattern, the receiver application controls when to consume a message from the Messaging Channel by checking if messages are available. This pattern fits well with the Push-Pull Channel (*).
- Polling Consumer on the Sender Application. This pattern is implemented when the messages cannot be pushed from the sender application to the Messaging Channel. However, an Application Adapter (*) can poll the sender application for new messages and then send them to the corresponding Messaging Channel.
Depending on the capabilities and flexibility of the application and the Channel Adapter or the Application Adapter (*), a Polling Consumer can be stateful or stateless, as described below.
- Stateless Polling Consumer: in this approach, the consumer cannot or does not need to keep a state for polling. For instance, a Polling Consumer that gets all available records from a table and, once a message has been put into the channel, deletes the record from that table or marks it as processed. Usually, this happens within a transaction. The advantage of this approach is that it does not require the Polling Consumer to keep a state. However, a disadvantage is that it requires the sender application to keep a state concerning the integration process, which is not always possible or ideal.
- Stateful Polling Consumer: in this pattern, the consumer keeps a state for polling. For instance, a polling watermark, which indicates the last time a poll was performed. This way, the next poll only gets messages which were created or updated after that watermark. The main advantage of this approach is that it does not require the sender application to keep a state concerning the integration process. However, it requires the Polling Consumer to keep a state; additionally, for highly available solutions, this state must be maintained in a distributed way.
Implementation
Some Logic App adapters support the Stateless Polling Consumer pattern. For instance, the Service Bus trigger is able to receive all available messages and complete them once they have been processed successfully. Likewise, some adapters support the Stateful Polling Consumer pattern. The SQL connector, for instance, can poll for inserts or updates on a SQL table without requiring an integration state on the source table. It does this by keeping a trigger state that contains either the RecordId for inserts or the RowVersion for updates. Additionally, the File connector can be triggered when a new file is dropped into a folder. Because the connector does not delete the files as part of the trigger step, the workflow must keep a trigger state (the LastModifiedDate of the most recently updated file), so that when a new poll occurs, only new or updated files trigger the workflow. The Stateful Polling Consumer pattern is described in detail in this post. Some Logic App adapters support both the stateful and stateless approaches. For instance, the SQL connector could also call a stored procedure that gets records and either deletes or updates them after successful processing. Likewise, when using the File connector, we can have a workflow which is triggered recurrently, with an action to get all available files in the folder; once the files have been processed successfully, we can delete them from the folder, so that the next instance won't process them again (destructive read).
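The watermark logic behind the Stateful Polling Consumer can be sketched in a few lines of Python (the record shape and the `poll_with_watermark` helper are hypothetical; in practice the state would be persisted in distributed storage between polls):

```python
from datetime import datetime, timezone

def poll_with_watermark(source_records, state: dict):
    """Return records changed after the stored watermark and advance it.
    `source_records` is a list of dicts with a 'modified' timestamp;
    `state` holds the watermark between polls (persisted externally in practice)."""
    watermark = state.get("watermark", datetime.min.replace(tzinfo=timezone.utc))
    new_records = [r for r in source_records if r["modified"] > watermark]
    if new_records:
        # Advance the watermark so the next poll skips what we just picked up.
        state["watermark"] = max(r["modified"] for r in new_records)
    return new_records
```

A first poll returns everything; a second poll with an unchanged source returns nothing, because the watermark has moved past all existing records.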
Event-Driven Consumer (Message Push)
In some scenarios, the source is able to push messages when an event occurs. Similar to the previous pattern, the Event-Driven Consumer can be implemented on both the sender and the receiver application sides.
- Event-Driven Consumer on the Receiver Application side: this pattern can be applied when the receiver application is open to accepting messages from the Messaging Channel directly, without having to poll.
- Event-Driven Consumer on the Sender Application side: This pattern can be applied when a Messaging Channel can receive messages directly.
Implementation
On the receiver application side, this pattern can be implemented using Event Grid. The receiver application must expose an HTTP API that can receive events.
On the sender application side, this pattern can be implemented using Event Grid or Service Bus. Both accept messages being pushed via an HTTP request or the SDK.
Polling Consumer vs Event-Driven Consumer
The table below shows some differences between the Polling Consumer and the Event-Driven Consumer patterns.

| | Polling Consumer | Event-Driven Consumer |
| --- | --- | --- |
| Trigger | The target (consumer) triggers the action by polling, usually based on a schedule. | The source triggers the action by pushing the message when an event occurs. |
| Durability | Messages can be stored on the source until the target is ready to consume them. | Messages can get lost if the intended receiver is not reachable and retries are exhausted. |
| Timeliness | Depending on the schedule, there could be a delay between when a message is emitted and when it is consumed. | Usually, messages can be processed in near real-time. |
| Ordering | Under certain conditions, polling could allow in-order processing. | In-order processing is usually not supported. |
| Cost Effectiveness | Polling frequently to minimise delays can result in a waste of resources. | Processing is only required when messages are available. |
| Throttling or Concurrency Control | The polling consumer can dictate the frequency and volume of messages to pull. | Throttling without the risk of losing messages can be harder to implement. |
Competing Consumers
When consuming messages from a queue, having one single processing worker may result in messages piling up on the channel, which can impact throughput. Having Competing Consumers processing messages in parallel from one channel reduces bottlenecks and increases throughput. However, having multiple consumers working in parallel brings the risk of processing the same message multiple times. To avoid this, two approaches can be used:
- Destructive Read: once a message is grabbed by one of the consumers, it is immediately removed from the channel. This approach is very simple but not very resilient: messages could be lost if a failure occurs.
- Non-destructive Read: to minimise the risk of losing messages in the case of failures, a message can be kept locked in the channel after being grabbed, so that no other consumer can take it. The message is then removed once it has been successfully processed; the lock can be renewed if processing takes longer than the lock duration, or the message can be unlocked if a transient failure occurs, so that it can be processed again by any of the Competing Consumers.
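The competition itself can be illustrated with a small Python sketch in which several worker threads pull from a shared in-memory queue; the queue hands each message to exactly one consumer, as a messaging channel would. This is a destructive-read toy model for illustration, not production code:

```python
import queue
import threading

def run_competing_consumers(messages, worker_count=3):
    """Several workers compete for messages from one shared channel;
    queue.Queue guarantees each message is handed to exactly one consumer."""
    channel = queue.Queue()
    for m in messages:
        channel.put(m)

    processed = []
    results_lock = threading.Lock()

    def worker():
        while True:
            try:
                msg = channel.get_nowait()  # destructive read
            except queue.Empty:
                return
            with results_lock:
                processed.append(msg)
            channel.task_done()

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed
```

Every message is processed exactly once, although the overall order across workers is not guaranteed.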
Implementation
Logic Apps implement the Competing Consumers pattern with Service Bus out-of-the-box. By default, the workflow will scale out to multiple instances to process multiple messages in parallel. Logic Apps allow you to implement the destructive read approach with the auto-complete Service Bus trigger, and the non-destructive read with the peek-lock Service Bus trigger.
Azure Functions also implements the Competing Consumers pattern with Service Bus out-of-the-box. By default, the function will scale out to multiple instances to process multiple messages in parallel.
Throttled Consumer (*)
In cloud-native integration solutions, having Competing Consumers is almost always the norm. On Azure Integration Services, Logic Apps or Azure Functions will promptly scale out based on the number of incoming messages. However, in some cases, we might want to have only a limited number of instances of our processing workers consuming the available messages. This requirement is common when we need to avoid hammering a receiver application with more requests than it can handle. The Throttled Consumer (*) pattern is not described in the original Enterprise Integration Patterns book; however, it can solve common challenges when implementing Enterprise Integration Solutions. You can find more information on this pattern in the Microsoft docs.
Implementation
Azure Logic Apps provides concurrency control so that you can limit the number of instances of a workflow running at the same time. Other messages will wait until a running instance completes its execution.
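Outside of Logic Apps, the same throttling idea can be sketched in Python with a semaphore that caps how many messages are being handled at once. The helper below is illustrative only; it also tracks the peak concurrency so the cap can be verified:

```python
import threading
import time

def throttled_consume(messages, handler, max_concurrency=2):
    """Process messages in parallel, but never more than `max_concurrency`
    at the same time. Returns the peak observed concurrency."""
    gate = threading.Semaphore(max_concurrency)
    peak, active = [0], [0]
    counter_lock = threading.Lock()

    def run(msg):
        with gate:  # blocks until a slot is free
            with counter_lock:
                active[0] += 1
                peak[0] = max(peak[0], active[0])
            handler(msg)
            with counter_lock:
                active[0] -= 1

    threads = [threading.Thread(target=run, args=(m,)) for m in messages]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return peak[0]
```

Setting `max_concurrency=1` degenerates into the Singleton Consumer (*) behaviour described in the next section.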
At the time of writing, Azure Functions supports controlling the degree of parallelism, but with some limitations, as described in the following posts: (1), (2), and (3).
Singleton Consumer (*)
A special case of the Throttled Consumer (*) pattern is required when the receiver application must, or can only, handle one message at a time. In this case, we need to implement a Singleton Consumer (*). The Singleton Consumer (*) might be required in different scenarios.
- In-order processing: if we know that messages in a queue are sequenced and we need to process them in that order, we need to make sure that only one worker is processing those messages at any given time. However, relying only on the first-in, first-out (FIFO) approach on queues has limitations for ordered processing. For instance, we need to make sure that partitions are aligned with the sequencing requirements, and we must be aware that the head-of-queue message can block the whole queue if its processing takes long, especially when retries are in place.
- Processing limitation: Some applications can only handle one message at a time. This might not be true for cloud applications, but can be the case in legacy applications.
Implementation
Azure Logic Apps allows you to use concurrency control to limit a workflow to a single running instance. Other messages will wait until the running instance completes its execution.
At the time of writing, Azure Functions supports controlling the degree of parallelism, but with some limitations, as described in the following posts: (1), (2), and (3).
Selective Consumer
The Selective Consumer pattern can be implemented when the receiver application does not want to consume all messages available on the Messaging Channel. Usually, properties in the Message Header (*) are used by the Selective Consumer to determine whether the message is relevant to that consumer.
Implementation
In Service Bus, Topics with Subscriptions can be used. The selection rules are implemented at the subscription using Message Header properties. In a Publish-Subscribe Channel pattern, subscriptions would be used to identify the messages intended for each receiver application.
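The rule evaluation that a subscription performs can be approximated with a small Python sketch that keeps only the messages whose header properties match all of the configured rule values (the message shape below is hypothetical, for illustration only):

```python
def selective_consumer(messages, rules):
    """Accept only messages whose header properties match every rule value,
    similar in spirit to a topic subscription filtering on user properties."""
    return [
        m for m in messages
        if all(m.get("headers", {}).get(key) == value
               for key, value in rules.items())
    ]
```

A consumer subscribed with `{"type": "invoice"}` would thus ignore every message whose headers carry a different type.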
Azure Event Grid also provides topics for Event Messages, which allow directing messages in a selective manner to different consumers.
Message Dispatcher
A Message Dispatcher can be required when the Publish-Subscribe Channel, Datatype Channel, or Selective Consumer patterns cannot be implemented or are not sufficient to distribute and route the messages to the intended receivers. The Message Dispatcher acts as a coordinator to implement advanced rules to send messages to the consumers.
Implementation
When Service Bus topic subscriptions or Event Grid topics are not sufficient to direct messages to the corresponding consumers, a Message Dispatcher can be implemented as a Logic App workflow. Additionally, when the dispatching business rules that identify the receiver must be defined in a way that is decoupled from the Logic App workflow, they can be defined using Liquid templates, as described in this post.
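When the rules are simple enough, such a dispatcher boils down to evaluating a list of predicates in order and returning the first matching destination. The Python sketch below is illustrative only (not a Logic Apps feature) and includes a default dead-letter destination for unmatched messages:

```python
def dispatch(message, routes, default=None):
    """Evaluate routing rules in order and return the first matching
    destination. `routes` is a list of (predicate, destination) pairs."""
    for predicate, destination in routes:
        if predicate(message):
            return destination
    return default  # e.g. a dead-letter channel for unroutable messages
```

Because the rules are ordinary functions, they can be loaded or generated from an external definition, mirroring the decoupled Liquid-template approach mentioned above.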
Durable Subscriber
The Durable Subscriber pattern ensures that messages in a Messaging Channel are not lost when the intended receiver is unavailable, by persisting the messages.
Implementation
Service Bus provides durability, so that receiver applications can pull messages when they are back online. The message TimeToLive property must be set accordingly.
Event Grid also provides durability for messages. It implements Guaranteed Delivery (push), but within a limit of 24 hours.
Idempotent Receiver
The Guaranteed Delivery pattern is meant to resubmit a message when it was not delivered successfully. However, in some scenarios the message is received successfully by the receiver, but the corresponding acknowledgement is not received or processed on the sender side. This leads to the submission of a redundant message. Given that possibility, particularly in distributed systems where retries may happen in many different layers, the ability to discard duplicate messages is crucial. The Idempotent Receiver pattern allows a receiver to safely receive the same message multiple times without undesired side effects.
This approach requires that each message has a unique identifier and that either the Messaging Channel or the receiver application keeps a log of the identifiers of previously processed messages, so that if a duplicate message arrives, it can be safely discarded.
Implementation
Service Bus supports deduplication based on the MessageId property. The duplicate detection history can be kept for up to seven days. When enabling deduplication, we need to consider scenarios where we purposely need to resubmit a message to the channel with the same identifier, which would require us to create a new identifier for the resubmission.
When not using Service Bus, we would need to implement our own log and validation.
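A minimal version of such a log-and-validate receiver could look like the following Python sketch. In a real solution the set of seen identifiers would be persisted and pruned, much like Service Bus keeps its duplicate detection history for a bounded period:

```python
class IdempotentReceiver:
    """Keeps a log of processed message identifiers; duplicates are discarded."""
    def __init__(self, handler):
        self._seen = set()   # in production: persisted, bounded storage
        self._handler = handler

    def receive(self, message_id, payload):
        """Process the payload once; return False for duplicates."""
        if message_id in self._seen:
            return False      # duplicate, safely discarded
        self._handler(payload)
        self._seen.add(message_id)
        return True
```

Note that the identifier is added to the log only after the handler succeeds, so a failed attempt can be retried.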
Stale Message (*)
In many scenarios, deduplication works well. However, there are other scenarios where deduplication is not enough; particularly when we are processing Document Messages that contain the full state of an entity, in-sequence processing is important, but the Messaging Channel cannot guarantee in-order delivery. Think of an Employee entity that has two updates, one after the other, where both messages carry a full snapshot of the Employee entity. If we only applied deduplication and the messages arrived out of sequence, the receiver application would process both messages in the wrong order, and the final state of the employee would be out-of-date.
When the full state of the entity is contained in the Document Message, and the full change history is not required on the receiver application, processing every single message for a particular entity is not always necessary. In these scenarios, where eventual consistency per entity is preferred over the processing of every single entity event, we can implement the Stale Message (*) pattern to identify whether a received message is newer than the state persisted in the receiver application. A Stale Message (*) is a message that is received but does not contain newer information for a particular entity. This can happen because the message was received out of sequence, or because the message had already been processed successfully and was received again due to a redundant retry. This pattern was not described in the Enterprise Integration Patterns book; however, it can be used to solve the challenges described above.
I suggest adding this pattern to the Messaging Endpoints group, due to its relation to the Idempotent Receiver pattern; however, it could well be classified as a Message Routing pattern, given that it is also related to the Resequencer and Message Validation (*) patterns.
Implementation
There is no built-in implementation of the Stale Message (*) pattern on the Azure Integration Services. Its implementation requires the receiver to keep the latest known state or version of each entity, and to compare every incoming message against it. One important consideration is that, ideally, the validation to check whether the message is stale and the persistence of the message should happen within a transaction. Otherwise, there could be a race condition where more than one message for the same entity is processed in parallel, which could cause undesired side effects.
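Putting this together, a minimal Python sketch of the stale-message check might compare an incoming snapshot's version against the persisted state and discard anything that is not newer (the entityId/version message shape is an assumption for illustration; in practice the check and the write should share a transaction):

```python
def apply_if_fresh(store: dict, message: dict) -> bool:
    """Persist the entity snapshot only if the message is newer than the
    state already stored for that entity; stale messages are discarded.
    Messages carry an 'entityId' and a monotonically increasing 'version'."""
    entity_id, version = message["entityId"], message["version"]
    current = store.get(entity_id)
    if current is not None and current["version"] >= version:
        return False  # stale: out-of-sequence arrival or redundant retry
    store[entity_id] = message
    return True
```

A last-modified timestamp from the source system could play the same role as the version number, as long as it increases with every change.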
Service Window (*)
When integrating with legacy applications, sometimes we know that they are not available or reachable at certain times of the day or week due to maintenance, or because they are overloaded by scheduled batch jobs. The Service Window (*) pattern allows us to configure our integration solution so that it does not try to reach the application during those periods. This helps us avoid false-positive alerts about unreachable endpoints, and avoid further overloading the busy application with requests coming from the Messaging Channel. This pattern is not described in the Enterprise Integration Patterns book; however, I believe it should be considered in some scenarios.
Implementation
Ideally, the Service Window (*) pattern would be implemented at the Application Adapter (*). However, this is not offered by the Logic Apps connectors. One way to implement a Service Window (*) is to trigger the Logic App in charge of connecting to the application using the Recurrence trigger, defining the days and hours the workflow is meant to run. However, this approach has limitations; for instance, we wouldn't be able to use the many other built-in triggers.
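The window check itself is simple enough to sketch in Python, including windows that cross midnight (the `within_service_window` helper is hypothetical; in a Logic App it would map to the trigger schedule or to a guard condition at the start of the workflow):

```python
from datetime import datetime, time

def within_service_window(now: datetime, start: time, end: time) -> bool:
    """True if `now` falls inside the window when the application is reachable.
    Handles windows that cross midnight (e.g. start=22:00, end=06:00)."""
    t = now.time()
    if start <= end:
        return start <= t < end
    # Window wraps past midnight: reachable late evening OR early morning.
    return t >= start or t < end
```

When the check fails, the message should stay on the durable channel (rather than be retried against the application) until the window opens again.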
Service Activator
In some scenarios, we want to expose the same service on an application not only via a synchronous request, but also via messaging in an asynchronous manner. The Service Activator pattern describes how to expose such synchronous services via a Messaging Channel.
Implementation
We can wrap HTTP endpoints using a Logic App that is triggered by messages coming from Service Bus. If the service is two-way, we can then return the response, with a Correlation Identifier, to the original requestor via another Service Bus queue or topic.
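The essence of the pattern can be sketched in Python with in-memory queues standing in for the Service Bus request and reply channels (illustrative only): the activator drains the request channel, invokes the synchronous service, and publishes the reply carrying the request's Correlation Identifier.

```python
import queue

def service_activator(requests: queue.Queue, replies: queue.Queue, service):
    """Drain the request channel, invoke the synchronous service, and publish
    the reply with the Correlation Identifier copied from the request."""
    while True:
        try:
            req = requests.get_nowait()
        except queue.Empty:
            return
        result = service(req["body"])  # the wrapped synchronous call
        replies.put({"correlationId": req["correlationId"], "body": result})
```

The requestor can then match each reply to its originating request by the correlation identifier, as in the Request-Reply pattern.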
Wrapping up
In this post, we have covered the Messaging Endpoints patterns and how to leverage the Azure Integration Services to implement them. As discussed previously, the technology keeps evolving, and some of these patterns are built-in features of the Azure service offerings, while others require a custom implementation on top of them. Understanding the Messaging Endpoint patterns allows us to consider different approaches to connect to applications when architecting integration solutions. I hope you have found this post useful. Stay tuned for the next instalment of the series, about the Message Routing patterns.
Happy integration!
Cross-posted on Paco’s Blog
Follow Paco on @pacodelacruz