In the previous post of this series, I covered how application data is serialised and packaged into messages so it can be transmitted between applications. In this post, I’ll describe the Messaging Channels patterns, which address the challenges of transmitting messages from a sender application to the intended receiver applications, and how these patterns can be implemented on the Azure Integration Services. Patterns marked with an asterisk (*) are not described in the original book, but I suggest considering them as well.
Messaging Channels can be defined statically or dynamically, and the right type of channel depends on factors such as the number of receivers and the delivery guarantees required. The different channel types are described in the following sections.
This is a part of a series describing how to implement the Enterprise Integration Patterns using the Azure Integration Services:
- Introduction
- Message Construction
- Messaging Channels (this)
- Messaging Endpoints
- Message Routing
- Message Transformation
- Platform Management
The remaining posts will be published in the following weeks/months.
The patterns covered in this article are listed below.
- Point-to-Point Channel
- Publish-Subscribe Channel
- Push-Pull Channel (*)
- Push-Push Channel (*)
- Datatype Channel
- Invalid Message Channel
- Dead Letter Channel
- Guaranteed Delivery
- Circuit Breaker (*)
- Channel Adapter
- Messaging Bridge
- Message Bus
Point-to-Point Channel
The Point-to-Point Channel is utilised when the sender knows of the single receiver that is meant to receive and process the message. Command Messages and Query Messages, which are tightly coupled to the receiver system, typically require a Point-to-Point Channel.
Implementation
A Point-to-Point Channel can be implemented using Azure Service Bus Queues. A queue is meant to have only one consumer, so it’s common that the sender application that drops the message into the queue is aware of the receiver.
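The semantics can be illustrated with a minimal in-memory sketch (plain Python, not the actual Service Bus SDK): every message dropped into the channel is consumed by exactly one receiver.

```python
from queue import Queue

# A Point-to-Point Channel: one logical queue, one consumer per message.
channel = Queue()

# The sender knows which receiver will process these Command Messages.
channel.put({"type": "CreateOrder", "orderId": 1})
channel.put({"type": "CreateOrder", "orderId": 2})

processed = []
while not channel.empty():
    message = channel.get()  # each message is consumed exactly once
    processed.append(message["orderId"])

print(processed)  # [1, 2]
```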
Publish-Subscribe Channel
While the Point-to-Point Channel can be used to send messages to only one consumer, the Publish-Subscribe Channel allows sending one message to all interested receivers (subscribers). In this pattern, the sender does not need to be aware of who is subscribing to the messages in the channel, or whether there are any active subscribers at all; the Publish-Subscribe Channel abstracts that from the sender. However, the channel must be aware of the subscribers, so that it can create a copy of every message for each of the active subscribers.
Implementation
Azure Service Bus Topics and Subscriptions can be used as a Publish-Subscribe Channel to transmit Document Messages. The sender drops the message into the topic, and a subscription is created for each subscriber.
For Event Messages, Azure Event Grid topics can be used. Similarly, a subscription is created for each interested receiver.
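A small sketch of the copy-per-subscriber behaviour described above (plain Python, mirroring what a topic with subscriptions does, not the Service Bus or Event Grid APIs):

```python
from queue import Queue

# Publish-Subscribe Channel: the channel (not the sender) knows the subscribers
# and delivers a copy of every message to each active subscription.
class PubSubChannel:
    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name):
        self.subscriptions[name] = Queue()
        return self.subscriptions[name]

    def publish(self, message):
        for queue in self.subscriptions.values():
            queue.put(dict(message))  # one copy per subscriber

channel = PubSubChannel()
billing = channel.subscribe("billing")    # hypothetical subscriber names
shipping = channel.subscribe("shipping")

channel.publish({"event": "OrderPlaced", "orderId": 42})

print(billing.get()["orderId"], shipping.get()["orderId"])  # 42 42
```

Note that the sender only calls `publish`; it never references the subscribers, which is exactly the decoupling the pattern provides.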
Push-Pull Channel (*)
A Push-Pull Channel (*) is a messaging channel which cannot push messages to the intended receiver applications; it is up to the receiver application to pull the messages from the channel. This type of channel fits well with the Polling Consumer pattern. This pattern is not described in the original book, but I believe it is important to be aware of this type of channel when designing message-based enterprise integration solutions.
Implementation
Azure Service Bus Queues and Topics require the receiver applications to pull messages from them.
Push-Push Channel (*)
A Push-Push Channel (*) is a messaging channel which is able to push messages to the intended receiver applications. Thus, the receiver application must be available and able to receive the messages when they are pushed. This Messaging Channel pattern fits well with the Event-Driven Consumer, and Push-Push Channels are ideal for Event Messages. This pattern is not described in the original book, but I believe it is important to be aware of it when designing message-based enterprise integration solutions.
Implementation
Azure Event Grid is a Push-Push Channel ideal for Event Messages. Given that Event Grid pushes the messages to the intended receiver application, the application must be available. To handle transient failures and improve resiliency, Event Grid offers configurable retries and dead-lettering.
Datatype Channel
The Datatype Channel pattern suggests that, when designing and implementing our channels, we should aim to have one channel per Message Type, so that the receiver knows how to process a message without applying complex rules.
Implementation
On Service Bus, it is a good practice to use separate queues or topics for different message types (datatypes).
On Event Grid, it is recommended to have different topics for different event types (data types).
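The routing decision the sender makes can be sketched as follows (plain Python; the channel names stand in for separate, hypothetical Service Bus queues):

```python
from queue import Queue

# One channel per Message Type, so each receiver only ever sees one datatype.
channels = {
    "Order": Queue(),    # hypothetical queue per datatype
    "Invoice": Queue(),
}

def send(message):
    # The sender selects the channel by message type at send time,
    # so receivers need no type-dispatching rules of their own.
    channels[message["type"]].put(message)

send({"type": "Order", "id": 1})
send({"type": "Invoice", "id": 2})

print(channels["Order"].qsize(), channels["Invoice"].qsize())  # 1 1
```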
Invalid Message Channel
When implementing messaging systems, we shouldn’t expect messages to be valid all the time. There can be many different reasons why a message in a Messaging Channel is not valid: missing required values, invalid values, an invalid message structure, or an unexpected message type. In all these cases, there is no point in retrying the processing of the message, as it will keep failing validation. The Invalid Message Channel is a channel to which a messaging channel or the receiver application can deliver those invalid messages gracefully, without losing them.
Implementation
In Service Bus, there is a Dead-Letter sub-queue under queues and subscriptions, but no separate Invalid Message sub-queue. However, we can make use of the DeadLetterReason and DeadLetterErrorDescription fields to specify why the message was sent to the sub-queue. Given that Service Bus does not provide Message Validation (*), validation must occur on the receiver application, and that application is responsible for moving the message to the corresponding sub-queue while specifying the reason.
When Event Grid tries to deliver an event and receives an HTTP 400 (Bad Request) or HTTP 413 (Request Entity Too Large) response code from the receiver, it immediately sends the event to the dead-letter endpoint. The Message Validation (*) must occur on the receiver application, and that application must return the corresponding HTTP status code to Event Grid. At the time of writing, Event Grid only supports writing invalid messages to a blob, which cannot be considered a proper messaging channel, so it is up to the solution or the administrator to pull those events to resolve deliveries.
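The receiver-side validation described above can be sketched like this (plain Python; the required fields are an assumed contract, and the reason/description fields simply mirror Service Bus’ DeadLetterReason and DeadLetterErrorDescription):

```python
from queue import Queue

invalid_channel = Queue()

REQUIRED_FIELDS = ("orderId", "customerId")  # assumed message contract

def process(message):
    # Validation must happen on the receiver; the channel itself won't do it.
    missing = [field for field in REQUIRED_FIELDS if field not in message]
    if missing:
        # Mirror Service Bus' DeadLetterReason / DeadLetterErrorDescription.
        invalid_channel.put({
            "message": message,
            "reason": "ValidationFailed",
            "description": f"Missing required fields: {missing}",
        })
        return False
    return True

print(process({"orderId": 1, "customerId": "C1"}))  # True
print(process({"orderId": 2}))                      # False
print(invalid_channel.qsize())                      # 1
```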
Dead Letter Channel
In the previous pattern, we described how to handle messages that cannot be processed because the message is invalid. But what if the message is valid, yet cannot be delivered to the intended receiver at all? This could happen for various reasons: the receiver being unavailable, a communication failure on the way to the receiver, the receiver not being identifiable, or the message having expired. When the channel cannot deliver the message, there should be a graceful way to remove it from the queue or topic subscription and leave it somewhere else for a different type of processing or troubleshooting. This is the purpose of a Dead Letter Channel.
Implementation
As mentioned above, Azure Service Bus provides a dead-letter sub-queue under queues and subscriptions, so that messages that cannot be delivered can be moved there. Messages are also moved to the dead-letter sub-queue when their TimeToLive has expired or when all delivery attempts (MaxDeliveryCount) have been exhausted.
Event Grid supports moving events that can’t be delivered to the receiver application to a dead-letter location. At the time of writing, Event Grid only supports writing these events to a blob, which cannot be considered a messaging channel, so it is up to the solution or the administrator to pull those events to resolve deliveries.
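The two dead-lettering conditions on Service Bus (expired TimeToLive and exhausted MaxDeliveryCount) can be sketched as follows (plain Python, not the Service Bus SDK; the property names mirror the Service Bus concepts):

```python
import time
from queue import Queue

dead_letter = Queue()
MAX_DELIVERY_COUNT = 3  # mirrors Service Bus' MaxDeliveryCount

def attempt_delivery(message, deliver):
    # Dead-letter the message when it has expired or retries are exhausted.
    if time.time() >= message["expires_at"]:
        dead_letter.put({**message, "reason": "TTLExpiredException"})
        return False
    message["delivery_count"] += 1
    if deliver(message):
        return True
    if message["delivery_count"] >= MAX_DELIVERY_COUNT:
        dead_letter.put({**message, "reason": "MaxDeliveryCountExceeded"})
    return False

msg = {"body": "order-1", "delivery_count": 0, "expires_at": time.time() + 60}
always_fail = lambda m: False  # simulate an unreachable receiver

for _ in range(MAX_DELIVERY_COUNT):
    attempt_delivery(msg, always_fail)

print(dead_letter.qsize())  # 1
```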
Guaranteed Delivery
Asynchronous messaging decouples the sender from the receivers in time. When the receiver is not available or reachable through the network, the messaging system can store the message and retry delivery until the receiver becomes reachable or available. To do so, the messaging channel must not only hold the message in memory but also persist it on disk, so the message can survive a crash of the channel. This behaviour is described by the Guaranteed Delivery pattern.
Implementation
Event Grid supports retries to deliver events to the receiver applications. It implements an exponential back-off policy to avoid overwhelming unhealthy endpoints and to minimise traffic when an endpoint is down for a long period. However, at the time of writing, it only retries for up to 24 hours.
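The shape of such a policy can be sketched generically (the base delay and multiplier below are illustrative; Event Grid’s actual schedule is internal to the service, only the 24-hour cap is taken from the text above):

```python
# A generic exponential back-off schedule with a 24-hour overall cap.
def retry_schedule(base_seconds=10, factor=2, max_total_seconds=24 * 3600):
    elapsed, delay, schedule = 0, base_seconds, []
    while elapsed + delay <= max_total_seconds:
        elapsed += delay
        schedule.append(elapsed)  # seconds since the first failure
        delay *= factor           # back off exponentially
    return schedule

schedule = retry_schedule()
print(len(schedule), schedule[:4])  # 13 [10, 30, 70, 150]
```

Doubling the delay means the endpoint sees only a handful of probes even over a full day, which is what keeps an unhealthy receiver from being overwhelmed.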
Logic Apps and Service Bus can be used together to implement more advanced guaranteed-delivery options. A Logic App workflow can be in charge of delivering the message, while Service Bus persists the message until it is successfully delivered. On Logic Apps, a Peek-Lock can be implemented on Service Bus messages. This means that a message is kept in the queue but locked until one of the following occurs: the workflow completes the message, the workflow abandons the message, or the lock expires.
Currently, the LockTime can be set to up to five minutes. That means the workflow has up to 5 minutes to either complete the message or, when more time is required, renew the lock. Once the message is abandoned or the lock expires, the message becomes available again on the queue or topic subscription, and any other instance can process it. Every time a message lock is released, the DeliveryCount is increased by one, and the message remains valid until it reaches the MaxDeliveryCount or its ExpiresAtUtc. You can additionally implement a retry policy on the Logic App action that sends the message to the intended receiver. That retry policy has to consider the LockTime. For instance, if a send action times out after 2 minutes, you could only retry twice with an interval of less than a minute, or implement a RenewLock between retries.
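The arithmetic behind sizing a retry policy against the LockTime can be made explicit with a small worst-case check (plain Python; the 5-minute lock and 2-minute timeout follow the example above):

```python
# Check whether a retry policy fits within the Service Bus lock duration.
def fits_within_lock(lock_seconds, attempt_timeout, attempts, interval):
    # Worst case: every attempt runs to its timeout, with a wait in between.
    worst_case = attempts * attempt_timeout + (attempts - 1) * interval
    return worst_case <= lock_seconds

# With a 5-minute lock and a 2-minute send timeout:
print(fits_within_lock(300, 120, 2, 59))  # True: two tries just fit
print(fits_within_lock(300, 120, 3, 59))  # False: a third try needs a RenewLock
```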
A similar implementation to the one described above could be done using Azure Functions. An advantage that Azure Functions offers when using the Service Bus trigger binding is that the message lock can be renewed while the Function is executing and the message is not yet completed.
Circuit Breaker (*)
The Circuit Breaker pattern is not described in the Enterprise Integration Patterns book, but was made popular by Michael Nygard in his book Release It! This pattern prevents overwhelming downstream systems with messages that are likely to fail due to the unavailability or unhealthy state of the downstream system. By implementing this pattern, the solution should be able to detect failures on the receiver application and stop sending messages when they are unlikely to be processed successfully. At the same time, the solution should detect when the fault has been resolved, so that it can resume sending the queued messages.
Implementation
This pattern is not a built-in capability of any of the Azure Integration Services. However, Jeff Hollan, Program Manager of the Azure Functions team, has described a way to implement this pattern using Logic Apps and Azure Functions. The solution described in his post relies on Redis cache as a shared state across all instances (Competing Consumers) processing the messages, and a Logic App as a Process Manager that can manage the state of the circuit (open or closed). The solution described uses Event Hubs, but it can easily be implemented with Azure Service Bus.
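The core state machine of the pattern can be sketched in a few lines (plain Python; this illustrates the pattern itself, not the Logic Apps/Functions/Redis solution referenced above, and the thresholds are arbitrary examples):

```python
import time

# A minimal Circuit Breaker: closed -> open after repeated failures,
# half-open (one probe allowed) after a cool-down period.
class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow_request(self):
        if self.opened_at is None:
            return True
        # Half-open: after the cool-down, let a probe request through.
        return time.time() - self.opened_at >= self.reset_timeout

    def record_success(self):
        self.failures = 0
        self.opened_at = None   # close the circuit again

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.time()  # open the circuit

breaker = CircuitBreaker(failure_threshold=3)
for _ in range(3):
    breaker.record_failure()

print(breaker.allow_request())  # False: the circuit is open
breaker.record_success()
print(breaker.allow_request())  # True: the circuit is closed again
```

In the messaging context, `allow_request` gates whether a consumer dequeues and forwards the next message, which is what stops traffic to an unhealthy downstream system.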
Channel Adapter
A Channel Adapter is a simplified interface to the Messaging Channel that sender or receiver applications can use to connect to the messaging system.
Implementation
Channel Adapters are meant to connect a particular application with the messaging system. When a Logic Apps connector is available for the required application, the best way to implement this pattern on the Azure Integration Services is a Logic App that connects the application to Service Bus or Event Grid using that connector.
Service Bus provides client libraries for different programming languages, such as .NET, Java, JavaScript, Go, and Python. These can be used when we can modify the applications participating in our solution using any of these languages. Service Bus also provides integration with D365.
Event Grid also provides client libraries for languages such as .NET, Go, Java, Node, Python, and Ruby.
Messaging Bridge
In some scenarios, enterprises use more than one messaging system in the same solution. A Messaging Bridge can connect different message channels reliably to move messages from one channel to another.
Implementation
Some companies use Azure Service Bus as a Messaging Channel on Azure and IBM MQ on-premises. Logic Apps provides connectors for both IBM MQ and Service Bus. You can use a Logic App with these two connectors to create a Messaging Bridge that connects both messaging channels. Other queuing Messaging Channels, such as Apache ActiveMQ, RabbitMQ, or Apache Qpid, use a standard protocol called AMQP. Unfortunately, at the time of writing, Logic Apps does not offer an AMQP connector which could bridge from one AMQP broker to another, or from Service Bus to an AMQP broker. If you would like this connector to be available, you can upvote it here.
For Event Messages, you can implement events with the CloudEvents schema and forward those events, with their canonical metadata schema, to other eventing Messaging Channels that support this schema and can receive event messages via an HTTP endpoint.
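As a sketch, this is what wrapping an event in the CloudEvents 1.0 envelope looks like (the `source` and `type` values below are hypothetical examples):

```python
import json
import uuid
from datetime import datetime, timezone

# Wrap event data in a CloudEvents 1.0 envelope so it can be forwarded to any
# eventing channel that accepts this schema over HTTP.
def to_cloud_event(event_type, source, data):
    return {
        "specversion": "1.0",                 # required CloudEvents attributes
        "id": str(uuid.uuid4()),
        "source": source,
        "type": event_type,
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

event = to_cloud_event(
    "com.example.order.placed",  # hypothetical event type
    "/orders/service",           # hypothetical source
    {"orderId": 42},
)
print(json.dumps(event, indent=2))
```

Because the metadata lives in the envelope rather than in a broker-specific format, any bridge only needs to POST this JSON to the next channel’s HTTP endpoint.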
Message Bus
The Message Bus is a meta-pattern that combines other messaging patterns, such as Publish-Subscribe Channels, Channel Adapters, Service Activators, Message Routers, and Canonical Data Models, and acts as a middleware connecting multiple sender and receiver applications.
Implementation
To implement this meta-pattern on Azure, we need a combination of the Azure Integration Services, including Logic Apps, Azure Service Bus, Event Grid, and Azure Functions.
Wrapping Up
In this post, I have covered how to implement the Messaging Channels patterns using the Azure Integration Services. Some of these patterns are out-of-the-box features of the platform, while in other cases we need to build our own implementation. As mentioned previously, being aware of these patterns and the challenges they address allows us to be much better prepared when architecting message-based enterprise integration solutions.
Happy integration!
Cross-posted on Paco’s Blog
Follow Paco on @pacodelacruz