Turning your application on its head

If nothing you do will do, then what do you do?
Hisamatsu Shin’ichi (1889-1980)

So far I’ve told you that the Way of the Microservice has to do with quality, that this aspect is more important than the obvious aspects of size and deployment, and that trying to be like Netflix is likely to trip you up if you don’t get the quality bit right. Even Gartner, that ultimate fount of business wisdom, warns its customers that “You are not Netflix.” Bummer. So why are we even going down this path? Apart, I mean, from the lofty goal of delivering high-quality software, which I hope you’re not against. Most likely it’s the nagging feeling that you want affordable quality, not quality at any cost.

An important thing you’re going to get from using a microservices architecture, and well worth the (apparent) trouble, is increased autonomy. If your “application” is built as a group of cooperating small processes that adhere to the principles of loose coupling and high cohesion, you gain a lot of flexibility. And since the components are small, making sure they are secure and fault-tolerant is also a lot easier. Luckily there is a fairly simple trick to help you achieve this increased independence: stop building applications that move data around. Instead, model the flow of data first, and let your microservices process or produce that data: implement a Data-Driven Architecture.

I was once asked to analyse a group of applications that was causing management increasing worry. A large group of related Web-based order applications had been built on a set of common components, but each new application caused problems for the others, because the common components contained the data definitions for all entities. As a consequence, adding a new product meant adding that new entity to the common components. Since nobody wanted different applications running different versions of the same library, this triggered requests to update the other applications as well, with all the added cost of a new round of acceptance tests.

There are all kinds of variations on this theme, all caused by a common set of data definitions used in the interfaces of (public) services. This is a common worry with SOAP services in particular, because of the strict checking generally applied to messages coming in and going out. Even worse, we have grown so accustomed to interface definitions being poured in concrete that we had to invent an “integration layer” to take care of the differences between what we want and what we’re stuck with. This integration layer, often implemented using an Enterprise Service Bus, has become little more than a new platform for implementing (hopefully small) applications, without really helping to create a clean separation between service providers and service consumers. If anything, it increases the dependencies an application has to deal with, especially when the team responsible for its operation is different from the application owners.

Getting rid of the service interfaces

The first step in increasing an application’s autonomy is recognising that there is no such thing as an independent integration layer, especially if it contains business logic. If the integration layer adds knowledge, e.g. by enriching messages or encoding values, or removes it by simplifying messages, it is in fact implementing business logic. Such logic should be seen as part of the service provider or consumer, depending on who takes ownership. If (the owner of) the service provider sees an opportunity to offer an interface that better matches what the new consumer wants, possibly because other consumers have similar requirements, then it should provide that interface. If the provider has no such interest, then the consumer needs a private adapter and is fully responsible for it. The same considerations apply to more complex logic, such as is typically implemented in BPEL or BPM processes.
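What a consumer-owned adapter might look like, as a minimal sketch: the translation logic lives with the consumer, not in a shared integration layer. The message shape and field names here are made up for illustration, not taken from any real service.

```python
# Sketch of a consumer-owned adapter: it maps the provider's message
# shape onto the model this consumer works with. The consumer owns this
# code entirely; the provider and the "integration layer" know nothing
# about it. Field names are hypothetical.
def adapt_order(provider_msg: dict) -> dict:
    """Translate the provider's order message into the consumer's model."""
    return {
        "order_id": provider_msg["id"],
        # The consumer prefers integer cents over a float amount.
        "total_cents": round(provider_msg["total"] * 100),
    }
```

If the same translation later turns out to be useful to other consumers, that is exactly the signal for the provider to take ownership and offer it as a proper interface.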

By now we have reduced the interfaces between applications to just their service APIs. This is where it gets interesting, because we get into the SOAP vs REST discussion. For me, “SOAP vs REST” is not a discussion about “XML vs JSON”, but rather about “services vs data”. A SOAP interface is by definition a collection of operations, or “things to do”, optionally specifying message formats for the data going in and coming out, if there is any. As a result, we have grown accustomed to thinking of these services as pieces of program, performing some service for us, hiding behind some endpoint on the network. What SOAP does very well is leave that last bit open. Your service may be hidden behind an HTTP endpoint, but could just as well require you to transfer messages through a queue. Either way, the message ends up with the service, carrying additional information that specifies which operation the message is meant for. SOAP endpoints therefore tie us to services, which is to say applications.
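The operation-oriented style can be sketched in a few lines, assuming a hypothetical order service; the point is that the caller must know the service’s list of “things to do”, and the operation names are part of the coupling:

```python
# Sketch of an operation-oriented (SOAP-style) endpoint: the incoming
# message names the operation, so every caller is coupled to this
# service's catalogue of "things to do". Operation names are invented
# for illustration.
def soap_endpoint(message: dict) -> dict:
    operations = {
        "SubmitOrder": lambda body: {"status": "accepted", "order": body},
        "CancelOrder": lambda body: {"status": "cancelled", "order": body},
    }
    op = message["operation"]
    if op not in operations:
        raise ValueError(f"unknown operation: {op}")
    return operations[op](message["body"])
```

Renaming or retiring an operation breaks every consumer, which is exactly the dependency on “what we thought up as things to do” discussed below.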

REST turns this on its head, again by definition, by stating that the endpoint refers to a resource on which you perform some operation. The starting point is therefore always the data you want to query or manipulate, and this works much better at reducing the dependencies on specific applications. We may want to read data (“GET” in the HTTP protocol), create or update it (“POST” and “PUT”), or even remove it (“DELETE”). For the outside world, not just the Internet but also other parts of our organisation, this should solve a lot of interfacing problems, because they are no longer dependent on what we thought up as things to do, but only on what data we have. That is a much more stable basis for communication. We can go even further and transform the data, and what happens to it, into a stream of events. Applications interested in our data can then elect to query for it, or plug into the stream for a continuous flow of relevant information.
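The resource-oriented style, by contrast, fixes the set of operations once, in the protocol. A minimal sketch, with an in-memory “orders” resource standing in for a real store:

```python
# Sketch of a resource-oriented (REST-style) interface: the endpoint
# names the data ("orders"), and the HTTP verb names the operation.
# The resource and its shape are illustrative.
orders: dict = {}   # in-memory resource: order id -> order data
_next_id = [1]      # counter for newly created orders

def handle(method: str, order_id=None, body=None):
    """Dispatch on the HTTP verb, not on an application-specific operation."""
    if method == "GET":
        return orders.get(order_id)             # read
    if method == "POST":
        oid = _next_id[0]; _next_id[0] += 1
        orders[oid] = body                      # create
        return oid
    if method == "PUT":
        orders[order_id] = body                 # update (replace)
        return order_id
    if method == "DELETE":
        return orders.pop(order_id, None)       # remove
    raise ValueError(f"unsupported method: {method}")
```

A consumer only needs to know that order data exists at some endpoint; the verbs are the same for every resource in the organisation.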

Instead of continuously thinking about what we are doing, and providing a service whenever someone else might want to do the same thing (the Service-Oriented Architecture), we look at the result, and consume and produce streams of data. This immensely reduces the complexity and the frozen ties between applications, because we never have to hunt for an application to provide data; we just plug into the stream.
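“Plugging into the stream” can be sketched with a toy in-process event stream; in practice this role is played by messaging infrastructure, and the event names here are invented:

```python
# Sketch of stream-based decoupling: producers publish events, and any
# interested consumer subscribes without knowing who produced the data.
class EventStream:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        """Register a consumer; it will receive every future event."""
        self._subscribers.append(handler)

    def publish(self, event):
        """Hand the event to every subscribed consumer."""
        for handler in self._subscribers:
            handler(event)

stream = EventStream()
seen = []
stream.subscribe(seen.append)                        # a consumer plugs in
stream.publish({"type": "OrderCreated", "id": 1})    # a producer emits
```

Note that the producer never addresses a consumer, and the consumer never names a producer: the only shared dependency is the shape of the events flowing through the stream.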
