Leo Tolstoy began Anna Karenina with the immortal observation that “happy families are all alike; every unhappy family is unhappy in its own way”. The same is true of tech firms.
Each of the hapless ones is unique, producing its own particular hotch-potch of back-end horrors, whereas the successful companies look spookily similar when you open the bonnet. Consider Amazon, eBay, Etsy, Facebook, Google and Spotify. They all run on a parallel set of tools and ideas. An engineer could move from Amazon to Spotify, say, and know their way around on day one.
One of the key attributes that these giants have in common is their use of microservices. The concept has become so important that any company failing to rebuild itself around microservices could be considered obsolescent.
A survey of CIOs and CTOs in March by US software firm Kong found that 86% considered microservices to be the future of applications. Moreover, 84% agreed that any firm that can’t ensure the reliability of the application programming interfaces (APIs) that link microservices-based apps is likely to lose market share to rivals that can.
Microservices seem to be crucial, then, but what do they actually do? In the simplest terms, they are the alternative to monolithic codebases. In the old days, software was built as a single block, which made it a nightmare to update. Teams would squabble over how and when to commit new code. Any error meant that debugging teams needed to scour the entire codebase to find the culprit.
In essence, microservices split an application into autonomous chunks that work independently. The separate parts sit in the cloud and communicate with each other via APIs. This set-up has numerous advantages, but one of the most important ones is that it enables one team to update a microservice in its own time without having to bother any other party. Errors are easier to pinpoint and fix.
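To make the idea concrete, here is a minimal sketch of two such chunks talking over an HTTP API. The “catalogue” and “basket” services, the SKUs and the prices are all hypothetical, invented for illustration; real systems would use a web framework and run the services on separate hosts, but the principle is the same: the basket service knows nothing about the catalogue’s internals and depends only on its API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Prices in pence for a hypothetical "catalogue" microservice that does
# exactly one job: look up the price of a product.
PRICES = {"sku-1": 999, "sku-2": 450}

class CatalogueHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "price_pence": PRICES.get(sku)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

def start_catalogue():
    """Run the catalogue service on an ephemeral localhost port."""
    server = HTTPServer(("127.0.0.1", 0), CatalogueHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

# A separate "basket" service: it interacts with the catalogue only
# through the HTTP API, never by reaching into its code or data.
def basket_total_pence(skus, port):
    total = 0
    for sku in skus:
        with urlopen(f"http://127.0.0.1:{port}/{sku}") as resp:
            total += json.load(resp)["price_pence"]
    return total

if __name__ == "__main__":
    server, port = start_catalogue()
    print(basket_total_pence(["sku-1", "sku-2"], port))  # 1449
    server.shutdown()
```

Because the two services share nothing but the API contract, the catalogue team could rewrite their service entirely, or move it to a different cloud, without the basket team ever noticing.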
Financial software provider WealthKernel is a strong advocate of microservices. Chris Wright, its CTO, explains: “In software terms, the decomposition of large systems into microservices means that they can be worked on and deployed independently. As the old saying goes: the easiest way to eat an elephant is one bite at a time. This allows for increased team autonomy, faster release cycles and improved isolation of faults.”
He reveals that WealthKernel’s production platform consists of 51 microservices, but adds that “this number grows every time we incorporate new functionality. We expect to have 70 by the end of the year.”
The usual practice is to grant each distinct function in an application its own microservice. Suresh Chintada, CTO of Subex, an Indian firm specialising in software for telcos, explains that “each microservice does one thing really well. This underlying concept allows us to set the bounded context and focus on delivering that single capability with high quality.”
Gone are the days when teams needed to coordinate their activities for every upgrade. Now that each activity has been rendered autonomous, teams can intervene whenever they see fit, he adds. “Development teams can work independently to build or enhance a product by focusing on the services they own. This allows them to operate in parallel, as it’s only the interfaces that they care about to interact with services being built by other teams.”
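The point about teams caring only about interfaces can be sketched in a few lines. The service names and the `PriceLookup` contract below are illustrative assumptions, not real APIs: one team publishes an interface, another codes against it, and the implementation behind the interface can change freely.

```python
from typing import Protocol

# Hypothetical interface contract published by the "catalogue" team.
# The "basket" team codes against this and nothing else.
class PriceLookup(Protocol):
    def price_pence(self, sku: str) -> int: ...

# One possible implementation; the catalogue team can swap it for a
# database-backed version tomorrow without breaking anyone.
class InMemoryCatalogue:
    def __init__(self, prices: dict[str, int]):
        self._prices = prices

    def price_pence(self, sku: str) -> int:
        return self._prices[sku]

# The basket team's code depends only on the interface, so both teams
# can develop and deploy in parallel.
def basket_total_pence(skus: list[str], catalogue: PriceLookup) -> int:
    return sum(catalogue.price_pence(s) for s in skus)

catalogue = InMemoryCatalogue({"sku-1": 999, "sku-2": 450})
print(basket_total_pence(["sku-1", "sku-2"], catalogue))  # 1449
```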
There’s also the matter of deployment. It is possible for a microservice to be hosted in a different environment – a public cloud platform such as Microsoft Azure, for instance – and work with related microservices hosted on Amazon Web Services or Google Cloud.
“Microservices offer high scalability,” Chintada says. “As each service is a separate component, we can scale it up or down without having to do the same thing with the entire application.”
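In practice, scaling one service means running extra copies of it and spreading requests across them, while the rest of the application is untouched. The sketch below assumes a simple round-robin client and made-up replica addresses; real deployments would typically delegate this to a load balancer or an orchestrator such as Kubernetes.

```python
import itertools

# Hypothetical round-robin client for one scaled-out service. Only this
# service gains replicas; no other part of the application changes.
class RoundRobin:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def next_endpoint(self):
        return next(self._cycle)

catalogue_replicas = [
    "http://10.0.0.1:8080",  # illustrative addresses, not real hosts
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]
rr = RoundRobin(catalogue_replicas)
# Successive requests cycle through the replicas and wrap around.
print([rr.next_endpoint() for _ in range(4)])
```

Scaling down is the mirror image: remove replicas from the list and the remaining ones absorb the traffic, again without touching any other service.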
Naturally, there are disadvantages to this highly modular architecture. In some cases, scores or even hundreds of microservices must communicate with each other, which increases exposure to cyber risk. Conversations about ensuring data security are never far away.
Governance can be trickier too: in situations where dozens of teams are operating independently with different agendas, it can be hard to coordinate large-scale changes. The costs of implementation and ongoing staffing also tend to be relatively high for microservices.
Such downsides encourage smaller software firms to stick with their traditional monolithic codebases. A compromise approach, known as service-oriented architecture (SOA), may be a better bet for some of these companies. SOA is a design principle under which software components are loosely coupled, offering some of the independence of microservices without splitting the application into fully separate services.
Justin Biddle, head of UK strategy and business development at ecommerce platform Shopware, believes that there may be merit in taking the SOA route.
“While microservices have their place, I’m against the idea that everyone should be using them,” he argues. “Especially for mid-market businesses, the best way to strike a balance between microservices and monoliths is generally to find the middle ground: the SOA architecture. This provides broader applications, with the ability to align best-of-breed services with a decent out-of-the-box solution stack. This way, developers have enough flexibility to extend a platform as required, without excessive complexity.”
Nonetheless, microservices look set to become the default choice in digital transformation. The human factor alone may decide it. After all, engineers want to apply the latest tools and principles – and, right now, that means microservices. Who knows when an eBay or an Etsy might come calling with a six-figure salary for an engineer versed in their ways?
When all the big guns declare themselves in favour of an idea, it’s hard to disagree.