Microservices in scope.
As mentioned in the first part of this series, where I covered DevSecOps, the second topic I would like to talk about is Microservices and their benefits.
2- Microservices: one of the most popular architectures and a hot topic in the IT industry. If you are a developer, architect, or product owner, you can hardly avoid being exposed to this term.
In simple words, these are ‘independent services’ which can be ‘released separately’, and are modeled around a bounded context to deliver one (or more) key capabilities. Microservices (MS) is often also referred to as MS architecture. There are many benefits, yet some challenges, to implementing the Microservices architecture in a way that suits your own context. MS embraces ‘abstraction’, which means one can hide the complex details and internal information while exposing only what is required to make that MS function, or to serve the information that other MS/consumers need. Once such behaviour is implemented, it creates a separation between services/components and allows them to change independently of each other.
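The abstraction idea above can be sketched in a few lines. This is a hypothetical, in-process example (the service and field names are made up): the service keeps its storage model private and exposes only the one capability consumers actually need.

```python
from dataclasses import dataclass


@dataclass
class _StockRecord:        # internal detail, never leaves the service
    sku: str
    on_hand: int
    reserved: int


class InventoryService:
    """A made-up 'inventory' microservice hiding its internal model."""

    def __init__(self):
        self._records = {}  # private storage; consumers cannot reach it

    def receive(self, sku: str, qty: int) -> None:
        rec = self._records.setdefault(sku, _StockRecord(sku, 0, 0))
        rec.on_hand += qty

    def available(self, sku: str) -> int:
        """The only detail consumers see: units available to promise."""
        rec = self._records.get(sku)
        return 0 if rec is None else rec.on_hand - rec.reserved


svc = InventoryService()
svc.receive("SKU-1", 10)
```

Consumers only ever see `available()`; the `_StockRecord` shape can change freely without breaking anyone, which is exactly the independence the article describes.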
There are a few key points that one has to remember:
- Bounded Context: this term originates from DDD (Domain-Driven Design); within a bounded context there are no conflicting elements or models. As the Microsoft documentation states: to identify bounded contexts, you can use a DDD pattern called the Context Mapping pattern. With Context Mapping, you identify the various contexts in the application and their boundaries. I think you need to understand and define a business-domain-specific ubiquitous language to depict the keywords and to find common names for elements and processes, which your development teams will then consume. Having such a language can remove barriers in communication, documentation, knowledge sharing, onboarding new people, and understanding the relationship between application feature code and a business offering.
Earlier, I worked at an energy company for years, where we had 5 different departments with 3 different names for the same types of equipment (a breaker, for example), which often created a whirl of confusion. We therefore decided to use the industry-standard Common Information Model (CIM) across the business and tech organizations. It literally took some time, but eventually everyone was able to understand each other and be productive. :-)
But why am I telling you this?
Because it is one of the very first steps that helps you get to a Bounded Context, especially in a large organization. Once you have this ubiquitous language defined, it allows people to define those non-conflicting boundaries with its help, which can then be built as microservice(s). Remember, it is crucial to focus on the granularity of each bounded context: the bigger it is, the more monolithic it becomes, while overly granular contexts may result in a high number of microservices, which then leads you to other complexities.
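To make the bounded-context idea concrete, here is a hypothetical sketch (the names and fields are invented, loosely echoing the breaker story above): the same physical asset is modeled independently in two contexts, each keeping only the attributes it cares about, with the shared ubiquitous-language term and asset ID as the only bridge between them.

```python
from dataclasses import dataclass


@dataclass
class Breaker:               # "Maintenance" bounded context
    asset_id: str
    last_inspection: str     # this context cares about service history


@dataclass
class GridBreaker:           # "Grid operations" bounded context
    asset_id: str
    is_open: bool            # this context cares only about switching state


# The asset_id is the correlation point across contexts; nothing else overlaps,
# so each model (and the service owning it) can evolve independently.
maintenance_view = Breaker("BRK-042", last_inspection="2023-06-01")
grid_view = GridBreaker("BRK-042", is_open=False)
```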
- Independent, Flexible & Testable: think about loosely coupled services with specific boundaries, well-defined and stable contracts, and independent databases (a shared database could be a barrier in many ways; of course, it depends on your design and your choices). If one MS, say service1, needs data stored and managed by another MS, service2, then service1 should ask service2 for the data it needs instead of directly accessing service2's database. Remember, encapsulation, a good object-oriented programming practice, applies when defining the independence of an MS. That also allows developers to be individually responsible for a service and to test it without worrying about dependent services (of course, some changes, such as a change in an MS's contract, may require thorough integration tests).
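The service1/service2 rule above can be sketched as follows. This is a minimal, hypothetical example (`OrderService`, `CustomerService`, and their methods are made up): the API is modeled as a plain interface so the sketch runs without a network, whereas in practice it would be an HTTP or gRPC client behind the same contract.

```python
from typing import Protocol


class CustomerAPI(Protocol):
    """The contract service1 depends on; not service2's database schema."""
    def get_email(self, customer_id: str) -> str: ...


class CustomerService:
    """service2: owns its data store and exposes it only via the API."""

    def __init__(self):
        self._db = {"c1": "ada@example.com"}  # private to this service

    def get_email(self, customer_id: str) -> str:
        return self._db[customer_id]


class OrderService:
    """service1: asks service2 for data instead of touching its database."""

    def __init__(self, customers: CustomerAPI):
        self._customers = customers

    def confirmation_recipient(self, customer_id: str) -> str:
        return self._customers.get_email(customer_id)


orders = OrderService(CustomerService())
```

Because `OrderService` only knows the contract, `CustomerService` can change its storage freely, and in a test you can hand `OrderService` a stub, which is exactly the independent testability the bullet describes.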
- Size: this is often one of the key discussion points among developers and business product owners: what are you going to encapsulate into one MS, and how do you measure its size? I think one can be inspired by the fact that development teams differ in velocity, capacity, technology diversity, etc. Therefore, the best way is to “ask the team” for the best definition of size, as it varies with the complexity of the system, manageability aspects, and how the product is going to evolve. Remember, how easily you can remove, rewrite, or replace a microservice can give you the definition of the size.
- Scalability: I still remember one project where we had to request additional servers on a continuous basis as the monolithic application grew with every new feature, and yet we were failing to meet the application's and users' demands on performance and availability. Another challenge was that we had to scale everything together across different distributed teams: application, data, and infrastructure. Moreover, we managed our own infrastructure and lacked cloud capabilities such as elasticity. With MS architecture, one can identify load behaviour and demand peaks, and accordingly design the infrastructure to scale in or out automatically. Containerization is another great practice that has made it easy to provision and de-provision computing resources (though this can also be achieved without containerizing your application).
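The scale-in/scale-out decision described above can be reduced to a small pure function. This is illustrative only (the thresholds are assumptions, not a real platform's defaults), but real autoscalers, for instance a Kubernetes HorizontalPodAutoscaler, apply essentially the same idea: desired replicas grow with observed load relative to a target utilization, clamped between a minimum and a maximum.

```python
import math


def desired_replicas(current: int, cpu_utilization: float,
                     target_utilization: float = 0.7,
                     min_r: int = 1, max_r: int = 10) -> int:
    """Scale out when utilization exceeds the target, scale in when it drops.

    cpu_utilization is the observed average (1.0 == 100% of requested CPU);
    the result is clamped to [min_r, max_r].
    """
    raw = math.ceil(current * cpu_utilization / target_utilization)
    return max(min_r, min(max_r, raw))
```

For example, 3 replicas at 90% utilization against a 70% target would scale out, while 4 replicas at 10% would scale back in toward the minimum.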
- Freedom to choose diverse technology: I think this is one of the best benefits one can harvest, as it defies the concept of ‘one size fits all’. As a developer, I have worked on many integration projects where I wanted to stick to my own technology, database, or programming language preferences, in my case C#, while some other product teams were Java-oriented. In the past, this often required middleware: message brokers, an ESB, SOAP- or REST-oriented services, etc. With MS architecture, I can choose a graph database for optimization or heavy-compute tasks, and a document-oriented data store for the results. Think of it this way: if you have a monolithic design and would like to try a new technology, how big a pain would that be, compared to simply trying it out in a smaller service? Here MS embraces fail-fast and adaptability to change.
- Business Domain-Based Modelling: one should model MS around business domains, to depict the real world; this gives you the flexibility to add or remove features from the software product and to reflect the nature of your actual business model. In general, tech organizations are structured in a line-management fashion, with discrete skills sitting in various teams, and services created by those teams to serve the product/project. As a result, any change that spans all the tiers has to go into at least three different teams' backlogs, and the game of prioritization and urgency begins: whichever business stakeholder shouts the loudest gets his/her change delivered.
But in the Microservices world we can solve this problem and tackle changes more frequently. This means we need to think about how to break the big monolithic practices and products built around 3-tier architecture into smaller Microservices bounded contexts. You also have to change the IT organizational structure to build and rely upon long-standing feature teams with diverse skillsets: instead of having three teams based on different skillsets (front-end developers, back-end developers, and data experts), each feature team owns all three tiers of its microservices. I work with multiple feature teams owning all three tiers of the microservices they develop and maintain, and I can vouch that change occurs very fast: we deliver changes, new features, and new tech stacks into our products more frequently than before. Of course, DevSecOps practices play a key role in achieving this, but that's another topic.
Do we only have bright sides in Microservices architecture?
Of course not!!
So, what are the pain points? To answer that I would start with the phrase “boon or bane”, and it depends upon how you implement MS architecture.
Small Tech. Organizations & Start-ups:
On one hand, Microservices bring various benefits, but (as some renowned architects and developers point out) they might not be a great fit if you are a startup that is not yet aware of its full service offerings for end customers. That makes it difficult to define a bounded context for your MS, and the product(s) you are working on might undergo changes that cut across the boundaries of the Microservices, which makes the end-to-end product difficult to manage, integrate, and test. It also depends on the feature team's size, as people often start with really small teams to keep investments low and to test whether the market sees value in what they are creating.
Speaking from practical experience, imagine that you are in the process of modernizing a big monolithic application that has a bunch of application modules and one large database where every kind of data related to that monolith is stored, with a number of data reports/analytics being run on that data. On the MS path, once you have defined the bounded contexts and redefined the schema for each microservice, it becomes a challenging additional task for developers/data engineers to run any reporting or analytics in a straightforward way, as it now requires first sourcing the required data from the different Microservices' databases, then transforming and relating it. Alternatively, you can think of building a dedicated Microservice to do that job. On the other hand, your data models might be defined w.r.t. an Enterprise Information Model, which could make it difficult to fragment them out while staying related to the existing information model. This is not only rework but also raises questions around data security and the access these MS give to various individuals, e.g. GDPR and personal-data processing. In a big monolithic application you can have a role-based or claims-based authorization model, but when you break down a monolith you have to carefully examine all of that and recreate such an authorization layer.
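The cross-service reporting problem above is often tackled with "API composition": a dedicated reporting component queries each microservice's API and joins the results in memory, instead of joining across databases. Here is a hedged sketch (the report, field names, and stub APIs are hypothetical; the stubs stand in for what would be real HTTP calls):

```python
def sales_by_region(orders_api, customers_api):
    """Join order totals with customer regions via the services' APIs."""
    totals: dict[str, float] = {}
    for order in orders_api():  # each order: {"customer_id", "amount"}
        region = customers_api(order["customer_id"])["region"]
        totals[region] = totals.get(region, 0.0) + order["amount"]
    return totals


# Stub APIs standing in for calls to the orders and customers microservices:
def orders():
    return [{"customer_id": "c1", "amount": 10.0},
            {"customer_id": "c2", "amount": 5.0},
            {"customer_id": "c1", "amount": 2.5}]


def customers(customer_id):
    return {"c1": {"region": "EU"}, "c2": {"region": "US"}}[customer_id]


report = sales_by_region(orders, customers)
```

Note the trade-off the text points at: this keeps database ownership intact, but the reporting component now pays in extra network calls and data transformation, which is why a dedicated reporting/analytics service (or data pipeline) is often worth building.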
Monitoring is challenging yet achievable with the right tools:
For some, this could be a challenging topic, but I would argue it's not if you have gained some experience building and running Microservices architecture in the cloud and using the right tools for this purpose.
In a monolithic application, you (usually) have everything in one orchestrated form, where it is easy to track and trace down any issue. However, when you have 20–50 different microservices, monitoring each process running under those MS becomes a challenging task. One can certainly rely on log analytics and log aggregators, and put ML-based detection on top of a well-defined error catalog. For example, Splunk is a big-data analytics platform which ingests, parses, and indexes various kinds of event logs, application logs, server logs, files, and network events, and allows you to apply ML to detect errors and send alerts based on the severity you have defined. Another great tool is Datadog, which provides monitoring as a service in the cloud; it is not limited to monitoring your services, applications, and event logs but also caters to performance monitoring (down to the individual user level, which is incredible) across the globe. It has features that can automatically detect and map data flows and other dependencies, with a service heat map in a nice hexagonal layout.
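One practical building block for tracing a request across 20–50 services is attaching a correlation ID to every log line, so an aggregator such as Splunk or Datadog can group everything that belongs to one request. A minimal sketch, assuming structured JSON logging (the field names here are my own, not a tool's required schema):

```python
import json
import uuid


def new_correlation_id() -> str:
    """Minted once at the edge (e.g. the API gateway), then propagated."""
    return str(uuid.uuid4())


def log_event(service: str, message: str, correlation_id: str) -> str:
    """Emit one structured log line, ready for ingestion by an aggregator."""
    return json.dumps({
        "service": service,
        "message": message,
        "correlation_id": correlation_id,
    })


# Two services handling the same request share one correlation ID:
cid = new_correlation_id()
line1 = log_event("orders", "order received", cid)
line2 = log_event("billing", "invoice created", cid)
```

In practice the ID travels in a request header (for example the W3C `traceparent` header used by distributed-tracing tools), so every downstream service logs the same value without coordination.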
End to end testing:
I think unit testing of individual microservices remains with the responsible feature team or developer(s), but when it comes to end-to-end integration testing, Microservices architecture can be challenging. Not only the functional and contract-based logic but also the network resources, hosts, workloads, application gateways, etc. need to be tested thoroughly. Again, this is achievable with a number of tools, but the effort depends on the size and number of your microservices: the larger they are and the more of them you have, the higher the complexity.
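One way teams keep the contract-based part of this tractable is consumer-driven contract testing: the consumer pins down the response shape it relies on, and the provider's build runs the same check. Below is a hedged, tool-free sketch (the schema and helper are hypothetical; real teams often use a dedicated tool such as Pact for this):

```python
# The consumer declares which fields and types it depends on:
EXPECTED_CONTRACT = {"customer_id": str, "email": str}


def satisfies_contract(response: dict, contract: dict) -> bool:
    """True if every contracted field is present with the expected type.

    Extra fields in the response are fine: providers may add data without
    breaking consumers, but they must not remove or retype contracted fields.
    """
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )


# Provider side: its real handler's response must keep honoring the contract.
provider_response = {"customer_id": "c1",
                     "email": "ada@example.com",
                     "internal_flag": True}   # extra field, allowed
```

Running this check in both pipelines catches contract breaks without spinning up the full mesh of services, leaving the heavier e2e suites for network, gateway, and workload concerns.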
An abundance of granular Microservices could become a burden in the longer run:
Well, this is quite self-explanatory by now: if the number of services increases, then complexity and manageability also become challenging. I also want to highlight the people aspect here. In the good old days we generally had a large development team working on a big monolith, and when the project was released to production, it was handed over to a small operations team to run and maintain. In the Microservices world, I think the best-suited model is DevOps (“you build it, you run it”; in a way, you own it), which means you need to have multiple long-standing teams.
Note: These are some of my thoughts and understanding around Microservices Architecture, please feel free to drop your inputs.