Thursday, October 28, 2021

PRO TALK (MICROSERVICES): What are Federated Microservices?
Join on Hopin
Tyson Midboe
Britelite Immersive, Senior Product Engineer, Systems Architecture

Lecture + Demo

What are federated microservices? Federated microservices are independently deployable components of a federated application. In a federated application, components are loaded from multiple network locations and repositories at runtime. They are not developed by a single team or built from a single codebase. Multiple federated application components can run in a single application instance, and any of them can be redeployed at any time, without restarting the application or interrupting other components that happen to be running.

Why are federated microservices needed? Incidentally, the word “federation” accurately describes the structure of the development organization microservices enable: one made up of small, independent teams with internal autonomy. Let’s consider what provides this autonomy and what role distributed systems architecture, or simply “distribution,” plays. What is the purpose of distribution in a microservices architecture: scalability, polyglossia, or deployment independence? It enables all three. But which properties are indispensable to the kind of organization we want to support? If we conflate concurrency of users with concurrency of developers, we could say scalability is essential. But in the proper sense of the term, you can also scale a monolith. (Consider Facebook.) Polyglossia opens up the potential developer pool and improves language fitness for purpose. But what makes each team able to innovate at its own pace without depending on the activities of the others? Clearly, it’s deployment independence. So while distribution may make it easier to scale, scalability is not a reason for implementing microservices. You don’t abandon your monolith because it doesn’t scale, at least in the traditional sense of that word. You abandon it because a lack of modularity and proper organization has rendered the codebase brittle and difficult to change or augment as business requirements evolve or your organization grows.
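The runtime loading described above can be sketched with a dynamic `import()`. This is a minimal illustration of the idea, not the speaker's implementation; the remote URL, function name, and export shape are hypothetical assumptions.

```javascript
// Minimal sketch: loading a federated component at runtime.
// The remote URL and export shape below are illustrative assumptions.
async function loadComponent(remoteUrl) {
  // The module is fetched from a network location only when this runs,
  // so it can be redeployed without restarting the host application
  // or interrupting other components.
  const mod = await import(remoteUrl);
  return mod.default;
}

// Hypothetical usage: pull the current version of a remote service module.
// loadComponent('https://cdn.example.com/order-service/v2.mjs')
//   .then(createOrder => createOrder({ sku: 'ABC-123' }));
```

Because the host resolves the module at call time rather than at build time, swapping in a new deployment at the remote location takes effect on the next load, with no rebuild of the host.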
The main reason for distribution is deployment independence. Without it, teams have to coordinate their releases, and new features may have to wait. Deployment independence is what allows teams to work at their own pace of innovation and the business to evolve with speed and alacrity. (You get polyglossia for your trouble.)

The problem is that distribution increases complexity, and not insignificantly. While it ensures modularity, since every component is now isolated by a hard-and-fast boundary (the network), it makes component interactions harder to develop and the system as a whole harder to manage. What was previously an in-memory function call is now a network call, and an entirely new set of circumstances applies. The application will take longer to code and will be harder to test and deploy. Test and deployment automation are absolutely required. Performance will be slower, troubleshooting will be more difficult, et cetera.

All of this raises the barrier to entry for organizations looking to benefit, not from distribution, but from deployment independence. Up to this point, putting up with the “microservice premium,” as it’s known, was simply considered a trade-off, not a problem with a potential solution. (Although the solution is commonly mentioned in descriptions of the problem.) The result is a disproportionately high rate of failed implementations, missed opportunity for those forewarned of failure, and growing operational complexity. Consider the “death star” effect or the big ball of mud. Let’s see how federated applications might solve these problems with technologies such as self-deployment, distributed object caching, transparent integration, and module federation.
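Of the technologies listed, module federation is the most concrete to preview. Below is a sketch of a webpack 5 `ModuleFederationPlugin` configuration that exposes one module and consumes another team's remote at runtime; the names, paths, and URL are hypothetical placeholders, not anything from the talk.

```javascript
// webpack.config.js (sketch; all names and URLs are illustrative)
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'orders',               // this component's federated name
      filename: 'remoteEntry.js',   // manifest other apps fetch at runtime
      exposes: {
        // module this team publishes for other components to consume
        './OrderService': './src/order-service.js',
      },
      remotes: {
        // component pulled from another team's deployment at runtime
        inventory: 'inventory@https://inventory.example.com/remoteEntry.js',
      },
      // dependencies shared (deduplicated) across federated components
      shared: ['lodash'],
    }),
  ],
};
```

Each team builds and deploys its own remote independently; the host resolves `inventory/...` imports against whatever version is live at the remote URL, which is what makes redeployment possible without rebuilding or restarting the other components.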