Have you ever wondered how and from where Microservices came into existence?
Since the early days of IT, several interesting developments, along with new learnings, have led to the birth of Microservices as we know them today. To learn more, read on.
There is no single reason why Microservices are relevant in today’s world. In fact, there are six fundamental forces which jointly gave birth to the Microservices way of designing systems.
i. Way of Thinking – It’s the way our design thinking has evolved over the years.
ii. Way of Doing – It’s the way software has been built over the years.
iii. Way of Accessing – It’s the way data has been accessed over the years.
iv. Way of Improving Efficiency – It’s how efficiency improvements have taken place over the years.
v. Way of Designing Systems – It’s how fundamental design principles have evolved over time.
vi. Way of Inspiration – It’s the open innovation and abundant inspiration that have been available since the early days of the Internet.
Let’s go over each of these.
i. Way of Thinking
Since the 1970s and 1980s, Object-Oriented Thinking has been very prominent and successful in designing enterprise systems.
In the early 2000s, with the advent of Domain-Driven Design (DDD), things changed in terms of representing real-world entities (e.g. customer, order, invoice) in code, making it more reflective of the way enterprises actually function. It can also be seen as divide-and-rule: grouping business functionality around real-world entities (e.g. Customer Service, Order Service) into units that are fully self-sufficient and have clearer functional boundaries.
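The "self-sufficient unit with a clear functional boundary" idea can be sketched in a few lines of Python. This is a hypothetical illustration, not from any particular DDD codebase: the `Order` entity and `OrderService` names are invented here, and the point is simply that all order-related data and behaviour live behind one boundary.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an Order entity plus a service that fully owns it.
# Other parts of the system (say, a Customer Service) would interact with
# orders only through OrderService methods, never by touching its data.

@dataclass
class Order:
    order_id: str
    customer_id: str
    lines: list = field(default_factory=list)  # (product, quantity) pairs
    status: str = "OPEN"

class OrderService:
    """Self-sufficient service with a clear boundary around orders."""

    def __init__(self):
        self._orders = {}  # the service owns its own data store

    def place_order(self, order_id, customer_id):
        order = Order(order_id, customer_id)
        self._orders[order_id] = order
        return order

    def add_line(self, order_id, product, quantity):
        self._orders[order_id].lines.append((product, quantity))

    def confirm(self, order_id):
        self._orders[order_id].status = "CONFIRMED"
        return self._orders[order_id].status
```

Because the service owns both the entity and its storage, its functional boundary stays clear: nothing outside it needs to know how orders are represented.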
ii. Way of Doing
Back in the 1990s, software was built using classic methods (e.g. waterfall and other sequential models).
Around 2000, the idea of iterative delivery came into existence, initially targeted mostly at large-scale projects. Here, software was built through repeated cycles (iterative) and in smaller portions (incremental) until the final result met expectations.
In the late 1990s and early 2000s, the idea of Extreme Programming (XP) came to light, calling for several builds and deployments to happen every day and setting the premise for Continuous Integration (CI). With CI, several branches of code are merged back into a single baseline on a frequent basis (e.g. at the end of every day), enabling automated builds and deployments of the code to happen more frequently (e.g. daily).
In the early 2010s, the premise of Continuous Delivery (CD) kicked in, largely driven by the growth and success of DevOps practices. CD is an approach with which software can be released at almost any time. Releasing software all the way to production requires a great deal of automation: automated provisioning of environments, automated testing, automated builds, code profiling and automated deployments across every environment up to production.
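The stages listed above can be sketched as a toy pipeline. This is a hypothetical model, not any real CD tool: each stage (build, test, provision, deploy) is a plain function, and a release proceeds only while every stage succeeds. A real pipeline would shell out to infrastructure and build tooling rather than return booleans.

```python
# Hypothetical CD pipeline sketch: the release is promoted through every
# environment up to production, stopping at the first failed stage.

def run_build():
    print("compiling and packaging")
    return True

def run_tests():
    print("running automated tests")
    return True

def provision_environment(env):
    print(f"provisioning {env}")
    return True

def deploy(env):
    print(f"deploying to {env}")
    return True

def release(environments=("dev", "qa", "staging", "production")):
    """Promote one build through all environments, gated at each stage."""
    if not (run_build() and run_tests()):
        return "FAILED"
    for env in environments:
        if not (provision_environment(env) and deploy(env)):
            return f"FAILED at {env}"
    return "RELEASED"
```

The gating is the essential point: because every stage is automated and must pass before the next runs, the software is kept in a releasable state at all times.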
iii. Way of Accessing
During the early days of networked computing, the Internet Protocol (IP) formed the fundamental basis of how data is sent over the Internet. Later, in 1990, the World Wide Web (WWW) was released at CERN, leveraging IP (version 4) along with the (then undocumented) HTTP protocol.
With the growth of web-based applications, the W3C (World Wide Web Consortium) was formed in the mid-1990s; it now acts as the international body for defining and maintaining web standards.
In the late 1990s, the HTTP/1.1 protocol was formally documented and released. In a short time it was supported by several popular web browsers of the day (Netscape, Internet Explorer etc.).
In 2015, the HTTP/2 protocol was released, majorly focused on improving page-load times. By the end of 2015, all the popular web browsers supported it.
iv. Way of Improving Efficiency
Hypervisors – pieces of software, hardware or firmware that create and run virtual machines – have a strong mainframe origin: as early as the 1970s, mainframes already contained full support for virtualization.
Around 2000, Desktop Virtualization became very prominent, enabling the separation of the client and server components involved in an operating system.
In the mid-2000s, Virtual Machines became prominent, forming a basis for dismantling and re-creating any software environment at any time, because the entire state and data of a Virtual Machine are always stored on disk. A Virtual Machine is, in a way, an isolated duplicate of a real machine.
In the early 2010s, Cloud Computing became very relevant and prominent due to heavy research and marketing efforts by leaders like Amazon and Microsoft. Cloud Computing enables an on-demand, shared-processing model for applications.
v. Way of Designing Systems
In the early 1970s, IT systems were analyzed using Structured Design principles, where each system is examined along four factors: the Input it receives, the Controls and Mechanisms influencing it, and the Output it generates. This helped capture user requirements properly before designing a system.
In the 1980s, Object-Oriented Design (OOD) became very prominent due to its advantages in scalability and ease of management over legacy systems designed using Structured Design principles. It encapsulates data and methods together into an ‘object’; OOD is all about designing a system of interacting objects to solve a problem.
Towards 2000, as distributed computing took a new leap, Service-Oriented Architecture (SOA) kicked in, paving the way for applications to create and publish services that other applications can discover and consume. A service logically represents a business functionality with a specified outcome.
In the early 2010s, with the advent of mobile computing, Web-Oriented Architecture (WOA) became the de-facto standard for web-based solutions (online banking, social networking, e-commerce etc.). WOA is an extension of SOA into the web.
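The SOA publish-and-discover idea can be sketched in miniature. This is a hypothetical toy, not a real SOA stack: the `ServiceRegistry` class and the `quote.shipping` service name are invented here. The point is that the consumer knows only the service’s name and expected outcome, never its implementation or provider.

```python
# Hypothetical sketch of SOA: a provider publishes a named service into a
# registry; a consumer discovers it by name and invokes it by contract.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name, func):
        self._services[name] = func

    def discover(self, name):
        return self._services[name]

# Provider application publishes a business function as a service.
registry = ServiceRegistry()
registry.publish("quote.shipping",
                 lambda weight_kg: round(4.0 + 1.5 * weight_kg, 2))

# Consumer application discovers and consumes it, without knowing
# (or caring) which application provides it.
quote = registry.discover("quote.shipping")
print(quote(2))  # cost of shipping a 2 kg parcel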
vi. Way of Inspiration
Over the past few years, several big enterprises have opened up their internal software engineering practices and tools to open communities, inspiring other enterprises to think in this direction. For example:
- It takes less than 30 minutes for LinkedIn to move a feature from development to production; they push new changes to production almost 25-30 times every day.
- For some of the systems at Google, it takes less than 8 minutes to promote a feature from development to production.
- At Netflix, more than 1,000 metrics are leveraged to determine code quality and confidence levels before code is pushed to production. All this is possible only with the highest degree of automation!