API Lifecycle Management

Migrating to microservices to transform user experience and build resilience


This article discusses migrating from a monolith to microservices, transforming the user experience, and building resilience: a brief overview of The Economist Intelligence journey.

Microservices and resilience have been the talk of the town, as companies worldwide adopt this mode of reusability: transforming the user experience, building resilient engineering teams, and delivering products that respond to clients' needs.

Using APIs allows business and engineering to explore avenues for sustainable growth and expansion. At The Economist Intelligence, we are already on this transformative journey, having taken giant strides in this direction. For us, it changes the way clients access our content, through products that leverage sustainable enhancements in technology, backed by strong tech and product leadership and brought to life by our fantastic engineers, our "game-changers".

The Economist Intelligence generates a huge amount of content in verticals such as country, industry, and risk analysis. These are our pillars: in this ever-changing world, we inform the decision making and thought processes of industries, organisations, and decision-making bodies in the countries where we have clients.

When we talk of transformation, it is important to quantify what we are transforming. We cover more than 200 countries and 350 indices, with around 250 analysts shaping numerous products that serve over 100 clients, ranging from fewer than 10 to over 10,000 users per client. At this scale, maintaining observability and ensuring high availability would be a near-impossible challenge with a monolithic application.

The Economist Intelligence Viewpoint is a journey of transformation: not just a technical turnaround but a client-driven, team-focused, and delivery-oriented mindset that helped us bring our 'A' game to the fore.

From a product vision standpoint, it was mission critical to orient our business goals to clients' current and potential future needs. The change in our engineering approach was built around new technologies and new ways of working. Adopting many new technologies meant that while improving the user experience of our products, we were also improving the experience of internal clients through changes to internal publishing systems and workflows.

Putting our legacy application into context, the first problem was its monolithic architecture: a Classic ASP.NET application stitched to a SQL database. That imposed numerous limitations on extensibility, scalability, cross-pollination of content, and cross-platform content provision, and it meant our internal teams had a restricted scope for innovation.

From a technical perspective, we always have to be ready to serve new content and enable our editorial team to surface it, for example by building customised user journeys.

With the problem clear and a vision in place, the first step was to decompose the application. This is a critical starting step because, in a monolithic architecture, decomposition can be game-changing or fatal. To do it, we had to get down to the grassroots and understand what was in place.

The next step was building a parallel, decoupled system pointing at the same database. This allowed the older system to work as is, while the new system consumed raw data from the database without touching anything coupled to the legacy backend-integrated user interface. It enabled a seamless end-user migration to the new platform once it was ready.
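This parallel-system approach resembles the strangler-fig pattern. As a minimal sketch (the path prefixes and backend names here are purely illustrative, not our actual routes), a router can steer migrated paths to the new platform while everything else still reaches the legacy monolith:

```typescript
// Illustrative strangler-fig style routing: prefixes already served by the
// new decoupled platform are listed explicitly; all other paths still go to
// the legacy monolith. The prefixes below are hypothetical examples.
const MIGRATED_PREFIXES = ["/api/countries", "/api/industries"];

function resolveBackend(path: string): "new-platform" | "legacy" {
  // Route to the new platform only when the path falls under a migrated prefix.
  return MIGRATED_PREFIXES.some((prefix) => path.startsWith(prefix))
    ? "new-platform"
    : "legacy";
}
```

As more of the application is decomposed, prefixes move into the migrated list until the legacy backend can be retired.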

The new system contained Extract-Transform-Load (ETL) mechanisms, decoupled storage, a new search engine, and a completely decoupled UI, which meant the entire workflow consisted of individually built, monitored, and maintained units.
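As a rough sketch of the transform step in such an ETL flow (the row and document shapes here are assumptions for illustration, not our actual schema), raw database rows can be mapped into documents ready for indexing in a search engine:

```typescript
// Hypothetical shapes: a raw relational row and the search document built from it.
interface RawRow {
  country: string;
  indicator: string;
  year: number;
  value: number;
}

interface SearchDocument {
  id: string; // deterministic key so re-runs upsert rather than duplicate
  country: string;
  indicator: string;
  year: number;
  value: number;
}

// Transform step: derive a stable document id from the row's natural key.
function transform(rows: RawRow[]): SearchDocument[] {
  return rows.map((row) => ({
    id: `${row.country}:${row.indicator}:${row.year}`.toLowerCase(),
    ...row,
  }));
}
```

A deterministic id is the design choice that makes the load step idempotent: re-running the pipeline overwrites documents instead of duplicating them.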

This meant that if we now wanted to refactor any part of the application, we could easily identify what else would be affected. Any fragmentation in the database would have limited points of impact on the application, given the well-defined connectivity between application components.

Serving application content through APIs meant we could make our user interface more modular and reusable and surface new content more quickly. It also meant clients could integrate content directly into their models, feeds, and applications, with Viewpoint compatible across different platforms.

Transformation is not just restructuring the applications; it is also ensuring that the technology you use is fit for the future. We used React and .NET Core to build a modular UI and the framework for our APIs, and Amazon Web Services (AWS) as our DevOps platform. We built a strong monitoring and observability platform using Logz.io and New Relic, in addition to CloudWatch alarms that poll our systems and act as our eyes on any potential issue that might creep in.

REST APIs give our clients access to all of our analysis, which they can incorporate into forecast models, dashboards, and many broader use cases.

We handle authentication and authorization using Amazon Cognito, which manages user access through identity and access management. We use Amazon API Gateway, with analytics and logs captured via CloudWatch, which we can ship back to the developers and clients who use the product.
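From a client's perspective, a call through such a gateway typically carries a Cognito-issued JWT in an Authorization header. A minimal sketch, assuming a standard bearer-token setup (the endpoint URL below is a placeholder, not a real endpoint):

```typescript
// Build the headers for a gateway-fronted request. Assumes a standard
// Bearer-token scheme, as commonly used with Cognito-issued JWTs.
function buildAuthHeaders(accessToken: string): Record<string, string> {
  return {
    Authorization: `Bearer ${accessToken}`,
    Accept: "application/json",
  };
}

// Hypothetical usage (URL is a placeholder):
// const res = await fetch("https://api.example.com/v1/analysis", {
//   headers: buildAuthHeaders(token),
// });
```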

Behind the API gateway sit supporting technologies such as Elasticsearch-backed data APIs, built for our internal use, that serve multiple platforms and multiple products across the Viewpoint application.

Content curation is dependent on client entitlements. It’s very important to understand that not every client would need everything. A client looking for changes in the automotive industry in India might not want to have the same subscription as a user looking for election updates in the United States.

So we built a complex entitlement system. It is complex because it is a big network of ways users can gain individual access to components, but it was simplified from a developer experience perspective by allowing users to add or drop that access at any time during their product subscription.

On the back of this API-driven mechanism, with dedicated entitlement endpoints, we moved the bulk of subscription migration and handling out of manual work. We were able to onboard clients and sell services independently: instead of selling one big cake, we gave clients the option of a slice sized to their appetite.
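At its core, an entitlement check of this kind reduces to set membership per user. A simplified sketch (the content-key format and the grant/revoke helpers are hypothetical, not our actual API):

```typescript
// Entitlements as per-user sets of content keys, e.g. "industry:automotive:india".
type Entitlements = Map<string, Set<string>>;

// Grant a user access to a piece of content (idempotent).
function grant(store: Entitlements, userId: string, contentKey: string): void {
  if (!store.has(userId)) store.set(userId, new Set());
  store.get(userId)!.add(contentKey);
}

// Drop access at any time during the subscription.
function revoke(store: Entitlements, userId: string, contentKey: string): void {
  store.get(userId)?.delete(contentKey);
}

// The check an API performs before serving content.
function canAccess(store: Entitlements, userId: string, contentKey: string): boolean {
  return store.get(userId)?.has(contentKey) ?? false;
}
```

The real system layers products, packages, and user groups on top, but exposing grant/revoke as API endpoints is what took subscription handling out of manual work.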

Earlier, we served clients based on what we supported; now, we serve what they require and align it to industry standards.

What we achieved

    • Availability and dependability – While supporting what our clients needed, meeting business requirements, and adding new products, we also built a technology stack for the future. The platform became much more available and dependable, with applications and APIs now monitored by multiple automated systems such as New Relic, Logz.io, and AWS CloudWatch alarms.
    • Scalability – The solution is now extensible enough to deliver new features, and the system scales as we grow in product offerings and active users.
    • Flexibility – Users and developers now have options in how they adopt our APIs. Consistent API behaviour is essential here: if the APIs are not consistent, clients' dashboards and forecasting models suffer.
    • Quality – We ensured the testability and consistent conformance of the APIs' behaviour.
    • Reduced time to market – Optimising development effort reduced our time to market.
    • Performance – New Relic proved a great platform for setting up measurements, letting us trace the performance of the APIs and the different transactions and dependencies involved. Applications were also measured on their Apdex score.
    • Responsiveness – From a product and editorial standpoint, we achieved high responsiveness to global events and greater scope for innovation in surfacing content.
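The Apdex score mentioned above has a simple definition: given a target response time T, requests at or under T count as satisfied, those under 4T as tolerating, and the rest as frustrated; the score is (satisfied + tolerating/2) / total. A small sketch (the sample durations are made up):

```typescript
// Compute an Apdex score from response durations (ms) and a target threshold.
function apdex(durationsMs: number[], targetMs: number): number {
  const satisfied = durationsMs.filter((d) => d <= targetMs).length;
  const tolerating = durationsMs.filter(
    (d) => d > targetMs && d <= 4 * targetMs
  ).length;
  return (satisfied + tolerating / 2) / durationsMs.length;
}

// Example with made-up numbers: two satisfied, one tolerating, one frustrated.
// apdex([100, 200, 900, 5000], 250) === (2 + 0.5) / 4 === 0.625
```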

Resilience starts from within

One of the critical changes we underwent was a resilient mindset and an agile shift in ways of working. In addition to all the technology enhancements, these were the foundations of how we began the turnaround:

      • The fundamentals of a team charter that aligned us on:
        ○ Vision and value proposition
        ○ Team processes, embedding best practices around development, quality assurance, and release management
        ○ Communication, coordination, and collaboration
        ○ Authority and accountability
        ○ Ownership
      • Thinking for the future and making sustainable technical decisions
        ○ Running continuous elaborations and evaluations in which every engineer has an equal voice, to ensure they are heard and valued
        ○ RFC documents for complex use cases and key medium- and long-term technical decisions

Key takeaways

  • An efficient architecture lays a foundation for all the components of web development and delivery.
  • Tools like New Relic make monitoring efficient, and a resilient microservices architecture reduces the risk of failure.
  • While you are transforming, it is critical to keep an eye on the risks as well as the rewards.
  • An API-first approach can have a powerful effect on how reliably internal and external clients interact with your services.

So one of the key learnings for us was to design and build your API first, and to learn from those who have already done so when shaping your strategy. It will save you time, reduce spending, and boost performance. A resilient mindset and architecture is the way to the future.

Shourya Ranka

Engineering Manager at The Economist Intelligence Unit
With over 8 years of experience in front-end engineering, I have a track record of delivering interactive, responsive, quality-driven user interfaces. I have a background in Computer Science Engineering and expertise in HTML, CSS, and JavaScript, along with their libraries and frameworks. Having delivered UIs for numerous projects across technology stacks such as AEM, .NET, and Node.js, I keep an eye on application architecture and how it is influenced by the user interface and experience. My thought process is framework agnostic: I believe the beauty of an application lies in how it responds to functional and non-functional requirements, which determines the choice of technology. I believe software is about teams and processes, which is where the future of technology and delivery lies. Product is about what it solves; delivery is about how it responds, which is driven by how teams work together and how cohesive and responsive their processes are. I have experience leading engineering teams that deliver large-scale projects, with new features and iterative enhancements released continuously in an agile setup. As a manager, I believe in people over software, and the growth of my team is a strong indicator of my potential and progress as a leader.
