Sudhindra Balaji is the Global Head of Core Banking at Danske Bank. In this article, he discusses modernizing core banking platforms in a large bank, under the operating condition that nothing is shut down while the modernization is underway.
Triggers for modernization
- Organizations have built legacy portfolios over decades. These carry a large amount of accumulated technical debt, and the architecture has evolved as systems have grown and merged. This creates friction in how efficiently systems can work together.
- As systems become more complicated, maintaining them requires more effort and cost, and delivery times grow longer.
- If we don’t optimize and rationalize these application portfolios, it may cause increased operational risk, not only from a technology perspective but also from a people perspective.
- Technology moves extremely fast in the options and efficiency it offers. Falling behind is a problem: one day, nobody may be interested in working with the older technologies we have.
- Older technologies significantly restrict our ability to quickly react to what our customers and businesses expect from us, impacting the bottom line over a period of time.
Key focus areas for modernizing
The “Why”
Business drivers – being able to quickly, efficiently, and securely launch new features to our customers.
Technology drivers – pushing us to use standard industry technology while operating in our internal and external ecosystems.
People and competence drivers – pushing us to look for more sustainable technologies.
The “What” and “How”
We scoped to study the part of core banking that we should modernize. We looked at a risk value-based approach for workloads. The idea was not to take the entire core banking and modernize it but to look at what created the most value and carried the most operational risk. We also looked at different choices for modernization, e.g., SaaS, API enablement, and cloud adoption.
Most importantly, we anchored the modernization effort so that it produced continuous business value incrementally, rather than being a five-year big-bang delivery.
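The risk/value-based scoping above can be pictured as a simple scoring sketch. The workload names and the 1–5 scales below are illustrative assumptions, not Danske Bank's actual model:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    business_value: int    # 1 (low) to 5 (high) -- illustrative scale
    operational_risk: int  # 1 (low) to 5 (high) -- illustrative scale

def prioritize(workloads):
    """Rank workloads so the highest-value, highest-risk ones are modernized first."""
    return sorted(workloads,
                  key=lambda w: w.business_value + w.operational_risk,
                  reverse=True)

# Hypothetical candidate workloads within the core banking portfolio.
candidates = [
    Workload("statements", business_value=2, operational_risk=2),
    Workload("payments", business_value=5, operational_risk=4),
    Workload("accounts", business_value=5, operational_risk=5),
]
ranked = prioritize(candidates)
```

A ranking like this makes it explicit why only a slice of core banking is modernized first, rather than the entire portfolio at once.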
The “Done” State
The result was a new core banking application. We set the expectation that the legacy and modern systems would co-exist for some time. It was important to stop further technical debt from accruing, so we actively stopped developing on the legacy portfolio. We made APIs the default, de facto standard for application integration, treating APIs as an enabler throughout the modernization journey.
Design and implementation choices
For business applications that predominantly ran on the mainframe, a lot of logic and business rules not related to core banking had seeped in over time. So, we were conscious of cleaning up this code.
It was very important to ensure that we took the developers and business analysts working on the mainframe along with us on the modernization journey.
On the implementation approaches, we had a choice of public or private cloud. Either way, we had to prepare for hybrid co-existence on both the people and technology sides. Maintaining data integrity is vital as data co-exists on the mainframe and the modern platforms. Functional integrity must also be maintained so that functions are not duplicated across the two.
Domain-driven design was a key cornerstone for us to start looking at how we slice and dice core banking as a portfolio and how we should modernize in terms of separating workloads. It was important to take an outside-in view so that the APIs, microservices, and functionality produced are recognizable and consumable from a self-service perspective by our consumers outside. We built a team of cloud-native engineers to improve our technical competence. At the same time, we also brought our subject matter experts on Systems and Business functionality along the journey.
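As a sketch of the outside-in view, a bounded-context model can be kept separate from the legacy record layout through a small translation layer (an anti-corruption layer, in domain-driven design terms). All field names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Account:
    """Consumer-facing model from a hypothetical Accounts bounded context."""
    account_id: str
    currency: str
    balance: float

def from_legacy_record(record: dict) -> Account:
    """Anti-corruption layer: translate a hypothetical mainframe record
    (padded fields, amounts in minor units) into the domain model."""
    return Account(
        account_id=record["ACCT-NO"].strip(),
        currency=record["CCY-CD"],
        balance=int(record["BAL-AMT"]) / 100,  # legacy stores amounts in minor units
    )
```

Consumers only ever see the `Account` shape, so the APIs remain recognizable and self-service even while the mainframe record layout persists underneath.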
3-phase approach to API enabling and moving to the cloud
We used a three-phase approach to enabling APIs and moving the core banking platform to the cloud. Depending on the workload, some phases can be skipped.
Phase 1 – Introducing distributed read
We have the database and the applications on the mainframe, and the database, microservices, and applications on the cloud. There isn’t a direct one-to-one mapping between the legacy system and the cloud; data models and domains were identified, and a mapping element between legacy and cloud is important for the co-existence phase. At the end of this phase, we could offer APIs that our internal and external consumers could consume. These APIs operated on the new data model in the cloud, while the master was still on the mainframe. This enabled us to decouple integrations from the mainframe, so we could progressively stop accruing technical debt.
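Phase 1 can be sketched as a replicated read model: the mainframe remains the master, its change feed populates a cloud data store, and the read APIs serve only from the cloud copy. The names and structures below are illustrative assumptions:

```python
# Cloud-side read model (an in-memory dict stands in for the cloud data store).
cloud_read_model = {}

def apply_change(event):
    """Apply a change captured from the mainframe master to the cloud read model."""
    cloud_read_model[event["account_id"]] = event["snapshot"]

def get_account(account_id):
    """Read API: served entirely from the cloud, no mainframe call on the read path."""
    return cloud_read_model[account_id]

# A write still lands on the mainframe master; its change feed drives replication:
apply_change({"account_id": "DK-001",
              "snapshot": {"currency": "DKK", "balance": 250.0}})
```

Because consumers call `get_account` rather than the mainframe directly, integrations are decoupled even though the master data has not yet moved.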
Phase 2 – Strangle Mainframe Consumers
Master data handling is shifted to the cloud. This strangles integrations on the mainframe. We had to be conscious that we still had some consumers on the mainframe. This is the phase where we started gaining advantages from the efficiencies of microservices and the cloud.
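The strangler step can be sketched as a router that sends traffic for migrated domains to the cloud service while the remaining domains still reach the mainframe. The domain names are hypothetical:

```python
# Domains whose master data has already moved to the cloud (illustrative set).
MIGRATED_DOMAINS = {"accounts", "payments"}

def route(domain):
    """Strangler-style routing: migrated domains go to the cloud service;
    everything else still hits the mainframe."""
    return "cloud" if domain in MIGRATED_DOMAINS else "mainframe"
```

Growing `MIGRATED_DOMAINS` one domain at a time progressively strangles the mainframe integrations without a hard cutover.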
Phase 3 – No legacy dependencies
We cut down all dependencies from the mainframe legacy applications in this phase. We are not completely there yet.
These phases gave us internal RESTful APIs through which our teams could view and consume our data and services. The setup also enabled us to stream data and issue domain events.
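Issuing a domain event alongside the REST APIs can be sketched minimally as below; the event schema is an assumption for illustration, not the bank's actual format:

```python
import json
from datetime import datetime, timezone

def account_opened_event(account_id, currency):
    """Serialize a hypothetical 'AccountOpened' domain event for streaming
    to downstream consumers (e.g., over a message broker)."""
    return json.dumps({
        "type": "AccountOpened",
        "account_id": account_id,
        "currency": currency,
        "occurred_at": datetime.now(timezone.utc).isoformat(),
    })
```

Events like this let consumers react to state changes without polling the APIs, which is what makes the streaming setup possible.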
Key API standards in play
We always intended to ensure that the APIs were available on a standard API developer portal. Through domain-driven design, it was very important for us to make the APIs discoverable and their contents understandable in as self-service a manner as possible. We followed REST and gRPC standards when implementing the APIs, and we tried to keep API payloads simple. Maintaining a consistent consumer experience across the mainframe and the cloud was important to us.
Consistency in error handling was also important. Output formats were kept similar to the ones the consumers were familiar with.
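Consistent error handling can be sketched as a single error envelope returned by every API, whether the endpoint is backed by the mainframe or the cloud. The field layout here is an assumption:

```python
def error_response(status, code, message):
    """Uniform error envelope used by every endpoint, regardless of which
    backend (mainframe or cloud) actually served the request."""
    return {
        "status": status,
        "error": {"code": code, "message": message},
    }

# Example: a not-found error looks identical on both platforms.
not_found = error_response(404, "ACCOUNT_NOT_FOUND", "No account with that id")
```

Because the envelope is the same everywhere, consumers write one error-handling path instead of one per backend.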
Test automation and observability were key for us to ensure that we provided a resilient setup on the cloud.
Summary of Learnings
- Establishing drivers for modernization is fundamental for expectation management. We should not treat it as just a technology investment; we must ensure that business stakeholders come along and recognize the value of modernization.
- APIs are key to the modernization effort. Equally important is that all the backend data and functionality the APIs encapsulate are cleaned up.
- We should have a clear API adoption strategy. Producing the APIs is not enough; we must also socialize their adoption to grow consumption.
- Have a clear decommissioning strategy for legacy services.
- Focus on people’s competence and attitude toward modernization.
- Deliver business value in increments.