API Security & Identity

Transforming Your Network to Secure, Control, and Observe APIs.


Ashish Kumar has been part of the tech industry for about 20 years. He works at Solo.io, a global company building solutions such as service mesh, API platforms, and related cloud-native technologies. In this article, he discusses transforming your network to secure, control, and observe APIs.

In a typical reference architecture, an API gateway sits at the front end or edge, accepting all incoming traffic as well as traffic from internal sources or applications. It provides policy and access control, along with capabilities such as transformation and routing.
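As a rough sketch of that routing and transformation role (not any particular vendor's gateway; the upstream hostnames and injected header are hypothetical), a minimal edge proxy in Go might look like this:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// newRoute proxies requests to one upstream service and applies a simple
// request transformation (here, injecting a header) before forwarding.
func newRoute(upstream string) http.Handler {
	target, err := url.Parse(upstream)
	if err != nil {
		log.Fatalf("bad upstream %q: %v", upstream, err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.Header.Set("X-Gateway", "edge") // example transformation
		proxy.ServeHTTP(w, r)
	})
}

func main() {
	// Hypothetical routing table: API paths mapped to internal services.
	http.Handle("/orders/", newRoute("http://orders.internal:8080"))
	http.Handle("/users/", newRoute("http://users.internal:8080"))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A production gateway layers policy control, rate limiting, and authentication on top of this same basic pattern.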

The services landscape is distributed, with some services running in the cloud as serverless functions, on virtual machines, or in containers. At the same time, you will have services running in your data centers, typically on virtual machines, though we have started to see much more containerization in that space as well. Recently, we have seen a proliferation of SaaS-based services, including identity platforms, logging and metrics, CRM, marketing, and data analytics; quite a number of these services are better utilized or consumed as SaaS. Last but not least, there is a data and queueing layer, usually made up of one or more large databases, queuing platforms, or message buses.

In the early days, this worked well and created early successes for many organizations looking to modernize using digital technologies. But in these types of architectures, we do not see the underlying nodes or assets that are also required to make this communication happen. The foremost of these is the firewall: to expose the APIs, firewalls need to be configured and managed, which is a time-consuming activity.

So, while we are seeing a SaaS-based posture for API gateways and other services that require any level of compute, there are additional pieces we sometimes overlook that become quite important for the day-to-day operation of those systems.

Let us look at a typical scenario of how a packet moves between services when internal applications try to talk to each other. Suppose a service running in the cloud needs to access a service running in a data center somewhere. It will talk to the API gateway, and the API gateway will talk to one or more identity providers or policy engines to perform an identity check. Once that check succeeds, the gateway forwards, or proxies, the message to the serving entity. This is called a hairpin architecture, and it does not work well if the services are right next to each other. The problem gets more complicated, painful, and expensive when those serving entities are your data layers.
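To make the hairpin concrete, here is a minimal Go sketch of the gateway's part of that flow: every request triggers an extra round trip to a (hypothetical) identity provider before the message is proxied to the serving entity, even when caller and callee are adjacent. The IdP endpoint and backend address are made up for illustration.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// checkIdentity calls a hypothetical identity provider / policy engine to
// validate the caller's credentials. In the hairpin pattern, every request
// pays for this extra hop through the gateway and the IdP.
func checkIdentity(authHeader string) bool {
	req, err := http.NewRequest(http.MethodPost, "https://idp.example.com/introspect", nil)
	if err != nil {
		return false
	}
	req.Header.Set("Authorization", authHeader)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// Hypothetical serving entity running in the data center.
	target, _ := url.Parse("http://service.dc.internal:8080")
	proxy := httputil.NewSingleHostReverseProxy(target)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if !checkIdentity(r.Header.Get("Authorization")) {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		proxy.ServeHTTP(w, r) // forward to the serving entity only after the check
	})
	log.Fatal(http.ListenAndServe(":8443", nil))
}
```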

From a developer’s point of view, they implement some business logic in the application they are building. That alone is a non-trivial activity because of the additional overhead of communicating with the API gateway. Once that is done, they implement some admin-related pieces, check in the code, and deploy the app. Then they build an API proxy, which gets deployed on the API gateway. The proxy has its own assets or modules, individual components that must be built and deployed separately. This can be an issue if we are using vendor components and the linkage between them is not clear.

Once that’s done, you will perform connectivity tests, assuming that some of the application-level system tests have been done by the CI/CD pipeline. Once all of that’s done, the new application and its API proxy are rolled out into production. This process takes a long time.

To reduce this time, we have taken the capabilities that the API gateway provides and deployed them closer to where our services are, which requires fewer network hops. Because it is a micro-gateway, it typically comes with a reduced feature set, which works perfectly for these use cases.

The deployment does not change much.

The next thing to try, in order to reduce the admin overhead, was a service mesh. As per its textbook definition, a service mesh is a programmable framework that lets you handle these administrative concerns without building them over and over again. It is a dedicated infrastructure layer that you can add to your applications, allowing you to transparently add capabilities like secure service-to-service communication, observability, traffic management, and policy-based access control without adding them to your code. Services communicate with each other through the mesh, and at the same time all incoming and outgoing traffic is controlled. That improves the security posture significantly and reduces the number of firewalls you need.
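To make the "without adding them to your code" point concrete: with a sidecar-based mesh such as Istio, the application below simply makes a plain HTTP call to a service name; mTLS, retries, access policy, and telemetry are handled by the injected proxy, so none of it appears in the code. This is only a sketch, and the "orders" service name is hypothetical.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// The sidecar proxy running next to this workload transparently upgrades the
// connection to mTLS, enforces access policy, retries failed calls, and
// records metrics and traces. The application code stays plain HTTP.
func main() {
	// "orders" is a hypothetical in-mesh service name resolved by the platform.
	resp, err := http.Get("http://orders:8080/api/v1/orders")
	if err != nil {
		log.Fatalf("call failed: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```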

Consider an application running on a container platform or a virtual machine, with bits and pieces of code dealing with communication and similar concerns. Engineers took those pieces and moved them into a separate library, but that led to a different library for each programming language, and upgrading every application whenever the library was updated was also an issue. So it was decided to decouple these capabilities from the application altogether and make them available to all the applications running on your infrastructure. This can work in parallel with the API gateway that is already being used. Continuing along this path, we also realized that the network is virtualized; it is a software-defined network with no real dependency on the physical infrastructure, so the services can be spread across multiple cloud providers.

When we have a mesh, we can add and remove applications without impacting the other applications too much.

So, the deployment flow for new applications has gone from a large number of complicated steps to the simple steps of building the application and exposing it externally through the gateway.

Ashish Kumar

Director - Field Engineering - APAC at Solo.io
Ashish has spent a large part of the last decade building APIs and designing go-to-market strategies for them. As part of the field engineering team at Solo.io, these days Ashish focuses on creating an application networking fabric that brings together APIs and service mesh with Istio.
