Mark Cheshire is the Senior Director of Application Services at Red Hat. Everybody who is on a journey can contribute to the hitchhiker's guide. Red Hat is a 100% open-source company: all the work that Red Hat engineers create goes back to the community, and it is very much in the spirit of the Hitchhiker's Guide to boost innovation by sharing all of our knowledge.
Elements of Modern Application Connectivity
Business needs driving demand for connectivity – Businesses derive value when cloud-native applications can be accessed and consumed. Business value only materializes when you start to connect applications; when applications sit in silos, their value is greatly diminished.
Application portability to support agility – Flexibility to move apps across hybrid and multi-cloud environments based on performance, resilience, or data requirements. Applications need to be more flexible, and application teams need to be able to respond to resilience needs. Perhaps for cost reasons you want to move from on-prem to the cloud, or from one cloud to another; it is important that you can put applications wherever you can use that capability most effectively.
Combining application and network concerns – Application connectivity to and from services, across and off clusters, combining network concerns with application or business-layer concerns. Organizations have been very siloed, with decisions made independently by the networking team or the application team. These organizations and teams need to work better together to make more effective decisions.
Connecting API and event endpoints – Application connectivity for microservices and event-driven architectures. Think more broadly: go beyond REST APIs to any type of endpoint that achieves integration, and especially consider a more event-driven approach to integration.
These are some of the key concerns that are driving a broader look at application connectivity.
Organizing for application connectivity
The first step in any change project is to think about the organizational aspects, and organizational aspects always come down to the team. Every organization has different roles, and it is important to balance the needs of those roles. When you look at the organization, consider the different layers of application connectivity: networking concerns, application or data concerns, and business concerns. Each layer has different stakeholders, so when you shape your approach to application connectivity, ensure that the stakeholders' needs and requirements are closely aligned.
This framework is a starting point for communication, and for a journey toward teams working together more effectively.
The first part of this framework is to look at criteria for evaluating different approaches to application connectivity. There are four key criteria –
- Accessibility – what are the right connectivity requirements between any two endpoints – internal, external, synchronous, asynchronous, etc.?
- Security – varies depending on which industry you are in and how stringent your security requirements are, but make sure you get it right from the beginning. It is very painful to realize after an initial implementation that you did not put the right security in place and then have to correct for it.
- Discoverability – One of the things this API community places great value on is developer experience. For a good developer experience, developers must be able to discover the right integration endpoint, for example through OpenAPI specifications that make the documentation easy to read. Consider how easily consumers should be able to discover your connectivity endpoints.
- Governability – is all about ensuring you have the right level of control. In a quick hello-world scenario it is easy to bypass or forget governability requirements, but in the real world you have to think about how to control the whole lifecycle of the applications you deploy and how to manage everything. Governability is critical in the real world.
Whenever you look at any connectivity requirement, looking at it through the lens of these four key criteria will give you a good basis to weigh up the right approach.
Application connectivity solution options
When it comes to application connectivity, there is no single approach that is clearly the only right way to go, so you need to think more broadly.
Network concerns – Here we will discuss the network and application layers, not the business layer. Think about ingress and egress for your application environment. The simplest requirement is allowing external traffic into your network to reach your application service by deploying simple ingress and egress routers. Now let's notch things up a bit and look at a more complex situation where you run your application in two different clusters, whether for resilience or cost-management reasons. Here you can run the same application environment in both clusters, put an external load balancer in front, and balance traffic between the two. Depending on the complexity, you can decide on the right solution.
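The two-cluster scenario above can be sketched as a trivial round-robin rotation over cluster endpoints, which is the core of what the external load balancer does. The endpoint URLs are hypothetical placeholders; a real load balancer adds health checks, weighting, and failover on top of this.

```python
import itertools

# The same application runs in two clusters; traffic rotates
# between them. URLs are illustrative, not real endpoints.
CLUSTER_ENDPOINTS = [
    "https://app.cluster-a.example.com",
    "https://app.cluster-b.example.com",
]

_rotation = itertools.cycle(CLUSTER_ENDPOINTS)

def next_endpoint() -> str:
    """Return the next cluster endpoint in round-robin order."""
    return next(_rotation)
```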
If you consider service-to-service communication, Kubernetes makes this easy within a cluster. You do not have to worry about IP addresses or domain names; you can reference the service name directly, and Kubernetes routes the traffic to an appropriate pod running the service (there can be multiple pods). Communicating across clusters is different: by default you would expose the endpoint on the public Internet, go out through an egress, and come in through an ingress on the second cluster. If you never want to expose those endpoints on the public Internet, there are options: tunnel the traffic through a VPN, put cluster-to-cluster bridges in place, or provide direct endpoint-to-endpoint peer connectivity.
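The addressing difference above can be shown in a small sketch. Inside a cluster, Kubernetes DNS gives every Service a predictable name of the form `<service>.<namespace>.svc.cluster.local`; across clusters, the caller must leave cluster DNS behind and use an externally reachable host. The service and domain names here are hypothetical.

```python
def in_cluster_url(service: str, namespace: str = "default",
                   port: int = 80) -> str:
    """URL a pod would use for a Service in the same cluster.

    Kubernetes DNS resolves <service>.<namespace>.svc.cluster.local
    to the Service, which load-balances across its pods.
    """
    return f"http://{service}.{namespace}.svc.cluster.local:{port}"

def cross_cluster_url(service: str, external_domain: str) -> str:
    """URL for the same service exposed outside its cluster.

    Crossing clusters means the target must be reachable on a
    routable host, via public ingress, a VPN tunnel, or a bridge.
    The external domain is an illustrative placeholder.
    """
    return f"https://{service}.{external_domain}"
```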
Application and data architecture concerns – Whether to go with REST APIs or event-driven is a big decision. We have seen huge uptake of REST APIs, and they are pervasive, but much of the growth is around event-driven architectures, which align more closely with businesses' need to respond in real time to what is happening in the real world. The next step is to think about what enablers you can put in place to make that connectivity easier. The third aspect is data: the format exposed by an integration endpoint may not match the needs of the service consuming it, so you may want to manipulate the data into the right format so it can be consumed easily.
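The data-shaping step described above can be sketched as a small transformation that sits between the integration endpoint and its consumer: renaming fields, merging fields, and converting units. All field names here are hypothetical.

```python
def to_consumer_format(upstream: dict) -> dict:
    """Reshape a hypothetical upstream payload into the consumer's schema."""
    return {
        # Rename a field to match the consumer's naming convention.
        "customerId": upstream["cust_no"],
        # Merge two upstream fields into one.
        "fullName": f"{upstream['first']} {upstream['last']}",
        # Convert units: dollars (float) to integer cents.
        "amountCents": int(round(upstream["amount"] * 100)),
    }
```

In practice this kind of mapping often lives in an integration layer or transformation tool rather than inside the consuming service itself, keeping both endpoints free to evolve their own schemas.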
Moving on to other tools that help with application and data connectivity: we have seen increased use of service mesh. Particularly within a cluster where you are doing service-to-service communication, a service mesh is a great way to meet governability and security requirements. But it is not always the right answer; that is a key takeaway when weighing API gateways against service mesh in very large organizations. Think about getting the right combination of API management and service mesh. As you expand these traffic-management capabilities, one thing we see is a proliferation of gateways and proxies on the data plane.
To conclude: to arrive at the right options for you, first look at the different criteria and define those that make sense for your business. Secondly, define your integration options, thinking about the three layers of network, application, and business. Thirdly, evaluate the right option for you.
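The three-step evaluation above can be sketched as a simple weighted scoring exercise over the four criteria. The weights and scores below are illustrative assumptions, not recommendations; the point is that making the criteria and weights explicit forces the stakeholder conversation the framework is meant to start.

```python
# Step 1: define the criteria and weight them for your business.
# These weights are hypothetical examples only.
CRITERIA_WEIGHTS = {
    "accessibility": 0.3,
    "security": 0.4,
    "discoverability": 0.1,
    "governability": 0.2,
}

def score(option_scores: dict[str, float]) -> float:
    """Weighted sum of an option's per-criterion scores (0-10 scale)."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in option_scores.items())

def best_option(options: dict[str, dict[str, float]]) -> str:
    """Step 3: return the option name with the highest weighted score."""
    return max(options, key=lambda name: score(options[name]))
```

Step 2 is supplying the candidate options, for example an API gateway versus a service mesh, each scored per criterion by the stakeholders from the networking, application, and business layers.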