
Claudio Tag is the Chief Architect at IBM Automation in Asia Pacific and leads the technical sales team. In this article, he discusses the evolution of artificial intelligence, especially the introduction of generative AI.
Artificial intelligence (AI) is a term we’ve been using for the best part of 70 years, something that was theorized long ago. But during those seventy years, it has taken on different incarnations: from techniques created to mimic cognitive functions of the human mind, to what we now call machine learning, which uses inputs and mathematical models to predict patterns and future behaviors.
Deep learning was the first real implementation of trying to replicate how the human mind works, doing more advanced things like speech or image recognition.
Finally, two or three years ago, we started strongly introducing the idea of Generative AI. Generative AI is a game changer because, as the name says, it generates new content. It’s not about predicting, it’s not about mimicking; it’s about creating new content.
Suppose I’m running a project using APIs. The whole lifecycle of those APIs, from development through testing to production rollout and governance, can be made easier, more consistent, and more effective through the augmentation that artificial intelligence provides.
APIs to enable AI projects
We’re going to look at how organizations, enterprises, and businesses are adopting AI and how APIs can help, and we’re going to look at it from three different perspectives –
- How interacting with consumers to solve real business problems requires the use of APIs
- How we can use APIs to train and tune those AI models
- How APIs enable AI for the enterprise, whether from a consumption perspective, a large language model adoption perspective, or by combining the best forms of AI to accelerate projects
Let’s take a very common use case for AI implementation: a chatbot that allows customers of a retail organization to ask about the dispatch or delivery of a particular item they ordered. If I do that through a natural language processing interface alone, I will get a good conversation but not the key information. I need a way to connect to the back-end systems and extract the information the customer has requested. This typically happens through an API.
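The pattern can be sketched as follows. This is a minimal illustration, not a real integration: the order-status function, its fields, and the backing data are hypothetical stand-ins for the retailer’s real API, and the HTTP call is simulated so the example runs on its own.

```python
# Chatbot pattern: the NLP layer handles the conversation, while a plain
# API call fetches the real dispatch data the customer asked about.

def fetch_order_status(order_id: str) -> dict:
    """Stand-in for a real call such as GET /orders/{id}/status.
    In production this would use an HTTP client against the retailer's API."""
    fake_backend = {"A1001": {"status": "dispatched", "eta": "2024-06-03"}}
    return fake_backend.get(order_id, {"status": "unknown"})

def answer_customer(question: str, order_id: str) -> str:
    # A real NLP layer would extract order_id from the question; we pass it in.
    info = fetch_order_status(order_id)
    if info["status"] == "dispatched":
        return f"Your order {order_id} was dispatched and should arrive by {info['eta']}."
    return f"Sorry, I could not find order {order_id}."

print(answer_customer("Where is my parcel?", "A1001"))
```

The conversation layer and the data layer stay separate: swapping the fake backend for a live endpoint changes only `fetch_order_status`.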
AI models require access to training and tuning data. We have many generic large language models. For businesses to adopt these models, we must add context from within our organization. That context can be used to train the model so that its answers are more in tune with what the organization does.
The third angle, and probably the most important today in the way we use machine learning and large language models, is about enabling AI, through APIs, within existing business flows. So, let’s take another realistic use case of a business flow that uses AI. A customer complaint gets raised in a CRM system like Salesforce. We want to use a traditional machine learning model or a large language model to analyze that complaint and, based on the result of that analysis, act upon it. Based on the information in the complaint, I want to classify whether it requires a specific answer, can be handled by a generic department, or needs to be sent to specialist support, and even how urgently it needs a response. I also want to extract the key information for the operator or complaint manager, so they don’t have to scroll through a raft of documents or a very lengthy piece of text to find it.
Machine learning and large language models can produce much of this for us. Typically, these models sit with a public cloud provider and must be consumed from within the organization. That means those models are typically exposed through an API, and that API is secured with a key. There is often some monetization around it, which means we have to provide credit card details to consume that API, and so on.
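The complaint-triage flow might look like the sketch below. The endpoint, key, routing queues, and classification labels are all hypothetical, and the HTTP call to the hosted model is simulated by a keyword check so the example is self-contained.

```python
# Complaint triage: send the CRM complaint text to a hosted model API
# (secured with a key), then route the ticket based on the classification.

def call_model_api(text: str, api_key: str) -> dict:
    # Real code would POST to the provider with the key in a header, e.g.
    #   requests.post(url, headers={"Authorization": f"Bearer {api_key}"}, ...)
    # Here a keyword check simulates the model's classification.
    urgent = any(w in text.lower() for w in ("urgent", "immediately", "broken"))
    return {"category": "specialist" if urgent else "generic",
            "summary": text[:60]}  # key information for the complaint manager

def route_complaint(text: str) -> str:
    result = call_model_api(text, api_key="sk-demo")  # metered, monetized key
    return "specialist-support" if result["category"] == "specialist" else "general-desk"

print(route_complaint("My device arrived broken, please replace it immediately"))
```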
Rather than using API management to expose your organization’s APIs out of the enterprise for external consumption, you use it to consume external large language model APIs from within the organization. Architecturally it is the same concept, but implemented in a way specific to AI model consumption.
This gives the organization several things: a single payment plan, so you can optimize the cost of consuming AI APIs within the organization; a single catalog or API developer portal for self-service discovery, ensuring that users within your organization can only access the APIs you have vetted; and the ability to regulate, control, and govern the internal consumption of those external APIs.
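The inbound-gateway idea can be sketched as a thin wrapper that only lets vetted teams call approved external model APIs and records usage for cost reporting. The catalog entries and team names are illustrative, and a real gateway would proxy the request rather than return a placeholder string.

```python
# A minimal API-management gateway for external model APIs: vetted access
# plus usage tracking for cost optimization.
from collections import Counter

class ModelGateway:
    def __init__(self, catalog: dict[str, set[str]]):
        self.catalog = catalog   # API name -> teams vetted to use it
        self.usage = Counter()   # (team, API) -> call count, for cost reports

    def call(self, team: str, api: str, payload: str) -> str:
        if team not in self.catalog.get(api, set()):
            raise PermissionError(f"{team} is not vetted for {api}")
        self.usage[(team, api)] += 1
        return f"forwarded {payload!r} to {api}"  # a real gateway proxies this

gw = ModelGateway({"summarizer-llm": {"support", "marketing"}})
print(gw.call("support", "summarizer-llm", "summarize this ticket"))
print(gw.usage[("support", "summarizer-llm")])
```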
An organization may want to use multiple features of those machine learning and large language models through multiple APIs, potentially even pulling in multiple models and integrating them to get the best possible answer. This is called AI API orchestration. GraphQL technology lets you query the models and ask for exactly what you want from them; even a complex query that requires orchestration across multiple models can return the answer that makes the most sense. GraphQL is not the only way of doing this, but it is a very easy one.
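A single GraphQL query for such an orchestration might look like the one built below. The schema and its fields are hypothetical; the point is that one query can fan out to several model-backed fields behind the gateway.

```python
# Build a GraphQL query that asks several model-backed fields at once;
# the orchestration layer fans the request out to the underlying models.

def build_orchestration_query(complaint_id: str) -> str:
    return f'''
    query {{
      complaint(id: "{complaint_id}") {{
        sentiment          # served by a classic ML classifier
        summary            # served by a large language model
        suggestedReply     # served by a generative model
      }}
    }}'''

query = build_orchestration_query("C-42")
print(query)
```

In production this query string would be POSTed to the GraphQL endpoint; the caller never needs to know which model serves which field.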
Empowering API users through the application of AI
The application of AI when creating and managing APIs can substantially improve users’ productivity and efficiency. Not everything is solved by large language models, and not everything needs to be generative. This spans a whole spectrum of API users, from traditional API developers to traditional integration experts, modern site reliability engineers, and even what we call business technologists. These are business people who understand technology and want to use low-code and no-code tooling to create and manage APIs.
Primary Use Cases – AI empowers APIs
The three main areas are simplifying development, accelerating testing, and safeguarding production.
Simplifying development
AI-generated flow creation – API flows can be created using natural language.
Mapping Assist – Help map different data models to each other.
Data Assist – Generate complex transformation functions with no coding.
LLM-based API spec creation – Generate OpenAPI specifications from legacy code using LLMs.
LLM-based back-end creation – Generate back-end code from OpenAPI specifications using LLMs.
AsyncAPI spec creation – Generate AsyncAPI specs from Kafka and other asynchronous endpoints.
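To make the spec-from-legacy-code idea concrete, here is a toy sketch. The article describes an LLM doing this work; here deterministic introspection stands in for it, producing a minimal OpenAPI path entry from a legacy Python function’s signature. The function and mapping rules are illustrative.

```python
# Derive a minimal OpenAPI path entry from a legacy function signature,
# standing in for what an LLM would infer from legacy code.
import inspect

def legacy_get_order(order_id: str, verbose: bool = False):
    """Return the order record for the given id."""

def to_openapi_path(func) -> dict:
    params = [
        {"name": name,
         "in": "query",
         "required": p.default is inspect.Parameter.empty,
         "schema": {"type": "boolean" if p.annotation is bool else "string"}}
        for name, p in inspect.signature(func).parameters.items()
    ]
    return {f"/{func.__name__}": {"get": {
        "summary": inspect.getdoc(func),
        "parameters": params,
        "responses": {"200": {"description": "OK"}},
    }}}

print(to_openapi_path(legacy_get_order))
```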
Accelerating testing
Automated Test Creation – Auto-generate an API test framework from the API specification.
Automated Test Case Creation – Auto-generate test cases to probe your API’s behavior.
Dynamic API Security – Detect vulnerabilities, misconfigurations, and malicious actors using ML.
AI-insight-driven test generation – Identify gaps in synthetic test coverage compared to real-world invocation patterns.
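Automated test-case creation from a specification can be sketched as below: walk a (heavily simplified) OpenAPI document and emit one probe per path and method. The spec fragment and case shape are illustrative.

```python
# Generate skeleton API test cases by walking an OpenAPI-style document.

def generate_test_cases(spec: dict) -> list[dict]:
    cases = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            cases.append({
                "name": f"test_{method}_{path.strip('/').replace('/', '_')}",
                "method": method.upper(),
                "path": path,
                # probe the happy path: the lowest documented status code
                "expect": min(int(c) for c in op.get("responses", {"200": {}})),
            })
    return cases

spec = {"paths": {"/orders/{id}": {"get": {"responses": {"200": {}, "404": {}}}}}}
for case in generate_test_cases(spec):
    print(case)
```

A real tool would also generate request bodies and negative cases; the value is that every documented operation gets at least one probe automatically.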
Safeguarding Production
One enormous realm for APIs in production is API security. This involves enforcing rules for API protection, defending against denial-of-service attacks, and so on, and machine learning and generative AI help with this.
Dynamic API security involves detecting vulnerabilities, misconfigurations, and potentially malicious users through continuous reassessment and analysis of their behavior. This is a problem beyond human scale, and solving it requires machine learning and AI.
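A tiny sketch of that continuous reassessment: score each consumer’s request rate against the population and flag outliers. Real products use far richer ML features; a robust modified z-score keeps this example self-contained, and the traffic figures are invented.

```python
# Flag API consumers whose request rate is anomalous versus the population,
# using a median/MAD-based modified z-score (robust to the outliers we seek).
import statistics

def flag_anomalies(requests_per_min: dict[str, float],
                   threshold: float = 3.5) -> list[str]:
    rates = list(requests_per_min.values())
    median = statistics.median(rates)
    mad = statistics.median(abs(r - median) for r in rates)
    if mad == 0:
        return []  # all consumers behave identically; nothing to flag
    return [client for client, r in requests_per_min.items()
            if 0.6745 * (r - median) / mad > threshold]

traffic = {"app-a": 40, "app-b": 38, "app-c": 41, "scraper-x": 900}
print(flag_anomalies(traffic))  # the scraper stands out
```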
To summarize, Gartner says that AI and APIs are intrinsically intertwined. They are not two things you can think of separately. This means understanding how APIs are used within AI projects to deliver business value to the enterprise, and how you can use AI and machine learning to make developing, testing, and managing your APIs in production much easier.