API Security & Identity

Cyber-AI in APIs


What is Cyber-AI? 

Cybersecurity is very topical; it is front and centre for a lot of businesses. AI, meanwhile, has become increasingly prevalent both in technology and in how business is done generally. Combining the two and applying them to APIs would bring great benefit to digital service delivery. This article briefly explores the value, the advantage and the potential differentiator in applying Cyber-AI to APIs. The opinion is informed by four years of providing API security solutions at Aiculus. Cyber-AI is relatively new and niche, something that others might call cutting edge, so although this article is mostly based on my experience as CEO of Aiculus, it also looks ahead to what I think is going to happen.

The first thing I'd like to put out there is that, as of 2019, 83% of web traffic was API traffic, and at the time API adoption was predicted to increase fourfold by 2020. API adoption has kept growing over the years, and that growth is now shaping the transformation of legacy systems to give them API connectivity. This leads to initiatives like the one the National Australia Bank announced in late 2020, where it was looking to build over 1,000 APIs, some internal and some external. Again, this reinforces the value of APIs and what businesses are doing to transform their platforms into API-based technologies.

Unfortunately, as APIs are adopted rapidly and platforms are aggressively transformed to be interconnected, there will be breaches and security incidents, even within very big companies. This is not necessarily due to a lack of security; it is the speed of the transformation, where a whole lot of systems are suddenly made reachable from the outside through APIs, and the sheer volume of services that get exposed inadvertently along the way. In a transformation project of that scale, some things are bound to go uncovered. So cybersecurity immediately becomes very relevant in that rapid API expansion.

Why Cyber-AI?

So why Cyber-AI in APIs? First, Open Banking: this is a key part of the API ecosystem and an area that will benefit from the power of Cyber-AI to ensure the resilience of the financial services it provides. Second, beyond Open Banking, there is a rise in the number of API-powered digital platforms, and these too require the dynamism and adaptivity that Cyber-AI provides. Third, many developed countries are now running smart city or smart nation initiatives, where city authorities strive to increase service and resource efficiency by monitoring resources at an infrastructure level (roads, buildings) and how citizens consume them. Finally, there is what a lot of businesses have been doing since COVID. A retailer's strategy might have been to make 65% of sales through the physical channel and 35% through the online or digital channel. COVID showed that with a lockdown, physical transactions immediately die and all you have left is digital, so businesses are having to review that strategy and rethink the balance of investment between the digital and physical channels. Consequently, a lot of businesses are working hard to make sure that they not only have a solid digital channel, but also one that can handle very high traffic. Chances are you will need APIs to allow for that, and all of these different sub-sectors are powered by API infrastructure.

Thus, we’re seeing the global API market grow at 29% year on year. Unfortunately, according to Verizon, the average cost of an API breach was over US$2 million as of 2020, and that is the average for any business, large or small. If an API breach significantly affects the way you serve your customers, there will also be reputation loss on top of the direct financial loss, and that is a big issue. This is why cybersecurity is paramount in how APIs are built and operated.

APIs come with volume: there will be a lot of traffic in terms of users, transactions and user trends. The traditional approach to preventing fraud and abuse is to use global rules that define what counts as fraud and, consequently, what should be blocked or disallowed. However, this rules-based approach struggles when there are multiple, nuanced use-cases in which different users interact with the system in very different ways. AI has an advantage here: mathematical models can capture very specific use-cases with high accuracy where blanket rules cannot. AI also gives the business the capability and flexibility to accommodate change dynamically, which rules cannot provide. In other words, AI provides technology that adapts as the demographics of API users change, whereas rule bases adapt poorly and have to be modified reactively.
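To make the contrast concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of a single global rule versus per-consumer anomaly models trained on each consumer's own API traffic. The feature choice, consumer names and thresholds are illustrative assumptions, not a description of any particular product.

# Hypothetical sketch: one global rule vs. per-consumer anomaly models for API traffic.
# Feature choice, consumer names and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Each request is summarised as [requests_per_minute, payload_kilobytes].
# A low-volume mobile app and a high-volume batch integration behave very differently.
history = {
    "mobile_app": np.column_stack([rng.normal(40, 5, 500), rng.normal(2, 0.5, 500)]),
    "batch_job":  np.column_stack([rng.normal(600, 30, 500), rng.normal(90, 5, 500)]),
}

def global_rule(request):
    # One rule for everyone: block anything above 100 requests/minute.
    # The batch job is permanently "suspicious", even though that volume is normal for it.
    return request[0] > 100

# Per-consumer models learn each consumer's own notion of "normal"
# and can simply be retrained as that consumer's behaviour drifts.
models = {name: IsolationForest(random_state=0).fit(x) for name, x in history.items()}

def ai_flag(consumer, request):
    return models[consumer].predict([request])[0] == -1  # -1 means anomaly

request = [610, 88]  # routine for the batch job, wildly abnormal for the mobile app
print(global_rule(request))            # True:  blocked by the blanket rule
print(ai_flag("batch_job", request))   # False: normal for this consumer
print(ai_flag("mobile_app", request))  # True:  anomalous for this consumer

In practice the features, models and retraining cadence would be far richer, but the shape of the argument is the same: the rule is fixed and global, while the models are specific to each use-case and can be retrained as behaviour changes.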

Challenges in Cyber-AI

There are challenges to the proper adoption of Cyber-AI in APIs, and in technology in general. The first is market hype. AI gets hyped up so much that even when a provider has a legitimate AI offering, there is cynicism from potential customers who are put off by the so-called buzzword. Cutting through this cynicism is probably the number one challenge to overcome.

The second is democratisation. AI is inherently mathematical and has a scientific foundation: think models and algorithms. For a broad audience to understand it, a lot of it needs to be simplified. The problem with this simplification is that many useful details get hidden, and sometimes this obscures or distorts the true value, strengths and weaknesses of AI. So in the effort to democratise AI, we end up accepting a lot of loose facts about the technology and what it can and cannot do. As much as we need everyone to understand the power of AI, it is potentially dangerous to provide overly simplified explanations of concepts that really require the audience to grasp important details.

Finally, there is a challenge I have seen come up a lot: the bias and ethics of AI. Because the technology can be used for both good and bad, it is harder for AI to be widely accepted as useful. With the emergence of AI ethics, the ability to validate and check models helps to ensure that the technology does not discriminate against a group of people, and does not denigrate anyone or make them feel left out. That is very important, but where we stand now as AI providers, it is still a hurdle to get over.

Despite all this, the API market continues to grow, and ultimately the relevance of Cyber-AI in helping to secure high-value APIs will only get stronger.

Q&A Section

Q: On democratisation, how far down the road do you think we are in terms of maturity, given that you mentioned it as one of the challenges? Is there still work to be done in the AI space to make it easier to democratise, and to make it easily understood what AI we have in place, how it is actually working, and how you are able to configure and change how the AI is behaving?

A: We might talk about that on two fronts: the technology front, and the general understanding of the audience, the general population out there. On the technology front, when I first got into AI there were no ready-made platforms you could use. Where we are now, there are a lot of so-called notebooks that bundle software packages for different machine learning and AI techniques, so you can take a predefined template, add the parameters you need, and adopt the technology easily. That’s really cool. The danger, though, is that you don’t know the algorithms powering what you use, and you don’t really know how far those algorithms can go either way in terms of doing good or doing bad. I get the importance of making AI and machine learning algorithms easily understood by the general population, but in the need to be understood we are hiding, not deliberately but still, a lot of the details that make a difference to how a particular technology is used and perceived. However, I see a lot of people genuinely attempting to understand, taking courses and reading articles that cover aspects of AI, and I think that’s good. The education sector worldwide is also seeing a lot of uptake in AI research, with people undertaking AI-related degrees and PhDs, so I think we’re getting somewhere. I just hope we don’t get to a point where everybody knows AI, but they don’t really know AI.
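To illustrate the point about predefined templates, the hypothetical Python snippet below (using scikit-learn and a stand-in dataset) is all it takes to train and run an anomaly detector; every decision about how the underlying algorithm actually works stays hidden behind default parameters.

# Hypothetical illustration of a "template" workflow: the whole detector is a few lines,
# and the isolation-tree mechanics behind it never surface unless you go looking.
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest

X, _ = make_blobs(n_samples=300, centers=1, random_state=0)  # stand-in dataset
model = IsolationForest()   # every algorithmic detail left at its default
model.fit(X)
print(model.predict(X[:5])) # labels come out; the reasoning stays hidden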

Q: With the legal requirements around bias and discrimination, how do you ensure that your AI capabilities aren’t falling foul of any legal obligations in that area, especially if their behaviour can change over time?

A: The good thing is that some AI standards have been set. Australia has its own AI standards; Singapore, where we also operate, has its own; and Europe has its own as well. The immediate challenge becomes: if you want to provide your product in Australia, Singapore and Europe, does that mean you have to comply with all of these standards? Thankfully, they are largely identical. You also need a way to assess a particular product against those standards quickly, in an automated and time-effective way. If you have a set of standards and demonstrating compliance with each of them takes a long time every time, then chances are it will be a long time before businesses see those standards as useful.

About Aiculus
Aiculus is a Cyber-AI company, meaning we leverage Artificial Intelligence (AI) to provide Cybersecurity solutions. We specialise in API security and have an innovative solution that monitors API traffic to help businesses detect threats and attacks that could compromise the API service or data.  
Omaru Maruatona
Dr Omaru Maruatona is the CEO of Aiculus, a Cyber-AI company that helps organisations embrace API technology without increasing their risk profile. Omaru is a cybersecurity and machine learning practitioner with deep industry and research experience. Before founding Aiculus, Omaru worked at a big Australian bank on machine learning-based fraud detection. He has also worked for Computershare as a Technical Security Analyst and for PwC Australia in cybersecurity architecture and strategy. Omaru is a thought leader in the area of Cyber-AI and regularly publishes and speaks at academic and industry conferences, including APIDays.
