
APIs of the Future: Are you Ready?


Today is about APIs of the future, and the question is: are you ready? It's really our chance to think about what we can expect to see. We've made so much progress in APIs in the last 10 years, and it turns out we're standing on the shoulders of giants. I'm going to talk about the past of APIs almost as much as the future, and that, to me, is very exciting as well. I think we've got an amazing future ahead of us; we're on the cusp of a new set of paradigms in the way we use APIs, modular programming and modular thinking.

What I want to talk to you about is this notion of APIs of the future, and I want to do it in my typical way of looking back as much as looking forward. So I want to talk about what I call the First Age of computing: where we were, where we're coming from. I think we're now in what I would call the Second Age of computing, making lots and lots of progress and taking advantage of many things. However, I think we're closing out that second age and getting close to the Third Age of computing, where things will be vastly different, just as exciting and just as powerful.

Where it all began:

By Science Museum London / Science and Society Picture Library – Babbage's Analytical Engine, 1834-1871. Uploaded by Mrjohncummings, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=28024313

So I want to start with where we were; I want to talk about the history of computing. And I want to take us back to 1850. A lot of people don't really think of the history of computing as starting in the mid-19th century, but that's where I think it really begins, and I think we can learn a lot of lessons from the very first programmer that we know of and credit with computing. Back in the 19th century, computers were basically Difference Engines or Analytical Engines. The one I am talking about was designed by Charles Babbage and was a series of pins, gears and levers. You would set the pins in a certain way, wind the levers forward, and that would give you the answer to a problem. One of the first known computer programmes computed a series of numbers called the Bernoulli series. It was a setup of all these various gears, levers and knobs which, if they were all set correctly, would give you the next set of pins with a single turn of the crank. You would then decode this to read off the next number in the Bernoulli series.

The first known working programmer was Ada King, Countess of Lovelace. She was a mathematician, and she was one of the first people to figure out how we could use these analytical engines to solve mathematical problems. In fact, Babbage, the person who created the machine, mused in letters that he wasn't quite sure what the engine could be used for. Ada King knew right away. I find this quote from King really fascinating: she states that the Analytical Engine weaves algebraic patterns the way a Jacquard loom weaves flowers and leaves (a Jacquard loom was a loom for creating blankets and tapestries, controlled by set pins and knobs). The analogy is really quite striking. Here is another of my favourite quotes from Ada King: "Debugging is damnably troublesome work and it plagues me". She didn't actually say "debugging", but she was talking about solving problems. So here we have the very first programmer and the very first programme, and we already have problems with it. And I would point out, by the way, that Ada King was a remote worker. She was not working in the same room where the engine was! She would write out the programmes and mail them to Babbage; Babbage and others would run them and then tell her whether or not the results matched up. This is really tedious work. We think remote work is troublesome today; try doing it through the post!

First Age of Computing:

By Unidentified U.S. Army photographer – Image from Historic Computer Images, Public Domain, https://commons.wikimedia.org/w/index.php?curid=26253110

Later in the 19th century comes a person by the name of Otlet. Towards the turn of the century, we were getting electricity, radio and notions of moving pictures. And Otlet had this idea that machines should be connected together to create a network so that we could see any bit of information from any location. He called this network the "Mundaneum" (Transcriptor's Note: aiming to gather together all the world's knowledge). It would connect video and audio; you could zoom in on an opera performance or read a book at a distance. Otlet thought we would all be using workstations in the future, with radio recordings, microfiche pictures, photos and cards, all the material we could call up in order to find information. This is what he called the worldwide network at the turn of the (20th) century.

Now we get close to the 1940s. Computing looked like a giant room. One computer was solely dedicated to computing the mathematical challenges of missile trajectories (TN: ENIAC was operated by women such as Kay McNulty and Betty Snyder). These women were the people who actually programmed the computer and understood exactly how you had to set it up. The programmes they wrote bore some resemblance to King's charts; however, they were now working with wiring diagrams. These indicated the exact wires you had to plug in, from step one to step two, in order to set up the computer to make one computation. So rather than setting gears and pins, they were setting wires.

McNulty, Snyder, Jennings, Meltzer, Bilas and Lichterman were the programmers who actually made the ENIAC, the very first electronic computer, work. They didn't have programming tapes or anything; all they had were wiring diagrams, and every time they wanted to change the computation, they had to rewire it and retest every step of the way. We got an advancement over wiring diagrams with the punch card. The punch card came to us around the 30s and 40s, before electronic computing happened, but we only got to apply it to computers around the 1950s, when the first IBM computers started using punch cards.

One of the first people working with punch cards was Lois Haibt, who was on the team that created the Fortran language. At the time, the IBM computer was the 70 series, which later became the 700 series. The 700 series was used to monitor and manage the US Space Programme through the 50s and 60s. And one of the people who ran the computing for the Space Programme, who actually used the Fortran language to compute the trajectories of missiles, was Dorothy Vaughan. She ran much of NASA's computing activity.

By the time we got to the 1950s, once we had punch cards, we were getting more advanced computers. That is the pinnacle of what I call the First Age of computing, which went all the way from the Difference Engine and the Analytical Engine to a card-sorting machine that could launch vehicles into space. That's an incredible distance to travel in just 100 years. Going back to what I said earlier, the back of the Jacquard loom really did look like punch cards, which is why Ada King compared it to programming. So even though she never saw a punched-card computer, she understood, even then, what computing would be like in that first 100 years, which I find just fascinating.

Second Age of Computing:

So now we are in the Second Age of computing, which starts around 1950. We have mainframes; we start with this notion that we can actually have lots and lots of computing power. Let's jump ahead to 1968 and to this man, Doug Engelbart, who gave a demo that year about what he thought the future of computing would be. This demo was an incredible event. Remember, this was a time when we basically fed in punch cards and got paper printouts. What Engelbart started talking about was the notion that 20 to 30 years later, we'd be able to hold in our hands as much computing as existed in the entire world. And boy, was he right. He gave this presentation in 1968; it was about 90 minutes long, just him on the stage with a big screen. And he showed things that we take for granted today but that seemed magical at the time: the idea of picture-in-picture, the idea of editing lists on screen, so we'd have interactive computing. He even created his own computing keyboard, and he's credited with creating the mouse just so that he could do his demo. He introduced, in this one event, real-time multi-cursor in-place editing, point-and-click, drag-and-drop, cut-and-paste, hyperlinking and hypermedia, intelligent outline-based editing, text messaging, live video and text editing on the same screen, revision control: all things that he said in 1968 were possible. No one believed him; they thought he was just a nutty person. But of course, we got all of those things. He even invented his own chair, specially built by the Ames Corporation just for this purpose.

Now, one of the key elements of Engelbart's work was this reliance on hypermedia and hyperlinks, ideas developed by Ted Nelson in the 1960s. We couldn't have the APIs we have today if it wasn't for links. Ted's vision of what the internet would look like was all of these documents connected together: not machines and performance, but actual documents. We would be reading one scroll which was actually sourced from lots and lots of material. This is what he called the "Xanadu Docuverse". He wrote a very interesting pair of books, called "Computer Lib" and "Dream Machines", where he talked about how computers would unlock information and allow you to mix and match it.

The first person to actually build a system broadly based on Ted's and Engelbart's ideas was Wendy Hall, now known as Dame Wendy Hall. She is a mathematician and was asked to collect all of the archives of Lord Mountbatten in England. She decided the way to do this would be electronically, and so she and her colleagues invented a system called "Microcosm". It was the first hyperlinked system bringing video, audio and text together, set up around 1986 or 1987, and therefore functioning before Tim Berners-Lee had his very first web server running. She had the entire system working using the IBM tools of the time.

Now, along with the work Hall had been doing collecting information, Goldberg and Kay were working at Xerox PARC, where Engelbart had also worked. They were developing things like the Smalltalk language and object-oriented programming, and another key element in the story, the Dynabook: the idea that we could take a mainframe, or the desktop machine sitting on Hall's desk, and turn it into something that was originally called a "luggable", a carryable. Kay and Goldberg said we would eventually all have something that would sit on our laps; it would be like a book, only better. They started with a computer system called the "Alto" but wanted to end up with a much smaller version. And of course, by the time we got to the end of the century, we had just that, which is really quite amazing. Moreover, by that time, we were not programming with pins, or punch cards, or wires; we were programming in text. And some of that text was written in the language Ada, named after the very first programmer, which I find really fascinating. So now, because of the work of all these individuals in this Second Age of computing, we get things like Wikipedia, where people can contribute from all over the world, the way Engelbart talked about. We can have all the information in the palm of our hand; we can have it in a simple laptop that goes all around the world and actually lets me talk to you today. And I think that's really quite amazing.

The Future:

So where are we going? What will APIs be like in the future? What role will APIs play in all the things we're doing? Think about it: we've gone from Lovelace's written instructions in tables for Babbage's mechanical computers, to McNulty's wiring diagrams for electrical relay computers like ENIAC, to Vaughan's Fortran for IBM machines, to Engelbart's text on screens for computers like the ones Wendy Hall built. What's next? Here are the things I think we need to pay attention to, things to look out for in the coming days. We are going to be connecting services the way we currently connect servers. Remember Nelson's Docuverse, the idea that we could scroll through something and actually be reading parts of different documents at the same time. That's what our APIs will be like as well: we'll be stitching pieces together seamlessly. We'll be programming the network instead of programming machines. Too much of what we do today is programming a machine and placing that machine out there to have it interact. We'll actually start programming the interactions directly. Lambda services and cloud services are our first bridgehead into doing that as a normal course of business. It will no longer be about where we place computers or where we place our programmes; it will be the programme itself.

We will focus on describing problem spaces, not describing solutions. We will programme computers to actually understand what's going on in the problem space, and we will describe that problem. Then other people can write solutions, or maybe even computers can write the solutions themselves. Machines, endpoints, protocols, formats: all that stuff will disappear. Instead, we'll be talking about a whole seamless universe of connected services. What we think of today as big balls of mud: that's how reality works, that's how the internet works, that's how nature works, that's how biology works. And that's how our computers will work as well. We're programming software-defined networks today; in the future, the network itself will be the material we programme with. We'll be describing those problem spaces using languages and diagrams, and those diagrams and languages will be understood by the computers themselves. They will understand how to solve a problem; we will be much more declarative than the imperative style we use today, and all those details of endpoints will go away. Here's a sort of experimental network language, where we're basically connecting to one or more services. We're going to do some shopping, we're going to check out and we're going to pay; there are no addresses here and no URLs. It is simply a language dedicated to solving a particular set of network challenges and network problems. And that's what we'll be doing more and more of. So just as we changed our focus from memory, to text, to the cloud, we're going to change our focus to solving problems with multiple machines.
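(Editor's note: the following is a minimal Python sketch of that idea, not the actual experimental language shown on screen during the talk; every name in it is invented. The point it illustrates is the same: the "programme" names capabilities such as shop, checkout and pay, and a registry binds them at run time, so no machine addresses or URLs appear anywhere.)

    # A hypothetical "network programme": steps name capabilities, not machines.
    # A registry binds each step to an implementation at run time.
    from typing import Callable, Dict, List

    SERVICES: Dict[str, Callable[[dict], dict]] = {}

    def provides(name: str):
        """Register a function as the implementation of a named capability."""
        def register(fn: Callable[[dict], dict]):
            SERVICES[name] = fn
            return fn
        return register

    @provides("shop")
    def shop(state: dict) -> dict:
        return {**state, "cart": ["tea", "biscuits"]}

    @provides("checkout")
    def checkout(state: dict) -> dict:
        return {**state, "total": 7.50}

    @provides("pay")
    def pay(state: dict) -> dict:
        return {**state, "paid": True}

    def run(steps: List[str]) -> dict:
        """The programme is the interaction itself: a list of capability names."""
        state: dict = {}
        for step in steps:
            state = SERVICES[step](state)   # resolved by name, never by address
        return state

    print(run(["shop", "checkout", "pay"]))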

Languages:

So how do we get ready for this? What do we need to do? There are a handful of things we can do right now. First of all, Low-Code and automation are heading us towards that same place of coding the network and then describing problems. So be prepared: pay attention to Low-Code systems, whether you like them or not. Right now, they represent another way of thinking about computing, and that way of thinking is at a much higher level. We're all going to be doing that in some form or another. Start taking a look at decentralised orchestration. We have IFTTT and Zapier and lots of other systems like that, such as Microsoft Flow, and that's just the tip of the iceberg. We're going to be connecting computers in lots and lots of different ways. Think of the way lambda works today, but with almost zero cost and zero barrier to entry. That's what we're going to be building. So how do we start orchestrating various pieces of software living in various places, safely, easily and quickly?
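(Editor's note: here is a minimal sketch, in the same Python style, of the trigger-and-action pattern behind IFTTT/Zapier-style orchestration; the event fields and rules are invented for illustration. Each action could just as well be a separate, near-zero-cost function running anywhere on the network.)

    # Hypothetical IFTTT-style rules: "when this event matches, run that action".
    from typing import Callable, List, Tuple

    Rule = Tuple[Callable[[dict], bool], Callable[[dict], None]]

    RULES: List[Rule] = [
        (lambda e: e["type"] == "new_invoice",
         lambda e: print(f"Posting invoice {e['id']} to accounting")),
        (lambda e: e["type"] == "new_invoice" and e["amount"] > 1000,
         lambda e: print(f"Alerting finance about large invoice {e['id']}")),
    ]

    def dispatch(event: dict) -> None:
        # Fan the event out to every matching rule; in a real system each
        # action would be a separate function (a lambda) living elsewhere.
        for matches, act in RULES:
            if matches(event):
                act(event)

    dispatch({"type": "new_invoice", "id": 42, "amount": 2500})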

And then the other thing I think we need to pay attention to is the return of domain-specific languages. Parsons and Fowler wrote a great book several years ago on domain-specific languages. I would dust that off and start reading it again, because we've already seen DevOps create new domain-specific languages, and RPA and Low-Code are going to do the same thing. Think ahead: think about a GDPR language, or a HIPAA language, or a BIAN banking language, or a FHIR health language, or an ACORD insurance language. We will be programming in domain languages. COBOL was originally a dedicated business language; FORTRAN was a dedicated translator for mathematical formulas. We're going to have lots and lots of those languages in the future. At the same time, I would say you're going to want to watch out for chatbots. Be very wary. Chatbots right now are a dangerous place to be. There's a lot of romance around them, but right now they can be very, very troublesome, and they don't really offer the chance for advancement that we need. In fact, I think what you need to focus on instead is something I talked about here at APIdays several years ago: Task-Focused Microworlds (TFMs), domain-specific languages, domain-specific intelligence, domain-specific chatbots. Think about "DoNotPay.com", which helps you resolve parking tickets, or the health expert systems that analyse documents and help diagnose in ways that humans can't. Copilot at GitHub is trying to do this. I'm not sure it's the right direction, but we're going to see more and more of these kinds of things in the future.
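(Editor's note: as a tiny taste of an internal DSL in the Fowler/Parsons sense, here is a sketch where the verbs and nouns of a banking-ish domain become the programme itself; the names are invented and do not correspond to any real banking standard.)

    # A hypothetical internal DSL: the fluent style lets the code read almost
    # like the business rule it implements.
    from typing import List, Tuple

    class Transfer:
        def __init__(self) -> None:
            self.steps: List[Tuple[str, str, float]] = []

        def debit(self, account: str, amount: float) -> "Transfer":
            self.steps.append(("debit", account, amount))
            return self

        def credit(self, account: str, amount: float) -> "Transfer":
            self.steps.append(("credit", account, amount))
            return self

        def execute(self) -> None:
            for verb, account, amount in self.steps:
                print(f"{verb} {amount:.2f} on {account}")

    # Reads like the domain: move 250 from one account to another.
    Transfer().debit("ACC-100", 250).credit("ACC-200", 250).execute()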

If you want to stay informed on AI and machine learning, here are two people I strongly recommend you pay attention to. Melanie Mitchell has written some fantastic books about what artificial intelligence really is; her primary work asks whether a computer can recognise or tell a joke, or understand irony. These are fantastic topics. Joy Buolamwini has done fantastic work on coded bias, and that's going to be turned into a film sometime really soon. She really has a handle on the troubles we run into when we work with machine data and machine algorithms. These are two people I would pay close attention to; they'll lead us towards a direction where AI and machine learning can be equitable, safe and powerful. I've also started a project on DSLs: you can look up HyperCLI and HyperLANG if you want to learn more about it. These are examples of actual programmes that work. All of this leads to the same notion: programming services within a network, using specific languages, to enable task-focused bots operating in well-described problem spaces. That's our future. We can start working on that today and make it a reality. And I just want to leave you with one more thing.

We have to learn from the future. I love this quote from Joseph Miller: "Those who ignore the mistakes of the future are bound to make them". Think about that. We know what's possible, we know what could go wrong, and we can prevent bad things from happening today. We can pay attention to what is important, what makes sense, what's fair, what guides us, what's safe and what protects our privacy. Let's do those things today, and not make the mistakes of the future. Let's not ignore those mistakes. Now, if you like some of these ideas, I'm working on a book for 2022 called the RESTful Web Microservices Cookbook. It's going to cover a lot of these ideas: DSLs, service-to-service communication, and how we can start defining problem spaces in order to create solutions. So if you like this, there's more on the way. I just want to say thank you very much. I'm very excited about APIs of the future. I'm doing everything I can to be ready, and I hope you will be, too.

Q&A Section

Q: Low-Code and DSLs and things like that are really an important part of the next evolutionary step, because we still don't operate at a high enough level of abstraction in our day-to-day work; our languages are still very much C-level languages, all of those kinds of things. I think there's a natural synergy between APIs and DSLs. Think about it: many of us work day-to-day with an API not in cURL or anything like that, but in a CLI, right? So we're using the AWS CLI, or we use the Google CLI. And in fact, when we work on the operating system, if we're in the shell, we're interacting with the API of the underlying operating system using Bash or some other shell, or what have you. And that is a CLI. So is there something really fundamental about this dualism between CLIs and APIs, do you think? And how might that get us into these higher-level languages?

A: That's the way I've been thinking about it as well. That's why I started this experiment with the little Hyper engine earlier this year, to explore that very same idea, because I see the same sort of parallelism that you do. Even when you think about mainframes: I used to do a lot of mainframe work, and we had batch languages on mainframes where you would schedule jobs, run jobs, back up tapes and all these other things. We have always been using this kind of glue language. You know, Alan Kay said that the way they were thinking about Unix was this notion of it being just a bunch of little pieces that you stitch together to solve any problem you want. The idea was that there wouldn't be a world of programmes; there would just be these utilities you stitch together. And that's exactly what APIs are like, right? We stitch together these APIs, and even when it's not in cURL, we are actually doing the abstractions in our own heads. So I think we're prepared for that, but we don't quite have the tools to make it easy for us. I think we have all the technology, and I think APIs are actually speeding us towards this notion of being able to talk in a much more generalised way. We've already kind of made the individual machines disappear: we don't type in IP addresses anymore; we use name resolution. Now we need to make the actual locations of those components disappear. That's going to be the next level, and I think APIs are the thing that will make it happen.

Q: In the end, I think that also gets at the idea of the citizen developer, or the citizen integrator if you like, and to me, that's the litmus test in the enterprise space. I will go into an organisation, we will do an API strategy, and we'll figure out whether the APIs reflect their business; the business might be a bank, an insurance company, a government agency, or what have you. And we come up with the APIs that will be useful to their business, that will give them reuse, that will give them utility, etc. Very often, they're the verbs and the nouns of the business, right? So they become the operating system. And then we build those APIs, and we think: cool, there's a nice API catalogue, job done, right? But that's really just the bedrock; that's only the beginning of things. The job will really be done when your average business person can sit down, use something like IFTTT or a DSL or a drag-and-drop citizen-integrator panel, and go: "Right, I'm going to orchestrate a business process from cash to shipping, or what have you, using that DSL, using the drag and drop". And it's sitting on top of the operating system for the business, right? It is something engineers have built, but if you need engineers to work on it, then it's only half-baked, in my view. So how far do you think we are from that?

A: First of all, I agree with you 100%. We build the first part of it, but then I think we walk away too soon; I think we need that next part. We need to enable and empower individuals, and we need to lower the barrier to entry. That's what Engelbart wanted to do. That's what Ted Nelson wanted to do. That's what Tim Berners-Lee wants to do. Lower the barrier to entry, make it easy for someone to start, make it easy for someone to contribute, and make it possible to extend and expand without breakage. That is the whole notion from the last half of the previous century, and we need to bring it forward. Now, I think there are definitely opportunities here. Mel Conway, of Conway's Law, has been working on this notion that you just described: it should be easy to stitch things together, to drag and drop and to pull things in. He has a notion that programming should be like making pottery, or assembling a box or a wooden ornament; it should be something we can get our hands around, and everyone should be able to do it. Mathematics and reading used to be gatekeeping in the 16th century, and computing is often gatekeeping today: you have to be super smart, or you have to know mathematics or something. We need to lower the barrier to entry, so anyone can read, so anyone can do maths, and so anyone can use a spreadsheet. That's where citizen coding is going. You really have hit the nail on the head! That's where we have not yet gone. I think we have all the technology; all we need to add is the social side and the tooling.

Q: I agree; it is also about getting experience with these computational tools. One of the examples I always think about: many, many years ago, I used to do a lot of pub/sub-style integration, using pub/sub messaging that would send technical messages around; you'd catch the messages, process them and do things with them. And that was all very alien to most programmers. I remember when I was first introduced to it: it was a completely different model of thinking about how you interact with systems and machines, and it took a bit of time and a bit of a mind-flip to get into it. But many years later, what do we have now? We've got Twitter, Instagram and Facebook, and we've got notifications on our phones; everybody has an intuitive understanding of pub/sub. They might not understand the plumbing underneath it, but they know about sending messages and about asynchronous communication, and the benefits and pitfalls of that. And I think it's really just a matter of finding those killer apps, if you like. What are the apps that will get us into that mode of thinking and push us over the threshold, so that we can use the tools to do those sorts of things?

A: Yes. And I would just say, you know, there are so many talks at this event that really are circling around the same idea, talking about how to make it easier to write, easier to debug, easier to communicate, easier to design and build these APIs that can make these kinds of things possible. So I think we have all the technology and we have all the smarts. I think you’ve collected up a lot of them right here. It’s just a matter of applying them with rigour and persistence more than anything else. Having that vision of lowering the barrier, moving the gatekeepers out of the way, giving everyone a chance to start to contribute in a safe and effective way and I think we’re going to have some amazing contributions from so many people around the world once they get their hands on this computing power.
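(Editor's note: for readers newer to the pub/sub pattern discussed above, here is a minimal sketch of the model; the broker, topic and handlers are invented for illustration, and a real broker would deliver messages asynchronously.)

    # Minimal pub/sub: publishers and subscribers only know about topics,
    # never about each other.
    from collections import defaultdict
    from typing import Callable, DefaultDict, List

    class Broker:
        def __init__(self) -> None:
            self._subs: DefaultDict[str, List[Callable[[str], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
            self._subs[topic].append(handler)

        def publish(self, topic: str, message: str) -> None:
            # A real broker delivers asynchronously; this one is synchronous.
            for handler in self._subs[topic]:
                handler(message)

    broker = Broker()
    broker.subscribe("orders", lambda m: print(f"billing saw: {m}"))
    broker.subscribe("orders", lambda m: print(f"shipping saw: {m}"))
    broker.publish("orders", "order #42 created")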

Q: Tell us a bit about HyperCLI and HyperLANG.

A: So HyperLANG started as an experimental project. For several years I had wanted to write a client that could actually talk in multiple formats, like HAL, Siren, UBER, Mason, AtomPub and all these, but they're rather troublesome to build. So I hit on the notion of creating a REPL instead of a UI. That's how it started. And I wanted to create a stateful REPL, so that I could replay commands and eventually even script it. So it really started as a small experiment. But I have to tell you, it's a couple of months old, and I think it's gotten out of hand. I have more than 100 unique commands now in this new language, and I think I've kind of lost the plot a little... For instance, I don't know why I did this, but I just finished writing a plugin for SOAP services in XML and XSLT. Basically, the whole idea is that you can start to experiment in the very same way we talked about, in a cURL-like kind of way, but then think "add Bash to that", because it can interact with Bash on the command line, and "add Node", because I can use little programming pieces. And with this tool, you could write a GDPR plugin, or an ACORD plugin, or an Open Banking plugin, and then you can actually write in the language of Open Banking; you can use the verbs and nouns of Open Banking. Most of the scripts I'm writing have maybe one or two URLs, and the rest are reading forms and links and just saying, you know, find the link that says "next page" and go there. So it's really a chance to start experimenting. I've had one person contribute a new plugin, and I'm looking for lots of other interest. So it's out there; let's see what happens with it. It's our chance to start experimenting with what a new programming experience might look like.

Q: So we’ve talked about how far away the citizen integrator is. One of the questions on the channel is “How far away are we from the self-driving enterprise, where AI becomes the integrator?”

A: Here's what I'd say; I touched on it briefly when I made a small reference to this notion of task-focused microworlds. I think the success of machine intelligence comes when you can narrow the space down to a context boundary: stay within the context of reading X-rays, stay within the context of solving a maths problem, of writing assignments, or whatever the case may be. I think general AI is something we shouldn't even bother thinking about, but we should focus on the context-specific. And we actually have lots of examples of those today, like the one I mentioned: there's a bot that will go ahead and argue a ticket for you, like a lawyer, and it will actually get you out of parking tickets; in the United States they have things like this. Task-specific microworlds are the way to go. As long as they don't have to plan, as long as they don't have to think creatively and make new connections, then I think artificial intelligence or machine learning makes a lot of sense. The great counterexample comes from Steve Wozniak, who has a great test for when he will trust artificial intelligence: you put a robot outside a house, and you tell it to go in the house and make coffee. If you just think about how many subproblems have to be solved in order to do that, then you realise how big the problem really is. But you know what? If I had a bot that knew how to open the door, a bot that knew how to find coffee, a bot that knew how to open a can, start a coffee machine, pour water... if I had lots of tiny little bots, now we're getting somewhere, each one of them in its own task-focused microworld. So I think that's the future for AI-enabled bots: lots of tiny ones that do one thing, and one thing well. The Unix principle, but with thousands of them available to us. I think that's when it'll start to make sense.
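(Editor's note: to illustrate that composition idea, here is a minimal sketch, entirely invented and riffing on the Wozniak coffee test above, of many single-purpose bots composed into a larger behaviour while the planning stays outside the bots.)

    # Hypothetical task-focused "micro-bots": each does one thing well, and a
    # plan (supplied by a human here) composes them. All names are invented.
    from typing import Callable, Dict, List

    BOTS: Dict[str, Callable[[dict], dict]] = {
        "open_door":   lambda world: {**world, "door": "open"},
        "find_coffee": lambda world: {**world, "coffee": "located"},
        "boil_water":  lambda world: {**world, "water": "boiled"},
        "brew":        lambda world: {**world, "cup": "ready"},
    }

    def run_plan(world: dict, plan: List[str]) -> dict:
        # The hard, creative part (planning) stays outside the bots; each bot
        # only has to succeed inside its own narrow microworld.
        for task in plan:
            world = BOTS[task](world)
        return world

    print(run_plan({}, ["open_door", "find_coffee", "boil_water", "brew"]))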

Mike Amundsen

API Strategist & Advisor
Prolific author on APIs and microservices, and a speaker in the API community. He talks about APIs, puts them in the context of business, technology and society, and always tells a really great story.
