As part of the co-operation with the apidays conferences, and with apidays Helsinki 2020, a joint online event was held in collaboration with the Joint Research Centre of the European Commission on public and private sector co-design of API development.
After presentations by public sector organizations from around Europe, a panel discussion summarized and aimed to answer the questions coming from these public sector organizations.
Seasoned private sector API experts Alan Glickenhouse (IBM) and Marjukka Niinioja (Osaango, panel chairperson), as well as Lorenzino Vaccari (European Commission, JRC consultant), discussed API-related questions presented by the public sector speakers in part II, as well as topics raised by the audience.
The panel concentrated on questions related to lifecycle management, discoverability, API design, and security.
Polling vs. WebSockets vs. other approaches?
Questions for the panel:
- What should be designed as part of a generic REST API profile to support multiple communication patterns in a business sense (e.g., request/response, publish/subscribe, broadcast, collect)?
- AMQP and MQTT are often quoted as alternatives to REST. Could you share any experience you have with these protocols?
- Is there a recommended/predefined set of metadata that could be included in a generic REST API profile in order to facilitate interoperability with third-party message or data exchange systems?
Glickenhouse: Historically we thought about APIs as REST APIs, maybe SOAP. Now we are seeing more need for asynchronous handling, like AMQP, MQTT, etc. There isn't a one-size-fits-all technology. We need different technologies for different purposes. Right now events are the hot topic, but even they don't solve all needs. We should think more about consumable assets and then choose the right way to access them.
Niinioja: Some years ago Zapier introduced "REST Hooks" with the slogan "Let's stop the polling madness". There are definitely occasions where polling is bad and pushing is good. You still need to consider that you might miss some data with the "push" model and still have the need to poll, just in case. In large-scale big data scenarios one might agree that MQTT is a very strong candidate, and it might be faster for IoT devices to process because of the smaller payload. Some of these technologies, like gRPC and MQTT, rely a lot on trusted servers on both ends and are therefore not the best candidates for public or partner APIs. They might still be good for the X-Road or e-Delivery use cases we heard about from previous speakers.
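The polling-versus-push trade-off above can be sketched in a few lines. This is a minimal, hypothetical in-memory model (the class and event names are illustrative, not any real API): a subscriber gets events pushed as they happen, webhook-style, while a poller can still ask "anything new since X?" to catch anything it may have missed.

```python
# Minimal sketch contrasting the pull (polling) and push (webhook-style)
# models. All names here are hypothetical, for illustration only.

class EventSource:
    def __init__(self):
        self.events = []       # retained so pollers can catch up
        self.subscribers = []  # callbacks notified immediately

    def subscribe(self, callback):
        """Push model: register a webhook-style callback."""
        self.subscribers.append(callback)

    def emit(self, event):
        self.events.append(event)          # kept for pollers
        for callback in self.subscribers:  # pushed to subscribers at once
            callback(event)

    def poll(self, since):
        """Pull model: caller repeatedly asks for events after an offset."""
        return self.events[since:]

source = EventSource()
received = []
source.subscribe(received.append)  # push: delivered as it happens

source.emit({"id": 1, "type": "order.created"})
source.emit({"id": 2, "type": "order.paid"})

print(received)              # both events arrived without asking
print(source.poll(since=1))  # a poller can still recover what it missed
```

The "poll just in case" advice from the panel maps to keeping both paths: rely on push for latency, but keep a catch-up poll to recover events lost while a subscriber was down.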
Vaccari: Smart cities and the evolution of networks such as 5G bring the need to think about common technologies, especially asynchronous ones, because of the large amount of data produced by devices. There is currently a large set of technologies to explore, and their number is growing. Regarding metadata specifically, the AsyncAPI specification, derived from the OAS specification, is attracting a lot of interest.
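To make the AsyncAPI mention concrete, here is an illustrative fragment of an AsyncAPI 2.0 document describing a sensor event channel. The channel and field names are invented for the example; the structure (info, channels, subscribe operation, message payload schema) follows the specification and will look familiar to anyone who has written OpenAPI documents.

```yaml
asyncapi: '2.0.0'
info:
  title: Streetlight Sensor Events   # hypothetical API, for illustration
  version: '1.0.0'
channels:
  city/streetlight/temperature:
    subscribe:
      summary: Receive temperature readings from street-level sensors.
      message:
        contentType: application/json
        payload:
          type: object
          properties:
            deviceId:
              type: string
            celsius:
              type: number
            measuredAt:
              type: string
              format: date-time
```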
Niinioja: I think the conclusion we should come to today is that there is no one size fits all, but there are definitely some principles to use when choosing technologies. If you have a "normal" network, the public internet, and you are building a digital application using APIs to communicate from front end to back end, then REST APIs or GraphQL would be a good choice. If you need to transfer a lot of data fast between known servers, you might be better off with gRPC, but you need the protobuf definition files on both ends, both ends must be able to use gRPC, and you need to trust each party. And there isn't any API management solution yet that really helps with that.
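The "protobuf definition files on both ends" requirement means both parties compile code from the same `.proto` contract. A hypothetical sketch (service and message names invented for illustration) of what such a shared file looks like:

```protobuf
syntax = "proto3";

package sensors;  // hypothetical package, for illustration only

// The .proto file is the shared contract: both client and server
// generate their stubs from this same definition, which is why
// gRPC suits known, trusted parties better than anonymous consumers.
service SensorFeed {
  // Server-streaming RPC: one request in, a stream of readings back.
  rpc StreamReadings (ReadingRequest) returns (stream Reading);
}

message ReadingRequest {
  string device_id = 1;
}

message Reading {
  string device_id = 1;
  double celsius = 2;
  int64 measured_at_unix = 3;
}
```

Because the contract lives outside the API itself, changing it requires redistributing and recompiling on both ends, which is part of the coordination cost the panel alludes to.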
Glickenhouse: We see this as a wider integration need, and we combine, for example, APIs, Kafka, and files under the same integration solution. This is going to evolve and cover a larger set of tools in the future. There might also be a developer portal to cover all the different styles.
Niinioja: Looking back to what we discussed earlier, hiding the technical implementation under the API design is also important when considering which technologies to use and how. For example, many GraphQL engines help generate the API directly from the database structure, which is really bad for change management. Didn't we learn these lessons already at the database design level? It's a security issue, but it also creates unnecessary dependencies and ties consumers to your database schema.
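The difference can be shown in GraphQL schema language. Both type definitions below are invented for illustration: the first mimics what a database-to-GraphQL generator might emit, the second is a deliberately designed, consumer-oriented type.

```graphql
# Shape a generator might emit straight from a database table:
# it leaks internal columns and couples every client to the schema.
type users_table {
  id: Int
  email: String
  password_hash: String
  internal_flags: Int
}

# Consumer-oriented design: only what clients actually need,
# free to evolve independently of the storage model.
type User {
  id: ID!
  displayName: String!
  email: String
}
```

Exposing `password_hash` is the security issue; naming the type after the table is the coupling issue, because renaming a column now becomes a breaking API change.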