As part of co-operation with the apidays conferences, a joint online event was held at apidays Helsinki 2020 in collaboration with the Joint Research Centre of the European Commission on public and private sector co-design of API development.
After presentations by public sector organizations from around Europe, a panel discussion summarized and aimed to answer the questions raised by these organizations.
Seasoned private sector API experts Alan Glickenhouse (IBM) and Marjukka Niinioja (Osaango, panel chairperson), as well as Lorenzino Vaccari (European Commission, JRC consultant), discussed API-related questions presented by the public sector speakers in part II, as well as topics brought up by the audience.
The panel concentrated on questions related to lifecycle management, discoverability, API design and security.
Part 1: How should you handle API change management?
Part 2: What is the impact of distributed architecture on the API lifecycle?
Can we have APIs with different data models in the same marketplace?
Vaccari: During the research we conducted, we learned that building a universal data model, or ‘ontology’, for everyone is impossible. The solution is therefore to create a minimal set of core data models, let modellers create specific models, and use AI solutions to match these models when needed. The linked data initiative is fundamental to supporting this semantic interoperability, and the European Commission also organises annual conferences and initiatives such as SEMIC. In our current project we are investigating whether these techniques can also be used to find and map similar APIs in API repositories and, if possible, across the wider internet. But to support these search and matching techniques it is, once again, fundamental that APIs are well documented.
Glickenhouse: I agree with the previous comments. For example, in the 1990s IBM tried to create an “uber-definition” and get everyone else to agree to and use it. After a year, it was already outdated.
Niinioja: Around 2006 I thought that if I ever heard the word ontology again, it would be too soon. Back then it was a buzzword, and while a lot of good things came from those early efforts, I think the expectations were too high. But we should continue with the question about using the DCAT metadata model for describing data in catalogues.
Mika Honkanen from the Finnish Digital and Population Data Services Agency explained that the DCAT model is used for open data catalogues, and that in the latest version there are at least five different ways to describe open data APIs, so it is not useful yet, but maybe in the future. He went on to elaborate on Finland’s approach to API catalogues: “We have at least two different kinds of catalogues: X-ROAD (together with Estonia) for very secure APIs, and open data catalogues. These catalogues are only lists of APIs, not API management solutions. We also have a national tool for semantic data. The next step is to figure out how to use this tool to build APIs.”
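For readers unfamiliar with DCAT, the sketch below shows what a DCAT 2 style catalogue record for an API can look like, using the dcat:DataService class that DCAT 2 introduced for describing data services. It is only a minimal illustration; the titles, identifiers and endpoint URLs are invented for the example.

```python
import json

# Minimal sketch of a DCAT 2 style catalogue entry for an API, expressed as
# JSON-LD. The catalogue identifiers, titles and URLs below are hypothetical
# examples, not real services.
catalog_entry = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@id": "https://example.org/catalog/weather-api",
    "@type": "dcat:DataService",
    "dct:title": "Weather observations API",
    "dct:description": "REST API serving open weather observation data.",
    # Where the API can be called
    "dcat:endpointURL": {"@id": "https://api.example.org/v1/observations"},
    # Machine-readable interface description (e.g. an OpenAPI document)
    "dcat:endpointDescription": {"@id": "https://api.example.org/v1/openapi.json"},
    # The open dataset this API gives access to
    "dcat:servesDataset": {"@id": "https://example.org/catalog/weather-observations"},
}

print(json.dumps(catalog_entry, indent=2))
```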
Niinioja: Schema.org and industry-specific vocabularies are used quite frequently in private sector APIs. There are some issues depending on the technology and specifications used, mainly how well the specifications support the schema.org structure by default. Still, it is used often, especially in web development. One of the better side effects is that using schema.org helps to improve search engine optimization and other methods of automatic discovery. Another interesting topic is how API “products” are described to promote interoperability. Some API management tools, like IBM’s, use an “API product” JSON/YAML definition file as an extra layer on top of the actual API definition. The question is how to approach this in a more generic way.
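As an illustration of the schema.org point, a developer portal page for an API can embed JSON-LD markup roughly along the lines sketched below so that search engines and other crawlers can discover the API. The schema.org WebAPI type used here is currently listed as a “pending” type, and all names and URLs in the example are invented.

```python
import json

# Minimal sketch of schema.org markup describing an API, of the kind that can
# be embedded as JSON-LD in a developer portal page for discoverability and
# SEO. The API name, provider and URLs are hypothetical examples.
api_markup = {
    "@context": "https://schema.org",
    "@type": "WebAPI",  # schema.org type for web APIs (currently "pending")
    "name": "Weather observations API",
    "description": "REST API serving open weather observation data.",
    "url": "https://developer.example.org/apis/weather",
    "documentation": "https://developer.example.org/apis/weather/docs",
    "provider": {
        "@type": "Organization",
        "name": "Example City Open Data",
    },
}

# The resulting JSON-LD would typically be placed inside a
# <script type="application/ld+json"> tag on the portal page.
print(json.dumps(api_markup, indent=2))
```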