This article discusses how the API landscape evolves with Web 3.0.
We are transitioning from Web 1.0 to Web 2.0 to Web 3.0. APIs have been around for a long time.
Earlier ecosystems were predominantly client-server models: the client sends an API request, and the server responds with the requested data. This is linear, one-to-one communication. The Internet itself evolved around this pattern; HTTP and the majority of internet protocols follow this request-response paradigm.
Web 3.0 is not client-server communication. It is more of a collaborator-and-community model, where a collaborator talks to a community of hosts on a blockchain network. You have a set of nodes running a consensus protocol, which executes a smart contract and makes decisions for you. In most cases, it is one-to-many or many-to-many communication.
Web 3.0 is about decentralized systems: decentralized apps, decentralized networks, distributed ledgers, blockchains such as Ethereum, and so on. We also have decentralized governance. So we have to adapt APIs to this decentralized style of system building.
There are four ways of looking at it, which you can consider the four stages of API adoption in a Web 3.0 ecosystem:
- API as a data faucet
- API as a data channel
- API as a data broker
- API as a data collaborator
API as a data faucet
An API is like a faucet: you switch it on or off and get a stream of data. This model is similar to REST APIs. You have a blockchain network hosting a Web 3.0 application, with designated passive nodes that are not involved in active transactions but hold the entire distributed ledger history. A REST API hosted on such a node lets you query all the raw blockchain data. In the case of Ethereum, there is a lot of interesting data to look at: gas prices, transaction data, and so on. A lot of raw data is generated daily, and data is the new oil; you can do many things with it, such as predictive analysis. This is a very simple application where REST APIs in their current form are used as data faucets: they attach to a node in a specific blockchain network, and you query the data. Many websites on the Ethereum platform provide this data.
Etherscan is one of the most popular; it offers a REST API you can subscribe to.
You also have BigQuery, which hosts crypto-focused public datasets. Spice is one of the upcoming platforms where you can run SQL queries against the Ethereum blockchain to query blocks, transactions, and logs.
This is the simplest application of APIs in Web 3.0: using them purely as data faucets.
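As a rough illustration, a data faucet is just a read-only query surface over the ledger history held by a passive node. The block fields and the `query_blocks` helper below are hypothetical, a minimal sketch rather than any real node's API:

```python
# Hypothetical sketch: a passive node exposing its ledger history as a
# read-only "data faucet", in the spirit of a REST endpoint on an
# archive node. Real Ethereum blocks carry far more fields than this.

LEDGER = [
    {"number": 1, "gas_used": 21_000, "tx_count": 1},
    {"number": 2, "gas_used": 180_000, "tx_count": 9},
    {"number": 3, "gas_used": 95_000, "tx_count": 4},
]

def query_blocks(min_gas=0):
    """Read-only query over the ledger: no writes, no transactions."""
    return [b for b in LEDGER if b["gas_used"] >= min_gas]

# Query raw chain data, e.g. blocks that burned at least 90k gas.
print(query_blocks(min_gas=90_000))
```

The point is that the faucet only ever reads: the node serving it takes no part in consensus or transaction processing.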
API as a data channel
When you look at a blockchain network, the underlying network has a lot of data, and many events get generated. Some are system-specific events about the nodes and their performance. The application and the smart contracts running on the blockchain also generate a lot of events.
For example, if you are doing crypto trading, you have upper and lower limits where you can set custom notifications, with a REST API you keep polling for data. But REST is a request-response model, whereas Async APIs are event-driven. You can designate a node on a blockchain that sets rules based on certain reminders, events, or system notifications; when the rules match, those notifications can be published on AsyncAPI channels. With REST APIs, we deal with endpoints. With event-driven or Async APIs, we deal with channels: you define specific channels and the users who can subscribe to them.
Then, they can get a complete stream of data based on the Web 3.0 application itself or its underlying blockchain network.
These are typical uses of the API as a data channel.
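The channel pattern above can be sketched as a tiny in-process pub/sub: events from the chain are matched against rules, and matches are published to named channels that users subscribe to. The channel name, rule, and event shape are all invented for illustration; a real deployment would sit behind an AsyncAPI-described broker, not a Python dict:

```python
# Hypothetical sketch of "API as a data channel": blockchain events are
# matched against rules and fanned out to subscribers of named channels.
from collections import defaultdict

subscribers = defaultdict(list)  # channel name -> list of callbacks

# One rule per channel, e.g. a crypto-trading upper-limit alert.
rules = {"price-alerts": lambda e: e["type"] == "price" and e["value"] > 100}

def subscribe(channel, callback):
    subscribers[channel].append(callback)

def on_event(event):
    """Publish an event to every channel whose rule matches it."""
    for channel, rule in rules.items():
        if rule(event):
            for callback in subscribers[channel]:
                callback(event)

received = []
subscribe("price-alerts", received.append)
on_event({"type": "price", "value": 150})  # matches the rule, delivered
on_event({"type": "price", "value": 50})   # filtered out, not delivered
print(received)
```

Subscribers never poll: they receive a stream only when a rule matches, which is exactly the inversion of the REST request-response model described above.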
API as a data broker
A data broker is an advanced form of a data channel. In a data channel, the channels act as the forwarding plane for the data. With API as a data broker, the broker holds semantic information about the data envelopes, the various data sets pushed to a channel, and, driven by an advanced rule engine, propagates that data to individual users. This is similar to GraphQL: with REST APIs, requests have to be sent one by one, while with GraphQL you can club the requests together into a single server request.
Similarly, in this case, rather than forming individual channels, a data broker architecture with a semantic rule engine lets you define, based on your rules configuration, which application or user each piece of data is forwarded to. This is very important for the future of Web 3.0. The blockchain provides network-wide consensus, but certain applications are evolving where there will also be client-side consensus, and data needs to be fanned out to multiple users, especially for use cases related to DAOs, decentralized autonomous organizations. In such cases, it becomes very important to have a data broker architecture that can orchestrate data transfer and propagation across multiple client devices, typically the apps that are part of Web 3.0.
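A minimal sketch of that rule engine, with invented topics and destinations: each envelope is inspected semantically (here, by its `topic` field) and routed to every destination whose rule matches, rather than flowing down one fixed channel per consumer:

```python
# Hypothetical sketch of "API as a data broker": a rule engine inspects
# the semantics of each data envelope and decides which user or app it
# should be propagated to. Topics and destinations are illustrative.

routing_rules = [
    # (predicate over the envelope, destination)
    (lambda env: env["topic"] == "dao-vote", "dao-members"),
    (lambda env: env["topic"] == "nft-sale", "marketplace-app"),
]

def broker(envelope, deliveries):
    """Propagate an envelope to every destination whose rule matches."""
    for predicate, destination in routing_rules:
        if predicate(envelope):
            deliveries.setdefault(destination, []).append(envelope)

deliveries = {}
broker({"topic": "dao-vote", "payload": "proposal-42"}, deliveries)
broker({"topic": "nft-sale", "payload": "token-7"}, deliveries)
print(deliveries)
```

The DAO fan-out case maps onto this directly: one vote envelope, many member devices, all resolved by the broker's rules rather than by per-user channels.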
API as a data collaborator
The data collaborator scenario is more of an umbrella scenario; it does not yet exist in tangible form as a platform or a specification. In the data broker scenario, the broker holds semantic information about the data fanned out by the blockchain network, and you define rules for propagating that data. In a data collaborator scenario, however, you apply APIs to all the layers of a Web 3.0 network. In a typical reference architecture, the base layer is the Internet; on top of that sit the blockchain nodes, followed by the consensus protocol.
On top of that, you have the smart contracts, and on top of those, the apps. Each layer already has a set of APIs to work with.
We also have the concept of the ABI, the Application Binary Interface, for working with smart contracts. But in a data collaborator scenario, we look at APIs that can affect a programmable consensus protocol: the consensus can be tweaked for different Web 3.0 applications, or custom bridges can be created between multiple blockchain networks. There are also integrations across different blockchain platforms, decentralized exchanges, and NFT marketplaces. That is where you could leverage the data broker architecture to build GraphQL query and mutation logic, tweaking the underlying programmability of the network and its consensus. You can then build more sophisticated applications catering to more specific use cases. This is what we are going to see in the next few years.
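To make the query-and-mutation idea concrete, here is a toy GraphQL-style dispatcher over a programmable network layer. The field names and the "consensus" knob are invented; a real system would use an actual GraphQL server and a real consensus protocol, not a dict of lambdas:

```python
# Hypothetical sketch of GraphQL-style query and mutation logic over a
# programmable network layer: queries read network state, mutations
# tweak it (consensus mode, cross-chain bridges).

network = {"consensus": "proof-of-work", "bridges": []}

resolvers = {
    "query": {"consensus": lambda: network["consensus"]},
    "mutation": {
        "setConsensus": lambda mode: network.update(consensus=mode),
        "addBridge": lambda chain: network["bridges"].append(chain),
    },
}

def execute(op, field, *args):
    """Dispatch a single query or mutation to its resolver."""
    return resolvers[op][field](*args)

execute("mutation", "setConsensus", "proof-of-stake")  # tweak consensus
execute("mutation", "addBridge", "polygon")            # add a custom bridge
print(execute("query", "consensus"), network["bridges"])
```

The design point is that reads and writes share one schema-like surface, which is what lets a collaborator-level API reach down into the consensus and bridging layers.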
These are the four stages in which APIs can be adopted in the Web 3.0 ecosystem. The first and second stages are already happening; thanks to the efforts behind the AsyncAPI specification and GraphQL, the integration is becoming seamless. In the near future, we will see more such integrations of APIs with the various layers of the blockchain as the web continues to evolve.
Even though APIs may seem dated, they are probably going to outlive at least the current era of the web. As we move towards a fully decentralized Web 3.0 ecosystem, APIs will move with it, and they will probably play a much larger role in the programmability of applications across the Internet.