API Testing Series – Testing using stubs
In the first post of this series, we discussed the testing landscape for new API initiatives that depend on one or more back office services. Those external service dependencies raise many issues for testing the APIs. More often than not, developers will simply stub out the calls so that data is returned and testing can continue. This post discusses how that generally works and the positives and negatives associated with it.
If we look at the basic landscape again and how existing back office services are accessed, it will look something like this:
The challenges with testing the above are that:
- Getting access to these services is difficult because they likely have to be shared with others doing similar testing.
- Configuration of, and familiarity with, the APIs is required in order to use them while testing.
- There may be a cost for each call to each of the services required.
For these reasons, developers tend to simply stub out the functionality: a call to a simple piece of code, written in the language of the REST service, that returns a set of values which enables testing to progress. It looks something like this:
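A minimal sketch of such a stub in Python (the function name, fields, and values are hypothetical, chosen only to illustrate the pattern):

```python
# Hypothetical stub for a back office customer-lookup service.
# Instead of making a network call to the real service, it returns
# a fixed, hard-coded response so development and testing can proceed.
def get_customer(customer_id):
    # The real implementation would issue an HTTP request to the
    # back office API; the stub ignores its input and always succeeds.
    return {
        "id": customer_id,
        "name": "Test Customer",
        "status": "ACTIVE",
    }
```

The REST service calls this function in place of the real dependency, so every call succeeds instantly and at no cost.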
Normally the calls to these external services are only introduced very late in the project, which leads to various issues down the road:
- A lack of testing of the integration with the various dependent services.
- Loading such testing to the end of a project generally results in less time for testing.
- A total lack of testing of the unhappy (error) paths until testing commences with the real service. This can represent a huge percentage of code left untested until near the end of the project.
- Any CI/CD processes that involve testing with stubs instead of the real service will miss out on testing large amounts of the code in the service.
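The last two points can be sketched concretely. In this hypothetical example (all names invented for illustration), a CI test that runs against a stub always passes, because the stub can never fail; the error-handling branch of the calling code is never exercised:

```python
# Hypothetical stub: always returns a successful, canned response,
# so it can never trigger the caller's error handling.
def get_customer(customer_id):
    return {"id": customer_id, "name": "Test Customer", "status": "ACTIVE"}

def customer_summary(customer_id):
    try:
        customer = get_customer(customer_id)
        return f"{customer['name']} ({customer['status']})"
    except ConnectionError:
        # Unhappy path for a failing back office call. While the stub
        # is in place this branch is unreachable, so it stays untested.
        return "service unavailable"

# A CI/CD test like this passes, but only the happy path is covered.
assert customer_summary("42") == "Test Customer (ACTIVE)"
```

Until the real service is wired in, the `ConnectionError` branch (and any retry or timeout logic around it) remains dead code as far as the test suite is concerned.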
While this is a pragmatic approach to the issue and enables developers to progress more quickly, it ultimately leads to a number of issues:
- Delays in the release of the software due to lack of testing time.
- A reduced feature set, as testing cannot be completed within the delivery dates.
- Poorer quality software due to incomplete testing during the development phases of the project.
- Lack of confidence in subsequent releases due to incomplete testing as part of the CI/CD process.
This has been an issue for some time now, and a first attempt to solve it came through what is known as 'Service Virtualization'. The next article in the series will discuss how Service Virtualization can help, as well as the issues that it in turn cannot address.