Most of the time, when we develop software, we rely on third-party services to accomplish a certain outcome so that we don’t have to reinvent the wheel. However, this introduces a new level of complexity, especially around testability.
This post is based on a presentation I gave at the “Ministry of Testing - Auckland” meetup. Mocking isn’t new, and there are lots of articles on the internet about it, but it is less common to see content about mocks in the context of system testing or end-to-end testing. Hence, I am grateful for the opportunity to share my learnings about the topic.
I got the chance to learn and implement this technique from the team I was working with at ClearPoint, so I’d like to take this opportunity to thank them for always finding and implementing the best practices to deliver high quality software.
This post will be structured as follows:
- Some of the software integration problems (productivity + testability)
- A few mocking tools, with an explanation of mockserver, which we will use
- Demo (implementation of mockserver as a “replacement” of an external system) — you can skip directly to this part if you are looking for the technical implementation only
- A few ideas (not new) about E2E
Some software integration problems
- Teams build software in parallel, and usually there are dependencies between these systems, whether they’re all developed internally or by another company
Should we wait for the other team to finish their implementation before starting on our own system? This might not be an acceptable solution for the company: software takes months or years to deliver, and companies do not want extra delays and expenses.
Should we ask the other teams to prioritize the implementation of our software dependency? What if that is not possible? The third party could have other clients with a higher priority than our integration…
- Sometimes the systems we’re depending on are already in a working state. However, we might find that the systems’ APIs are rate limited, or slow, or possibly have defects.
We definitely must try to avoid exceeding the rate limit, as this would cost money (when testing or debugging an API, it’s very easy to send hundreds or thousands of requests within hours or even minutes 🤔)
What if there was a bug in the system we’re depending on, should we wait until it’s solved?
What if the system we depend on is slow, whether that slowness is caused by the network or by poorly performing external APIs? Would it be acceptable to slow down our process and tests?
- Environment management might be tricky
We might need to manage the data in the system we’re integrating with. What if the data cannot be created automatically (through APIs)?
How about the system configuration? Is it a system that we’re hosting? That might mean a lot of trouble to get an environment up and running, and a lot of infrastructure management
What if the system we’re depending on requires database management as well? That could be a lot of work
This means our teams’ productivity is going to deteriorate, and the delivery of our product might be delayed.
Our tests and CI/CD are going to be highly affected. Here are a few problems based on Uncle Bob’s article “When to Mock”, with very slight modification. These problems might exist when we do not use mocks:
- Slow test execution
The system we’re depending on has running servers, queries are being executed against its database, and logic is happening in that system. Lots of instructions need to be executed when we call an API from the external system.
The data preparation might not be obvious if that system doesn’t provide an out-of-the-box utility or API to handle it, which adds to the slowness
- Test coverage will be limited
We might need to test error handling, and generating these errors might not be easy in the external system.
We might have edge cases, such as time shifting. We might want to create a record in the past, and that system doesn’t allow it. It would be a lot of trouble to go onto that server and move its clock back in time, or maybe update the DB record manually. How would we automate that?
We might need to test some dangerous operations such as deleting a record from the database, or deleting a file from the system. Doing this during the preparation of the tests might be overwhelming, and a small mistake could delete the wrong record or file.
- Tests are sensitive to bugs in other parts of the system
When our tests fail in CI, we want them to fail because there is a bug in our system, or because there is a case we are not handling. We do not want them to fail because the system we’re integrating with has a bug; that system should have its own tests to identify issues. We also don’t want to risk configuration, database or version changes in the external system breaking our tests.
This means our tests will be slow, incomplete and fragile — we don’t want that. This is where “mocks” help.
First, let us define “mock”. The definition returned by Google is: “make a replica or imitation of something”. It makes sense; that’s exactly what we need.
Here are some tools that can be used to create mocks:
- mockserver — http://www.mock-server.com
- wiremock — http://wiremock.org/
- mockoon — https://mockoon.com/
- postman — https://learning.getpostman.com/docs/postman/mock_servers/intro_to_mock_servers/
Integration and Mocks
In our demo we will use mockserver, but before getting into it, let us quickly go over a case where we mock the responses of API requests originating from the browser, using Cypress (more info here). In that case we are mocking our own backend. It works this way:
We have a front-end that calls an API and waits for a response. If we want to test the behavior of our front-end without worrying about what the APIs are doing, we can mock these APIs. Cypress enables us to stub a response. Running cy.server() tells Cypress to allow mocking. Afterwards we tell Cypress which endpoint to mock and what to return, along these lines:

```js
cy.server();
cy.route('GET', 'http://api1.test.io/heroes', [{ name: 'Iron man' }]);
```

When our tests get executed and the browser calls http://api1.test.io/heroes, the response specified in cy.route will be returned, and no call will reach our server. This is cool.
In our demo, we will test a full system that has a front-end and a backend, and integrates with external APIs:
The system under test can be tested in different ways, including testing functionality through the browser, or directly testing the internal APIs (api 1, api 2 and api 3 in the previous diagram).
If we are doing E2E tests through the browser, the tests perform some operations in the browser that trigger calls to our backend (internal APIs). Our backend then interacts with the external APIs. These external APIs execute some logic (which could be adding records to the DB, deleting records, doing calculations, etc.), and return a response to our internal APIs, which gets reflected back to the front end. At the end, our automated tests do the validation in the browser. A similar process happens when we’re testing our internal APIs (with the browser part excluded).
Note that the interactions between our system and the external APIs are governed by a contract (this is necessary to be able to develop both systems in parallel). The contract is the documentation that defines how the APIs work, what data types to expect, and what formats to send.
As we discussed earlier, the difficulty of developing and testing our system is very clear, especially if the external system is still under development (system setup, configuration, data preparation, etc.). Hence the importance of mocks:
In the previous diagram, we replaced the external APIs with a mock server. The mock server is essentially a piece of software that we install, whether on a physical machine, a VM, or in a Docker container. We run this software and tell our backend to interact with it instead of the real external APIs.
But how will this software know how to respond to our backend, and how to interact with the request? We will teach it by “setting up expectations”.
Without mocks, the first step of our test might be to open the browser and interact with the UI. With mocks this changes a bit: we need to teach the mockserver how to interact with our API and give it the data it needs to return. This could be the first step(s) of the test, or it can be managed in a before hook… Afterwards, we execute our test steps just as we did with the real external system. At the end, we can do the same validation we did with the real external system (e.g. validate that a behavior happened in the browser), or we can use another feature of the mockserver, the “verify expectation”. This feature allows us to verify that an expectation was consumed through an API call.
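As an illustration, verification is done through MockServer’s REST API by sending a PUT request to /mockserver/verify with a body like the following (the path is hypothetical, for a mocked Marvel-style characters endpoint):

```json
{
  "httpRequest": {
    "method": "GET",
    "path": "/v1/public/characters"
  },
  "times": {
    "atLeast": 1,
    "atMost": 1
  }
}
```

MockServer responds with 202 if the request was received the expected number of times, and 406 with a description of the mismatch otherwise.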
The expectation is mainly a combination of a “request matcher”, which will be matched against the request sent from our system, “response data”, which is the response that will be returned to our API, and a “times” object:
The “times” object defines how many times the expectation can be matched. So if it is set to 1 and we make a call to the API and the request is matched, the expectation gets deleted. Another call to the same API will return 404 — not found, as there will be no expectations left
It is also possible to define a “timeToLive” object, which works similarly to the times object, but expires the expectation after a certain period of time
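Putting these pieces together, an expectation created through MockServer’s REST API (PUT /mockserver/expectation) might look like this — the path and response body are hypothetical:

```json
{
  "httpRequest": {
    "method": "GET",
    "path": "/v1/public/characters"
  },
  "httpResponse": {
    "statusCode": 200,
    "body": "{\"data\": {\"results\": [{\"id\": 1, \"name\": \"Iron man\"}]}}"
  },
  "times": {
    "remainingTimes": 1,
    "unlimited": false
  }
}
```

With "remainingTimes": 1, the first matching GET /v1/public/characters consumes the expectation; a second call returns 404.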
The mockserver behaves according to the following diagram. It receives a request and tries to match it against its expectations. If the request matches, it performs an action that returns the response associated with that request (the response that is also defined in the expectation).
Demo / Implementation
The full project can be found here.
Below is the project structure:
|----mappings - Functions that glue our project to the marvel APIs
|----index.ts - Entry point, runner of the graphql server
|----components - React components
|----index.js - Entry point to the application - Renders the App component, which is the container of the graphql ApolloProvider
|----endpoints - Functions that mock the marvel endpoints we're interacting with
|----setupLocalMocks - The mock data creation happens in this folder for local data usage
|----createMockExpectations.ts - Generic function that is called to create mocks, usually called from the endpoints folder; this function cleans the expectation if it already exists, then creates a new one
- Install Node.js and npm (https://www.npmjs.com/get-npm)
- Install Docker (https://docs.docker.com/install/) — will be needed for mock-server
Setup / Explanation
In this section, we will set up the api, ui, mockserver and e2e projects, and explain how they work step by step.
First, start by cloning the repository: https://github.com/AHaydar/mocks-demo/
- Navigate to the ui folder and install the dependencies:
- Run the app in development mode:
- Open http://localhost:3000 to view the apps in the browser
You will see the “No heroes around” message in the browser, as we haven’t started our backend (api) server yet. If you open the browser dev tools, you will notice the following error: POST http://localhost:4000/ net::ERR_CONNECTION_REFUSED, as our front-end is calling an API on port 4000. Let us fix this.
This is a graphql server that will accept requests from the UI, and send requests to the external API (in our case Marvel API).
- Navigate to the api folder and install the dependencies:
- Before we start the graphql server, we need to set up the marvel private and public keys. To do this, register an account at https://developer.marvel.com/account. Get the public key and private key and store them in api/config/keys.ts
- Start the graphql server by running:
- You will see the following message in the terminal: 🚀 Server ready at http://localhost:4000/
- If you refresh the ui page you had open on http://localhost:3000, you will get a list of characters from the marvel API
- If you open the http://localhost:4000 in the browser, you will be directed to a graphql playground (provided by the apollo-server), where you can run queries against the defined endpoint. In our example, it’s called characters, which returns an array of character objects (formed of id, name and description):
- Type the query as in the left-hand side of the previous screenshot and click run. You will get a list of marvel characters with their names and descriptions
How does it work?
When you run the query, the graphql server executes the listMarvelCharacters function, which can be found under api/mappings/marvelCharacters.ts. This function makes a call to the marvel api and returns the response:
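The screenshot of that function is not reproduced here, but one piece of it is worth sketching. The Marvel API authenticates server-side calls with a timestamp, the public key, and an md5 hash of ts + privateKey + publicKey (as documented by Marvel). Below is a hedged TypeScript sketch of building those query parameters; the function name is hypothetical, not taken from the repo:

```typescript
import { createHash } from 'crypto';

// Hypothetical helper: builds the auth query parameters the Marvel API
// expects on every server-side request (ts, apikey, hash).
export const buildMarvelAuthParams = (
  publicKey: string,
  privateKey: string,
  ts: string = Date.now().toString()
) => {
  // Per the Marvel API docs: hash = md5(ts + privateKey + publicKey)
  const hash = createHash('md5')
    .update(ts + privateKey + publicKey)
    .digest('hex');
  return { ts, apikey: publicKey, hash };
};
```

The actual mapping function in api/mappings/marvelCharacters.ts would append these parameters to its request to the characters endpoint.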
The baseURL is defined in api/config/environmentConstants.ts as follows:
This means that when we start the graphql server with the mock environment variable set to true and hit the characters endpoint, it will interact with http://localhost:1080 instead of the marvel api. We will run the mockserver on port 1080 :).
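The file itself was shown as a screenshot; a hedged sketch of what such a constant might look like, assuming the fallback is Marvel’s public gateway and the names are hypothetical:

```typescript
// Hypothetical sketch of api/config/environmentConstants.ts:
// when the `mock` environment variable is "true", point the backend
// at the local mockserver instead of the real Marvel API.
export const getBaseURL = (mock: string | undefined = process.env.mock): string =>
  mock === 'true' ? 'http://localhost:1080' : 'https://gateway.marvel.com';

export const baseURL = getBaseURL();
```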
Let’s stop the graphql server and start it with the mock env set to true: mock=true npm start, or just run npm run start:mock, which is a script created in package.json to do the same.
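That script is just a one-liner. A hypothetical excerpt of the api project’s package.json (the start command itself is an assumption):

```json
{
  "scripts": {
    "start": "ts-node index.ts",
    "start:mock": "mock=true npm start"
  }
}
```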
Run the same query in the graphql playground. A “connect ECONNREFUSED 127.0.0.1:1080” error will be returned. This is because the mockserver has not been started yet
If you refresh the UI now, you will get the “No heroes around” message.
Open a new terminal
- Navigate to the mockserver folder
- To start the mockserver, run the following command in your terminal: docker-compose up. Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services.
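A minimal docker-compose.yml for this setup could look like the following, using the official mockserver image and exposing port 1080 (the actual file in the repo may differ):

```yaml
version: "3"
services:
  mockserver:
    image: mockserver/mockserver:latest
    ports:
      - "1080:1080"
```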
Once the docker container has started, go to the graphql playground and run the query. A “Request failed with status code 404” error will be returned.
This means we were able to connect to the mockserver. However, it still has no data or simulation of APIs (the mockserver doesn’t know anything about the characters endpoint yet).
If we refresh the UI, we will still get the “No heroes around” message
Setup local mocks
- Navigate to the e2e folder and install the dependencies:
- Generate the client code to interact with the marvel api. The marvel API has swagger documentation: https://gateway.marvel.com/docs/public. We are using the @openapitools/openapi-generator-cli library to generate the client code based on that swagger documentation. This can be done by running npm run build:clients-openapi, which will generate a folder containing all the models defined in the swagger spec.
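For reference, such a script typically wraps an openapi-generator-cli invocation along these lines; the spec URL, generator and output folder below are assumptions, not taken from the repo:

```json
{
  "scripts": {
    "build:clients-openapi": "openapi-generator-cli generate -i <marvel-swagger-spec-url> -g typescript-axios -o src/generated"
  }
}
```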
Why is the previous step necessary? Why can’t we directly interact with the API through an HTTP client (axios, for example)?
In fact, we can. However, using a code generator based on the swagger (openAPI) documentation helps us keep up to date with any changes happening to the API, and causes the tests to fail when a change happens.
If we were NOT relying on the documentation (contract), and since we are using mocks, it would be possible for a new version of the API to be released without us knowing about it. This version could break our system (e.g. the new API has a new mandatory field which our system is not passing), but our tests would not fail, as they would still be mocking an old version of the API. Hence the importance of this kind of “contract” testing.
This step can be part of the CI/CD, where the code gets generated during the build based on the latest swagger contract. This will enable finding changes or issues early on in the process.
Open the e2e/src/support/mockserver/setupLocalMocks/runner.ts file. Notice that we rely on a couple of generated models in that file (Character and CharacterDataWrapper). Assume that in the future marvel decides to add a mandatory “image” field to the “Character” model: our code would break, because we do not have an image field in our data preparation code.
Let us run the file to set up the local mocks: npm run setup-mocks
Once done, refresh the UI page. Notice that we have mocked data there. If you try to run the query in the graphql playground, you will get the same data.
Here’s how this happened:
- The previous command ran the runner.ts file, which calls a main function
- The main function creates a charactersResponseBody based on data prepared statically in the e2e/src/support/mockserver/setupLocalMocks/constants.ts file (change something in this file, re-run the npm run setup-mocks command, and notice the change on the UI)
- The prepared charactersResponseBody data object is passed to a mockMarvelCharacters function defined in e2e/src/support/mockserver/endpoints/mockMarvelCharacters.ts
- The mockMarvelCharacters function calls a generic createMockExpectations function in the e2e/src/support/mockserver/createMockExpectations.ts file, which creates the expectation and its response in the mockserver (as discussed earlier)
The createMockExpectations function deletes the previous expectation if it is the same (we do not want to end up with the same expectation created too many times).
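The shape of such a generic function is easy to picture: it clears any matching expectation (PUT /mockserver/clear), then creates a new one (PUT /mockserver/expectation). The actual HTTP calls are omitted here; the hedged sketch below only builds the expectation payload, and the names and defaults are hypothetical, not taken from the repo:

```typescript
// Hypothetical shape of a MockServer expectation payload.
interface Expectation {
  httpRequest: { method: string; path: string };
  httpResponse: { statusCode: number; body: string };
  times: { remainingTimes: number; unlimited: boolean };
}

// Builds the JSON body that would be sent to the mockserver with
// PUT /mockserver/expectation, after clearing any previous matching
// expectation with PUT /mockserver/clear.
export const buildExpectation = (
  path: string,
  responseBody: unknown,
  remainingTimes = 1
): Expectation => ({
  httpRequest: { method: 'GET', path },
  httpResponse: { statusCode: 200, body: JSON.stringify(responseBody) },
  times: { remainingTimes, unlimited: false },
});
```

A function like mockMarvelCharacters would call this with the characters path and the charactersResponseBody prepared earlier.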
- To run the tests:
These tests create the expectations in the mockserver (similar to what we did in the previous step), then open the browser and validate the data in the browser.
Follow along with the tests in the feature file under e2e/src/features/getMarvelCharacters.feature. We are using webdriver.io and cucumber in our tests. Any other test runner or browser automation tool would work.
Mocking benefits and difficulties
So far we know that mocking helps us have a faster development loop, enables better testing, gives us full control over what we’re testing, and most importantly provides us with reliable builds and CI/CD.
Some of the difficulties we faced are:
- Setting up expectations might become difficult and time-consuming with complex APIs
- Using the same data between tests might be problematic, especially when executing the tests in parallel (for example, the first test creates the expectation and another test deletes it, which would cause the first test to fail) — it’s preferable to keep expectation data unique per test
- Sometimes APIs and contracts can become inconsistent. Updating an API without updating the contract would lead to errors in production, while tests running against mocks would keep passing
E2E testing is important, but we should try to avoid creating brittle tests that are too long, unreadable or slow to execute. Create lots of small tests that validate functionality in isolation, and systems in isolation. The combination of these small, isolated tests, together with contract tests, is what forms our E2E tests.
I hope this was helpful, and I would love to hear your comments and feedback. Feel free to send any questions you might have (either in the comments or by direct message on twitter).