Hindsight lessons about API testing


This is the last part of Hindsight lessons about automation that I will write for now; I think I have written enough about my philosophy of test automation and my personal experience with it. The last thing I want to share relates to a project I have been involved in for about a year now: writing automated API checks for a backend we are currently developing. We write the checks in PHP, using a framework called Codeception. I won’t spend too much time on the framework, but rather on the basics. So, here they are – the lessons about API testing I wish I had known before I screwed up badly.



The lack of information

The first thing I must highlight is the terrible lack of quality writing about API testing. Of course, when I initially started dealing with API tests I knew what an API is and what we use it for, yet I was interested in the principles of API testing.

An API is one of the areas where it makes sense to invest in automation, because it is an interface that is consumed by code or by an application, so it fits perfectly the description of an application that is easy to test with code. Yet I was amazed how limited the written information about API testing basics is. When I first started to deal with it, having been a tester for nearly 5 years, I knew how to test an app, but I banged my head against a couple of questions:

  • What does it make sense to test via the API?
  • Is it the same as testing from another interface?
  • What about negative testing?
  • Are there any design patterns for writing API tests?

A lot of these questions remained unanswered, but some of them I answered on my own, by trial and error.

Anyway, I don’t feel like I found a lot of useful written information about API testing – mostly slides from conference talks and short articles. So, if you are familiar with API testing and have experience with it, I invite you to write a book about it; it will be very helpful.

A few worthy mentions, if you are interested in starting or developing your API testing skills:

Exploring your APIs with Postman by Amber Race (slides) – I never saw the talk, but the slides look very useful.

Service virtualization by Bas Dijkstra free e-book – Bas was always one of my heroes related to automation and API testing.

This article by SmartBear – very useful and interesting, especially the 3 levels of API testing.

Automating and Testing a REST API by The Evil Tester, Alan Richardson – a very useful practical guide; I really enjoyed the idea of App as an API, or API as an App.

Aside from all of these, I know there are tons of courses and workshops on API testing, but what I often find is that they are too focused on the basics (what an API is, what status codes mean, etc.) and not focused enough on the goals of testing and the strategies to reach them.

What do you test when testing an API?

The project I am involved in is a REST API consumed by an SPA written in React. I had the chance to be involved from the earliest stages, when we still didn’t have the UI or the actual application. This posed some interesting challenges for me. So, how the hell do you test that damn API?

When there is no UI to give you hints, you practically have no idea how the services will be consumed and used. So, the best things I was able to do to test it were:

  • Make sure every endpoint works fine as a separate entity
  • Use documentation as an explicit oracle
  • Question whatever was vague and unclear

All of this brought me some great insights; I learned the following important lessons:

  • Testing endpoints as separate entities is not even half the work to be done
    It sounded like a good idea to me, but it only works with simplistic endpoints; once you have more profound business logic, it gets very complicated to test endpoints in isolation. To test your API holistically, you will need insight into how it is consumed by the client.
  • Auto-generated docs are absolute trash as an information source
    Normally, an auto-generated doc doesn’t really help you figure out complex logic, so if you have cases where input may vary or has certain constraints, the doc might be totally useless.
    Still, it makes sense to use it as an explicit oracle, asking “if I were a first-time user of this API, what information does this give me?” That will help you uncover a lot of problems in the API, or in the doc itself.
  • More bugs come out of questioning than out of test execution
    This is the part many people can’t figure out, and it comes down to the balance between exploration and scripting. What we should realise when we automate and create tests is that the checks, in the end, are just a formalised version of the performance of testing; just as the journey is more important than the destination, exploration is more important than the formal scripts that come out of it. In my experience, the actions I performed while exploring the API brought me much more information about significant problems than the scripts themselves.
    The scripts, on the other hand, are just a formalisation of that knowledge, which I can easily reuse to make sure the API is still in the state I left it. Once it changes, I’ll have to re-explore it.
    I will spend more time on that concept once I start my “Hindsight lessons about exploration”.

Significant cognitive barriers

Another interesting thing I found while testing the API was that I had significant problems figuring out what data to use (valid, invalid, null, empty string, etc.) as parameters for our services.


Example: one of the endpoints related to login threw a 500 Internal Server Error if you used Chinese symbols as input. Testing via the API, it never came to my mind, although it’s one of the first things that would come to mind if I were testing a text field in the UI.

There is an important take-away here: being human users, we need the perspective of a human user, or a human-oriented interface, to produce quality testing. Our understanding is limited, our perceptions and thinking are biased, and we easily get caught in our own mind traps. If you want to test your API well, make sure you know:

  • What are these endpoints doing?
  • How are they consumed by the front end?
  • What systems do they form?
  • How do they depend on each other?

I believe that might be useful to all of you.

Types of tests

Over time, trying different things, I found that the tests that make sense to perform fall into three groups.

What do we want to know about an API when testing it?

  • That all the API endpoints are operating
  • The correct data is returned
  • The endpoints are usable by the client (whatever is consuming them)

So, for this purpose, I came up with the following types of tests.

Status code checks

The purpose of these is to simply check if the endpoint is operational.

For example, the spec of your endpoint will say that you have codes like:
200 for success
400 for wrong input
404 for an unavailable resource

Try to create very simple tests that make sure all of these are producible.


These will be the tripwire traps of your project: they will make sure nothing is damaged when you make major changes, and they will let you know once something acts differently. They must also be simplistic, fast and definitive. No ambiguity is allowed here.
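As an illustration, here is a minimal sketch of what such tripwire checks could look like in Codeception, assuming the REST module is enabled. The /login and /users endpoints and their payloads are made-up examples, not our actual API:

```php
<?php
// Minimal status-code tripwires, assuming Codeception's REST module.
// The endpoints and payloads below are hypothetical examples.
class StatusCodesCest
{
    public function loginSucceedsWith200(ApiTester $I)
    {
        $I->sendPOST('/login', ['email' => 'user@example.com', 'password' => 'secret']);
        $I->seeResponseCodeIs(200);
    }

    public function badInputGives400(ApiTester $I)
    {
        $I->sendPOST('/login', ['email' => 'not-an-email']);
        $I->seeResponseCodeIs(400);
    }

    public function missingResourceGives404(ApiTester $I)
    {
        $I->sendGET('/users/999999');
        $I->seeResponseCodeIs(404);
    }
}
```

Each test is a single request and a single, unambiguous assertion, which keeps the suite fast and definitive.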


A few gotchas:

  • These give a very limited amount of information; treat them like on/off switch indicators.
  • 400 cases might vary a lot, so make sure you cover the ones that matter. Testing some of them might be more useful in a scenario context.

Structure checks

Status codes are great, but they simply indicate whether the service is operating according to its formal rules. What the consumer cares about is data, and code 200 or code 400 gives us no clue about the data. Also, a status code check on a GET method doesn’t really provide a lot of insight.

I decided it might be useful to test that the proper data is returned.

To do this we used a built-in method in Codeception called seeResponseMatchesJsonType. It looks in your JSON response using JSONPath (similar to XPath, but for JSON) and validates that a specific variable of your response contains data of type X.

Here’s an example from Codeception’s site:

// {'user_id': 1, 'name': 'davert', 'is_active': false}
$I->seeResponseMatchesJsonType([
     'user_id' => 'integer',
     'name' => 'string|null',
     'is_active' => 'boolean'
]);

// narrow down matching with JsonPath:
// {"users": [{ "name": "davert"}, {"id": 1}]}
$I->seeResponseMatchesJsonType(['name' => 'string'], '$.users[0]');

You can also do comparisons on any kind of variable, from arrays to integers and floating point values. When it comes to comparing values and patterns, you can do fancy stuff like:

  • integer:>{val} – checks that the integer is greater than {val} (works with float and string types too).
  • integer:<{val} – checks that the integer is lower than {val} (works with float and string types too).
  • string:url – checks that the value is a valid URL.
  • string:date – checks that the value is a date in JavaScript format: https://weblog.west-wind.com/posts/2014/Jan/06/JavaScript-JSON-Date-Parsing-and-real-Dates
  • string:email – checks that the value is a valid email according to http://emailregex.com/
  • string:regex({val}) – checks that the string matches the regex provided in {val}.


This might be useful for validating data returned by GET services, as it can sometimes contain a lot of information.
The main benefit, to me, is that you get warned if data is not returned. For example: you make a POST to endpoint X and you get a response with the ID of the record in the DB. Not getting that ID might indicate serious problems.
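For instance, the fancy matchers above let a single structure check assert quite a lot about such a response. This is a hypothetical example; the endpoint and field names are made up:

```php
// Hypothetical response to POST /users:
// {"id": 42, "email": "jo@example.com", "created_at": "2018-05-01T10:00:00.000Z"}
$I->seeResponseMatchesJsonType([
    'id'         => 'integer:>0',   // the DB record ID we care about
    'email'      => 'string:email',
    'created_at' => 'string:date',
]);
```

If the ID is missing, zero, or the wrong type, this check fails immediately.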

A few gotchas, specific to that method:

  • You might get in trouble if the data is variable or optional; there is no “optional” flag in the response check, so you either check for a field or you must omit it.
  • You can check for type, format, range and regex, but not for specific data – except by using a regex.
  • When the response has many levels of nesting, the test and the JSONPath you use might get nasty (pun intended :D).

Scenario checks

Inspired by Alan Richardson’s idea of App as an API, and in order to build a holistic understanding of the API and its usage, another important point is to have tests that call a few different endpoints in the same manner the application would.


You might say: “Why would I do this, if the front end will do it anyway?” Mainly to find gaps between the design of the front-end flow and the data it gets from the back end – and it will save a lot of energy for your front-end devs.

The idea is to produce simple scenarios. Example: you open the user profile and update it. As a test, that means you perform a GET, acquire the needed data and send it back as a POST.
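A sketch of that profile scenario in Codeception might look like this. The /profile endpoint and its fields are assumptions for illustration, not our real API:

```php
<?php
// Hypothetical GET-then-POST scenario, assuming Codeception's REST module.
class UpdateProfileCest
{
    public function updateProfileName(ApiTester $I)
    {
        // Read the profile the way the front end would.
        $I->sendGET('/profile/1');
        $I->seeResponseCodeIs(200);
        $profile = json_decode($I->grabResponse(), true);

        // Change one field and send the whole record back.
        $profile['name'] = 'New name';
        $I->sendPOST('/profile/1', $profile);
        $I->seeResponseCodeIs(200);

        // Verify the change actually persisted.
        $I->sendGET('/profile/1');
        $I->seeResponseContainsJson(['name' => 'New name']);
    }
}
```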
A few gotchas:

  • These will take more time to run.
  • They will take more time to develop.
  • They will require better abstractions in your framework, so you can write tests efficiently.
  • You can easily get caught up in trying to produce “human-like testing”. Better to focus on the variability of your scenarios.

Design of the framework

This was one of the points where I banged my head very hard. I was trying to find a design pattern for API tests, but I didn’t find any. In the end I came up with my own framework design; it serves the purposes and needs we have. Here are some guidelines I followed:

  • Splitting the logic of the tests from the logic of the framework
    This is often used in design patterns for UI tests, like page objects. It is a good idea to write your framework in such a way that your tests use abstractions over the built-in functionality of the tool you are using.
  • Accelerators
    If you have an action that you perform over and over in your tests, it makes sense to pull it out as a private method in the test class, or in a base class, to improve code reuse and minimise copy-paste errors.
  • Test data generators
    In this case I was lucky to have the support of our developers, who helped us create tools to generate the data and database records needed to test the API.
    So again, an important lesson here: collaborate with your devs; they can help you do your job better in many ways, not just by fixing bugs.
    Example: if I want to test the WordPress API, I might need a tool that generates a random post, or a user I can log in with to perform admin-related actions, etc.
  • Using programming paradigms and OOP
    During the development of your framework you will find lots of things that could be done better, more efficiently, with more code reuse. Here’s where OOP concepts like polymorphism and inheritance come to help. For example: we had a lot of logic repeated in our setup methods, so I created an inheritance chain in which a test class uses only the part it needs, instead of that logic being copy-pasted in 100 different places.
    So, if you have the opportunity to refactor your framework, do it; don’t postpone it. It looks like a big time investment, but it is a good one, because you’re investing in better maintainability, faster execution and better readability of your tests.
    It would be a mistake to try to engineer the perfect framework up front. Maybe if you are a developer with 20 years of experience it is possible, but in my experience writing code is an organic process: it has its life cycle and a natural process of evolution. If you force the evolution, you get a mutant.

Few things that didn’t happen, but might help

Here’s a couple of things that I didn’t realize yet, for one reason or another, but I think they might help.

  • Null inserter – just create a method or a class that invokes an API method and passes null to the parameters in the POST body or in the URL. Believe me, you will find great defects with this.
  • Inserter of bad data – like the above, it makes sense to have a tool that inserts bad data into your parameters in a controlled and traceable way – things like data from the Big List of Naughty Strings, for example.
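A null inserter can be a tiny, framework-agnostic helper. This sketch takes a known-good payload and produces one variant per field with that field set to null, so each request isolates exactly one parameter; the /register endpoint mentioned in the comment is a made-up example:

```php
<?php
// Given a valid payload, return one copy per field with that field set to
// null, keyed by the nulled field so failures stay traceable.
function nullVariants(array $payload): array
{
    $variants = [];
    foreach (array_keys($payload) as $field) {
        $variant = $payload;
        $variant[$field] = null;
        $variants[$field] = $variant;
    }
    return $variants;
}

// Usage: fire each variant at the endpoint and see which field breaks it,
// e.g. $I->sendPOST('/register', $body); $I->seeResponseCodeIs(400);
$good = ['email' => 'user@example.com', 'password' => 'secret', 'age' => 30];
foreach (nullVariants($good) as $field => $body) {
    echo "nulled: $field\n";
}
```

The same shape works for a bad-data inserter: swap the null assignment for values drawn from a list of naughty strings.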

This is the short list of lessons I learned about API testing in the limited time I had. I don’t consider myself an expert, so if you see something terribly wrong, name it. And if you have anything to add, I’d love to read it.

Thanks for reading. 😉

Senior software engineer in testing. The views I express here are mine, they don't represent any position held by any of my employers. Experience in mobile, automation, usability and exploratory testing. Rebel-driven tester, interested in the scientific part of testing and the thinking involved. Testing troll for life. Retired gamer and a beer lover. Martial arts practitioner.

8 thoughts on “Hindsight lessons about API testing”

  1. Hey Slavchev, thank you for writing this blog. It’s very useful and I’m in the same boat; this will save me a lot of time designing my API framework. Great work!

  2. Hi,

    This was a good read.

    The only additional testing I was intending was validation testing – sending duff data and checking the response – which should be covered by the status checking that you mention.

    It just so happens I’m starting this API testing framework using Codeception also, and I stumbled over this article looking for design patterns for API testing frameworks. Do you have any examples of the base framework, to show how you build up the request bodies and how you set these out?


    1. Hello, Andrew!
      Thanks for the kind words.
      I don’t have it in a repo, although I am willing to create some proof of concept.
      You can probably look at the Media page of my blog, where a video and slide deck with the same name are available, and review them to get some ideas. Hope that helps!
