In this part of the hindsight lessons about automation I will continue with the principles of automation, focusing on the one that I believe is the most significant, at least in my career: the test isolation principle.


Edit 04 Aug 2018: A reader let me know that there is more information about this concept and that it goes by a different name: the “hermetic testing pattern”. You can find out more in these resources:

Hermetic testing

Hermetic servers – Google testing blog

What is test isolation?

To be honest, I am not aware of a textbook definition of test isolation; I don’t even see the test isolation principle mentioned a lot, which is the reason I decided to write about it. I can, however, give you my subjective definition of what test isolation is.

Note: There’s another meaning of test isolation in the context of unit tests, where the test isolation principle stands for isolating your unit tests from anything external, using mocks, stubs and other approaches. Here, however, we will focus on the isolation between tests, mainly at the integration level and above. If you are not familiar with the different layers of testing, I invite you to take a look at the part about layers of automation from the current series.

Test isolation means developing and maintaining tests that are logically isolated, and therefore independent of any other tests run in sequence or in parallel with them. They are also isolated from the environment, from specific data, or from configuration in general. In other words, anything a test needs, it should take care to create on its own.
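As a minimal sketch of that definition, here is a hypothetical self-contained test in plain Java. The `UserStore` class and its methods are invented for illustration, standing in for whatever system you actually test; the point is that the test creates its own data instead of assuming it exists:

```java
import java.util.HashMap;
import java.util.Map;

class IsolatedTestExample {
    // Hypothetical in-memory "user store" standing in for a real database.
    static class UserStore {
        private final Map<String, String> users = new HashMap<>();
        void create(String id, String name) { users.put(id, name); }
        void rename(String id, String newName) { users.put(id, newName); }
        String find(String id) { return users.get(id); }
    }

    // An isolated test: it creates everything it needs instead of assuming
    // an earlier test (or a pre-seeded environment) left a user behind.
    static boolean testRenameUser() {
        UserStore store = new UserStore();          // fresh state, owned by this test
        store.create("u-1", "Alice");               // arrange: set up its own data
        store.rename("u-1", "Alicia");              // act
        return "Alicia".equals(store.find("u-1"));  // assert
    }

    public static void main(String[] args) {
        System.out.println(testRenameUser() ? "PASS" : "FAIL");
    }
}
```

Because the test owns its `UserStore`, it gives the same result whether it runs first, last, alone or in parallel.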

I know this might sound kind of trivial, but if you have ever written code, you would be surprised by the implicit assumptions you build into it without realising.

Test isolation is in fact a variation of one of the fundamental principles in software development: strong cohesion and loose coupling. Strong cohesion is not specifically related to this topic, but you might be interested in looking at this article, which discusses them both in a really detailed manner.


The one that’s interesting for us is loose coupling. What this means is that our code (classes, methods, interfaces) should be independent and know nothing about the logic of the surrounding classes – the classes that will invoke it, extend it, etc. If we intentionally or unintentionally break that rule, we start creating what’s called “spaghetti code”: our classes have tiny strings attached to each other, and once we change anything in one of them, there might be catastrophic, unexpected results.

In testing, the test isolation principle is the counterpart of loose coupling: we want to create tests and test functions that are independent and don’t rely on other tests; otherwise we might introduce logic that becomes very flaky under specific conditions.

A well-isolated test doesn’t rely on any other test being executed first in order to do its job successfully.

Well-isolated tests are distinguished by the fact that they provide consistent results when they are run:

  • Alone, as a single test
  • Together, as a suite
  • Split into groups: functional tests only, visual only, regression only, etc.
  • In the default order
  • In randomized order

How to spot a test with bad isolation?

There are a couple of criteria that fit the description of a badly isolated test:

  • It fails when run alone, but passes in the suite.
  • It fails if a specific order of execution is broken.
  • It fails on random occasions.
  • It depends on data or conditions that are set up by another test.
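To make the last symptom concrete, here is a deliberately bad example in plain Java: two hypothetical tests sharing mutable state, where the second one silently depends on data created by the first. All names here are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

class OrderDependencyDemo {
    // Shared mutable state between tests -- the root of the problem.
    static Map<String, String> sharedDb = new HashMap<>();

    static boolean testCreateUser() {           // "test A"
        sharedDb.put("u-1", "Alice");
        return sharedDb.containsKey("u-1");
    }

    static boolean testRenameUser() {           // "test B": silently assumes A ran first
        if (!sharedDb.containsKey("u-1")) return false; // fails when run alone
        sharedDb.put("u-1", "Alicia");
        return "Alicia".equals(sharedDb.get("u-1"));
    }

    public static void main(String[] args) {
        // Run B alone: it fails. Run A first, then B: it passes.
        System.out.println("B alone: " + (testRenameUser() ? "PASS" : "FAIL"));
        sharedDb.clear();
        testCreateUser();
        System.out.println("A then B: " + (testRenameUser() ? "PASS" : "FAIL"));
    }
}
```

The same suite gives different answers depending on execution order – exactly the kind of hidden coupling the techniques below are meant to expose.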


Techniques to expose bad test isolation

  • Running tests in randomized order
    Not too long ago I wrote an article about randomizing test execution with TestNG. Some test runners have this as a built-in feature, and the ones that don’t can still do it by simply extending the test runner.
    Why is this useful?
    Well, once you run your tests in random order, you find out 100 different ways they break, because you had implicit expectations about the way they worked. When we design and write our tests, we often assume that some variable is set or some condition is met, when in fact our test, or its set-up method, should take care of it.
  • Consistency run – loop of 100 consecutive runs
    Yes, that’s what I mean: running your suite or class 100 times in a row. And yes, it is not a fast procedure, and it will probably not be very useful in a suite of 10,000 tests or more, but it is not something we run hourly.
    The purpose of this is to expose “flaky” tests – tests with bad design or behaviour that don’t produce consistent results.
    It is easy to do: simply write a console script that calls the suite or test class you want to check 100 times in a row. To optimize it, I set a fail-first condition, meaning I want it to stop at the first failure. At this point we don’t care how many of those 100 runs fail; if it fails once, it’s a problem. You will also need a good amount of logging, because these problems can be very slippery to debug.
  • Running test with fresh, different, restored environment
    Another type of “chains”, or dependencies, that we often build into our testing code is the dependency on data. We expect that a record is already there, so we can get it or update it. That’s a problem, and this sort of problem can be exposed very easily: create a fresh instance of the DB you are using with a new batch of data (you might need to run some scripts on it in case you have client-sensitive data or anything that falls under GDPR or similar regulations).
    I had my fair share of rewriting tests where I used one single user and expected it would always be there, or used it because I knew it met condition X – but these are all bad design decisions.
  • Every test should be responsible for setting up its environment
    Another thing that might be useful for your tests, related to environment and configuration: normally, every test should be responsible for creating the conditions it needs to run. If there is a lot of logic that is executed for all tests, it can be pulled into the set-up method – but remember that it runs every time before a test, so be careful: don’t put useless stuff there, or create a good inheritance hierarchy for your set-up.
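To illustrate the randomization idea without tying it to a specific runner, here is a minimal plain-Java sketch that shuffles a hypothetical list of tests using a logged seed, so that a failing order can be reproduced later. The test names and bodies are placeholders:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Random;

class RandomOrderRunner {
    public static void main(String[] args) {
        // Hypothetical "suite": test names mapped to test bodies.
        List<Map.Entry<String, Runnable>> tests = new ArrayList<>(List.of(
            Map.entry("testLogin",  (Runnable) () -> { /* test body */ }),
            Map.entry("testSearch", (Runnable) () -> { /* test body */ }),
            Map.entry("testLogout", (Runnable) () -> { /* test body */ })
        ));

        // Seeded shuffle: log the seed so a failing order can be replayed.
        long seed = System.currentTimeMillis();
        Collections.shuffle(tests, new Random(seed));
        System.out.println("order seed: " + seed);

        for (Map.Entry<String, Runnable> t : tests) {
            t.getValue().run();
            System.out.println("ran " + t.getKey());
        }
    }
}
```

In TestNG itself you would hook this into the runner (the article linked above covers that); the logged seed is the important detail either way.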
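The consistency run with its fail-first condition can be sketched like this. The `runSuiteOnce` method is a placeholder for invoking your real suite (in practice you would shell out to your build tool or call the runner’s API):

```java
class ConsistencyRun {
    // Placeholder for running the real suite once; in practice this would
    // invoke your build tool or test runner and report pass/fail.
    static boolean runSuiteOnce(int iteration) {
        return true; // pretend the suite passed this iteration
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            if (!runSuiteOnce(i)) {
                // Fail-first: one failure out of 100 is already a problem.
                System.out.println("FAILED on iteration " + i);
                return;
            }
        }
        System.out.println("100/100 consistent");
    }
}
```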
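Here is a sketch of the fresh-environment idea, using an invented in-memory map as a stand-in for a freshly created and seeded DB instance. Each run builds its own copy of the data, so no test can depend on leftovers in a long-lived shared environment:

```java
import java.util.HashMap;
import java.util.Map;

class FreshEnvironment {
    // Hypothetical seeding step, standing in for the scripts that build
    // a fresh DB instance with a known (anonymised) batch of data.
    static Map<String, String> freshDb() {
        Map<String, String> db = new HashMap<>();
        db.put("u-1", "Alice"); // every run starts from the same known batch
        return db;
    }

    static boolean testDeactivateUser(Map<String, String> db) {
        db.remove("u-1");
        return !db.containsKey("u-1");
    }

    public static void main(String[] args) {
        // Each run gets its own instance -- the test never assumes "u-1 is
        // already there" in some environment that other tests also mutate.
        boolean first  = testDeactivateUser(freshDb());
        boolean second = testDeactivateUser(freshDb()); // no leftover state
        System.out.println((first && second) ? "PASS" : "FAIL");
    }
}
```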
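And finally, a sketch of the per-test set-up responsibility: a `setUp` method, playing the role of TestNG’s `@BeforeMethod`, rebuilds the state before every test, so no test sees another test’s leftovers. The names and data are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

class SetUpPerTest {
    static Map<String, String> db;

    // Runs before every test (like TestNG's @BeforeMethod): each test
    // starts from a known, freshly built state -- keep it lean, since
    // this cost is paid on every single test.
    static void setUp() {
        db = new HashMap<>();
        db.put("u-1", "Alice"); // seed only what every test needs
    }

    static boolean testRename() {
        db.put("u-1", "Alicia");
        return "Alicia".equals(db.get("u-1"));
    }

    static boolean testDelete() {
        db.remove("u-1");
        return !db.containsKey("u-1");
    }

    public static void main(String[] args) {
        setUp();
        boolean a = testRename();
        setUp();                 // fresh state: testDelete never sees the rename
        boolean b = testDelete();
        System.out.println((a && b) ? "PASS" : "FAIL");
    }
}
```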

This is my short list of techniques and suggestions for improving the isolation of your tests and breaking their dependencies. These are simply the ones I know about and have used; if you know more, I’d love to hear about them.

Thanks for reading.


Senior software testing engineer at Experience in mobile, automation, usability and exploratory testing. Rebel-driven tester, interested in the scientific part of testing and the thinking involved. Testing troll for life. Retired gamer and a beer lover. Martial arts practitioner.
