E2E testing manifesto


Pretty recently I had a conversation with my direct manager about end-to-end (e2e) tests and their purpose. I don’t know if it’s my linguistics background or something else, but I think better when I write, so I decided to list the things I believe a good end-to-end test should be or do, and the things it shouldn’t. Here is how I arrived at this list.

By the way – this list isn’t meant to be exhaustive; it will change and evolve as my testing, and my understanding of testing, changes and evolves. So please feel invited to share your own dos and don’ts on end-to-end tests.

What should a good e2e test do?

I am not going to dive into textbook definitions of an e2e test; I believe anyone can Google those. Instead, I am interested in what characteristics I find valuable in e2e tests.

It should represent a valid scenario mirroring a real client using the product we build to achieve a certain goal.

If we look at e2e tests from a testing terminology perspective, we might say they are closer to use cases than to test cases, given the following definitions:

Test case: A specific testing situation in which the tester performs an action or set of actions, with set prerequisites, environment state and data, in order to validate their beliefs and/or expectations about the functional correctness of the product.

Use case: A scenario in which the end user (hence the name use case) takes a chain of actions in the product – a path aligned with a certain end goal. This often involves multiple functionalities, sub-systems and external outputs.

If we are clever testers, we will be interested in the latter, even if for some reason we decide to perform part of our testing by breaking it into smaller functional test cases. Our focus should be on the end user experience.

Given the above – e2e tests are intentionally cross-functional. They will jump from one functionality to another, sometimes even with the help of external apps or resources (a mail client to check whether a verification mail was sent, etc.). If your test uses only one functionality, it’s functional rather than end-to-end.

Capture scenarios in their entirety, minding two dimensions

When discussing end-to-end tests, one could easily ask – what ends? Where are these ends? My answer goes like this: with e2e tests we want to spread our test coverage across two dimensions.

Horizontal e2e – In this dimension we are trying to experience a complete user flow, a path, or a scenario that we know, imagine, desire, or anticipate our end user would follow. Normally, such user journeys should be created in tight collaboration with stakeholders, product management or the client and their representatives. For such scenarios we want to follow the path that our client will most certainly take.


One such scenario might look like this:
The client logs in to an e-commerce app, searches for a specific product (e.g. a hoodie), opens and inspects it, compares it to another one, checks out, selects a payment method, pays on the page of an external payment provider and completes the order.
What set of assertions would we like to make?
  • Correct item added to the basket (correct price and discount, if any).
  • Correct success screen was shown.
  • Order was generated in the internal system.
  • Correct email was sent to the user, containing the order info.
  • Payment was processed correctly, etc.
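To make the scenario concrete, here is a minimal sketch of how that journey and its assertions might look as one automated test. Everything in it is hypothetical: `ShopClient` and `FakePaymentProvider` are in-memory stand-ins for a real app driver (e.g. a browser automation client) and the external provider, and the method names are illustrative, not a real API.

```python
class FakePaymentProvider:
    """Hypothetical stand-in for the external payment provider."""
    def charge(self, amount):
        return {"status": "paid", "amount": amount}

class ShopClient:
    """Hypothetical stand-in for driving the e-commerce app."""
    def __init__(self):
        self.basket, self.orders, self.sent_emails = [], [], []

    def search(self, term):
        # pretend catalogue lookup: 40.00 with a 25% discount
        return {"name": term, "price": 40.0, "discount": 0.25}

    def add_to_basket(self, product):
        self.basket.append(product)

    def checkout(self, provider):
        total = sum(p["price"] * (1 - p["discount"]) for p in self.basket)
        receipt = provider.charge(total)
        order = {"items": list(self.basket), "total": total, "receipt": receipt}
        self.orders.append(order)                       # internal system
        self.sent_emails.append({"subject": "Order confirmation",
                                 "order": order})       # confirmation mail
        return order

def test_purchase_flow():
    shop = ShopClient()
    shop.add_to_basket(shop.search("hoodie"))
    order = shop.checkout(FakePaymentProvider())

    # the assertions listed above, in one cross-functional run
    assert shop.basket[0]["name"] == "hoodie"       # correct item in basket
    assert order["total"] == 30.0                   # price with discount applied
    assert shop.orders                              # order reached internal system
    assert shop.sent_emails[0]["order"] is order    # email contains order info
    assert order["receipt"]["status"] == "paid"     # payment processed

test_purchase_flow()
```

Note how one test crosses several functionalities (basket, ordering, email, payment) and still asserts each leg of the journey.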

Vertical e2e – including all layers of the tech stack: backend, front end, service and persistence layer. No exclusions or mock-ups, except for third-party dependencies when they are not relevant. For example, we might use a mock-up of the external payment provider, or their sandbox environment.
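One way to keep the vertical slice real while stubbing only the third-party boundary is to inject the payment gateway, so the same test can run against the provider's sandbox or a local mock. This is a sketch under assumptions: `MockGateway` and `place_order` are illustrative names, not a real library.

```python
class MockGateway:
    """Local stand-in for the external payment provider's API."""
    def charge(self, amount, currency="EUR"):
        assert amount > 0, "amount must be positive"
        return {"status": "paid", "amount": amount, "currency": currency}

def place_order(amount, gateway):
    # In a real vertical e2e run, everything on our side (front end,
    # backend, persistence) is the genuine stack; only the gateway,
    # which crosses the system boundary, is swapped for a mock/sandbox.
    result = gateway.charge(amount)
    return {"paid": result["status"] == "paid", "total": result["amount"]}

order = place_order(49.99, MockGateway())
assert order["paid"] and order["total"] == 49.99
```

In a CI setup, a configuration flag could select `MockGateway` locally and the provider's sandbox in a staging run, without touching the test body.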

Align with use cases or acceptance criteria

Acceptance criteria and use cases are a factor often overlooked by both testing and development, which can easily be explained by the squeezed time frame of the release process. Still, dealing with them and inviting them into your e2e tests is a good exercise for your critical thinking. Overall, it comes down to this: when releasing software, we have to answer the question – what is the purpose of what we are doing? That software, that app, game, binary, cloud-native app – what is its purpose?

Use cases come in at that point to demonstrate what purpose the software serves and how it’s going to be used by the end user.

Acceptance criteria are sort of the contract we sign in order to validate that.

Where does the term acceptance criteria/test come from?
Back in the days when creating software was the privilege of a few specialized software companies, regular businesses signed contracts for the software projects they hired the former for. That contract defined the acceptance criteria – a set of criteria the software needed to meet, or tests it needed to pass or demonstrate, for the receiving party to accept it and pay what was left of the deal.
Now, I hope you realize how ridiculous it sounds when product companies say things like “we are performing user acceptance testing”, when they are both the giving and the receiving party. (And the user, if they decide to “eat their own dog food”.)
Regardless of how it sounds, though, the idea of creating, testing, and delivering software even to internal audiences as if it were something we need to sell is a good one. It helps us benchmark against real market conditions – unless you want to just fool yourself, in which case you are lost no matter what you do.

Executed at the end of the release cycle, after the previous testing phases

I can already feel the riot that heading might cause, so please note that I said “after” and not “instead of” the previous testing phases. For that, please be patient until the end of the article.


E2e testing is supposed to be the crowning ceremony of the overall delivery process. The expectation is that all functional and parafunctional bugs have been found and fixed in the prior phases. When the automated e2e run occurs, its purpose is to verify compliance with end user expectations.

Part of the definition of done (DoD) – an unwritten contract that we must commit to, in order to deliver

Many companies I’ve worked for got cold shivers when anyone asked about their DoD – and shame on them, really. As a software delivery group, including testers and developers, it is part of our job to define the set of criteria that marks all the processes we regard as necessary before we can say we are done. I bet you won’t be surprised if I tell you many companies believe a product is done before it has even been scanned by a tester or a testing tool. And such companies suffer, a lot.

E2e is a vital part of that DoD, and if you value the quality of your work, your aim is to make it as transparent as possible – such that all involved parties, even non-technical ones, have a clear understanding of why it is considered done. Hiding things under the cover of “too technical, you won’t get it” is simply proof of weakness.

What e2e tests aren’t and shouldn’t do

Meant to exhaustively test all functionality

I think I can bet my career on the statement that cause number one for flaky tests, besides the key-pressing organic semi-automata (i.e. their creators), is that people try to put everything in them – everything. I’ve seen tests that check layout, flow, that a button is visible, that a result is shown, etc. That is a mess, not an e2e test: it’s cluttered with so many actions that it loses focus and direction. Most of these checks are meaningful – they have their point and reason – but they don’t belong in e2e:

  • Tests for layout or shown/hidden elements can easily live in the component tests of the front-end framework.
  • Testing flows and transitions from one page to another belong in functional tests.

E2e should be reserved for straightforward, user-like operation of the product.

An excuse to skip all other types of testing (unit, low level integration, NFT, functional, exploratory)

The tendency to put everything into e2e tests leads to another side effect – people believe they no longer need unit testing, integration testing or functional testing, since they already have all that, or something that looks like it, in their e2e tests. This is wrong for a couple of reasons:

  1. By melting different types of testing together, each type loses focus.
  2. By lumping unit, integration and functional tests together, you lose the opportunity to have an incremental testing process that forms quality gates – meaning, if your unit/integration/functional tests are failing, you shouldn’t even be running e2e tests yet.
  3. The cost of testing gets bigger – it’s easier, faster and more comfortable to catch bugs with low-level tests.
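The quality-gate idea in point 2 can be sketched as a tiny runner that executes suites in cost order and stops at the first failure, so e2e never runs against a build that already fails a cheaper stage. The suite names and the `run_gated` helper are placeholders for whatever runners your project uses.

```python
def run_gated(suites):
    """Run (name, runner) pairs in order; stop at the first failing gate."""
    executed = []
    for name, runner in suites:
        executed.append(name)
        if not runner():
            return executed, False   # gate failed: later suites never run
    return executed, True

# Pretend the integration suite fails: the e2e gate is never reached.
executed, ok = run_gated([
    ("unit", lambda: True),
    ("integration", lambda: False),
    ("functional", lambda: True),
    ("e2e", lambda: True),
])
assert executed == ["unit", "integration"] and not ok
```

Most CI systems express the same thing declaratively (stages that depend on earlier stages); the point is the ordering, not the mechanism.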

By themselves, far from enough

As already described, e2e tests are happy paths: they don’t aim for absolute coverage, and they don’t try to perform in-depth testing of a functionality. If you rely only on them, you practically act as if all the other testing that could happen doesn’t exist.

Not meant to test parafunctional aspects such as performance, security, compatibility (with different browsers, hosts, envs, etc), installability etc.

This is more of a practical than an in-principle remark – e2e tests are meant to operate from the user’s perspective only.
Any machine-driven testing that targets non-functional characteristics – usability, performance, security and all the other -abilities – deserves its own test runs, its own scripts and often its own infrastructure. There is no one-size-fits-all test, and no one-size-fits-all engineer. I’ve seen organizations where the same tester tries to produce functional e2e, performance and security tests, and the results (if there are any at all) are pathetic. If you take any of these seriously, hire a dedicated expert for that part of testing instead of piling it onto the poor e2e automation engineer.

An excuse to perform low-complexity confirmatory testing, just because “we’ve automated it”

This seems to be the disease of our craft – writing tests that don’t test, but merely confirm, or even just demonstrate, that a product is there and working. E2e tests are meant to be happy-path scenarios, and because of their nature it’s easy to slip into low-complexity confirmatory checks – the boundary is thin, so be careful! My advice is:

  • Write your tests to be as deterministic as possible.
  • Throw in assertions that are relevant and meaningful to the context – as many of them as possible and sustainable.
  • Don’t rely on “soft assertions” such as waits. Yes, the wait will fail the test, but we want a deterministic way to understand why it failed. It’s good when tests fail because they found a bug; it’s bad when they fail and you don’t know why.
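The last point can be illustrated with a small polling helper that fails with a message naming exactly which state was missing, instead of a bare sleep-and-hope or an anonymous timed-out wait. `wait_until` is a generic sketch, not taken from any specific test library.

```python
import time

def wait_until(predicate, timeout=2.0, interval=0.05, message="condition"):
    """Poll until predicate() is true; fail naming what never happened."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return
        time.sleep(interval)
    raise AssertionError(f"timed out waiting for: {message}")

# Usage: if the email never arrives, the failure says so explicitly,
# rather than leaving us guessing why a generic wait expired.
inbox = []
inbox.append("order confirmation")   # in a real test, the app produces this
wait_until(lambda: "order confirmation" in inbox,
           message="confirmation email in inbox")
```

The difference shows up in the failure report: "timed out waiting for: confirmation email in inbox" tells you what to investigate; a bare timeout does not.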

This is my short (about 2000 words) list of dos and don’ts about e2e testing, which I ironically called an “e2e testing manifesto”, but that’s far from everything. Let’s keep this an open discussion – what are your takeaways from e2e testing, and what are your dos and don’ts? Let me know in the comments 😊

A few more resources if you are interested in e2e tests:
My series on Hindsight lessons about automation.
BrowserStack has an article on e2e testing.
Dev Tester has an article on mistakes in e2e testing.
Alister Scott has an article about e2e testing dos and don’ts, too.

Mr.Slavchev

Senior software engineer in testing. The views I express here are mine, they don't represent any position held by any of my employers. Experience in mobile, automation, usability and exploratory testing. Rebel-driven tester, interested in the scientific part of testing and the thinking involved. Testing troll for life. Retired gamer and a beer lover. Martial arts practitioner.

