What layers of automation are out there?
Testing effort might be spread across different layers of automation in an application. Here are some types I find interesting, based on my experience.
- Unit layer automation – this is rarely a concern of anyone but developers, and I think that’s the way it should be. Normally, the purpose of unit tests is to provide fast feedback on whether code behaves correctly. Of course, they suffer from the same disease all automation has: the “confirmatory-demonstrative” mindset of writing tests. Even when used in a test-driven methodology, writing unit tests that only demonstrate the product works doesn’t help a lot. In fact, any test that doesn’t push hard to expose problems is only a demonstration.
An important thing to know is that it is absolutely silly to assume that having unit tests means something was tested, or that it somehow reduces the need for other layers of testing. It simply means our code is ready to move forward in the test chain. Nothing more, nothing less!
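To make the distinction concrete, here is a minimal pytest-style sketch; `parse_price` is a hypothetical function, invented just for illustration. The first test only demonstrates, the second actually pushes:

```python
# A minimal sketch of the difference between a test that merely
# demonstrates and a test that tries to expose problems.
# `parse_price` is a hypothetical function, purely for illustration.
import pytest

def parse_price(text: str) -> float:
    """Parse a user-entered price like '$19.99' into a float."""
    return float(text.strip().lstrip("$"))

def test_parse_price_demonstration():
    # Confirmatory: only shows the obvious case works.
    assert parse_price("19.99") == 19.99

def test_parse_price_pushes_for_problems():
    # Probing: attacks boundaries where the code is likely to break.
    assert parse_price(" $0.00 ") == 0.0
    with pytest.raises(ValueError):
        parse_price("")       # empty input
    with pytest.raises(ValueError):
        parse_price("19,99")  # locale-style decimal separator
```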
- Integration layer automation – a greatly overlooked type of testing. From what I see, many software specialists fail to understand the difference between unit-level and integration-level tests. In fact, integration is the first layer where things start to fuck up and give meaningful feedback on whether we are on the right track. Normally, any layer above this one exposes integration problems too, but here we are trying to do it at code level, so we can reduce the “noise” from stuff like services, UI, etc.
The main difference between unit and integration level automation is that while on the unit level we try to test components in absolute isolation, using mocks and stubs for any external components, on the integration level we actually use them. Integration testing is practically one of the most meaningful places to perform tests, and I find the lack of effort in this area very weird.
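Here is a minimal sketch of that difference, using a real in-memory SQLite database instead of a mock; the repository class is hypothetical, invented for illustration:

```python
# Integration-level sketch: instead of mocking the database, we exercise
# a real (in-memory SQLite) one. A mock would happily "pass" behavior
# that the real component rejects.
import sqlite3
import pytest

class UserRepository:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users "
            "(id INTEGER PRIMARY KEY, email TEXT UNIQUE)"
        )

    def add(self, email: str) -> int:
        cur = self.conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        return cur.lastrowid

    def find(self, user_id: int):
        row = self.conn.execute(
            "SELECT email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

def test_roundtrip_through_a_real_database():
    repo = UserRepository(sqlite3.connect(":memory:"))
    user_id = repo.add("jane@example.com")
    assert repo.find(user_id) == "jane@example.com"
    # The real DB enforces the UNIQUE constraint; a mock would not.
    with pytest.raises(sqlite3.IntegrityError):
        repo.add("jane@example.com")
```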
- Web service layer automation – web services and APIs are slowly becoming “sexy” for automators and at automation events, and the only thing I am willing to ask is: “What the fuck took you so long?!” 😀 I mean, API testing, and service testing in general, looks like a perfect candidate for automation – it relies on a protocol, meaning we “ask” for data in a strictly specified format and expect an answer that is again explicitly stated in a very specific format. It is perfect for automating: we practically deal only with data and formatting, and we don’t have to deal with all the unknowns of the presentation layer. It makes sense to invest time in building a good set of service layer checks.
Unfortunately, there isn’t a lot of good literature on service and API testing, or such that explains the principles of good API testing from a technical and testing perspective.
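To show how little ceremony such a check needs, here is a minimal sketch using the `requests` library; the URL and the contract fields are hypothetical assumptions, not a real API:

```python
# A minimal service-level check: status code, content type, and the shape
# of the payload -- no presentation-layer unknowns involved.
import requests

def test_get_user_contract():
    # Hypothetical endpoint, purely for illustration.
    resp = requests.get("https://api.example.com/users/42", timeout=5)

    assert resp.status_code == 200
    assert resp.headers["Content-Type"].startswith("application/json")

    body = resp.json()
    # The contract: these fields must exist with these types.
    assert isinstance(body["id"], int)
    assert isinstance(body["email"], str)
    assert "@" in body["email"]
```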
- Test data generation – another aspect where you can employ automation is generating data and data sets. In our test routines, exploratory test sessions, test cases, or whatever type of test approach we are using, we always need test data. Sometimes we need it to vary, sometimes we need it to be random, sometimes we need it to follow specific patterns.
Anyway, it’s a practically trivial task for anyone who knows a scripting language like Bash or Python to write a script that generates random data, or data following a specific pattern or format.
Sometimes it’s not even necessary to “generate” the data; for some purposes it’s OK to “harvest” data from a production or staging environment we have and just obfuscate the parts we are not allowed to use (sensitive data, emails, addresses, billing data, etc.). It’s not hard, it helps, it saves time, and it lets you focus on primary testing activities instead of secondary ones.
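A minimal sketch of both ideas – generating patterned data and obfuscating harvested data – might look like this; all field formats here are hypothetical:

```python
# Test data helpers: random data, patterned data, and deterministic
# obfuscation of harvested data. Formats are invented for illustration.
import hashlib
import random
import string

def random_email(domain: str = "test.example.com") -> str:
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{name}@{domain}"

def random_invoice_number() -> str:
    # Follows a specific pattern, e.g. INV-2024-000123
    return f"INV-{random.randint(2000, 2030)}-{random.randint(0, 999999):06d}"

def obfuscate_email(real_email: str) -> str:
    # Deterministic, so relationships between records survive obfuscation.
    digest = hashlib.sha256(real_email.encode()).hexdigest()[:10]
    return f"user_{digest}@masked.example.com"

if __name__ == "__main__":
    print(random_email(), random_invoice_number())
    print(obfuscate_email("jane.doe@realcustomer.com"))
```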
- UI layer automation – the “king pain in the ass” type of automation in testing, and guess what – people use it to onboard testers to automation. It would be silly to dive into an explanation of what it is and how to perform it, because practically every tester who writes a blog and knows how to write “Hello world” in Java is writing tutorials about UI automation. A few pieces of hindsight advice that I find valuable:
- Record-and-replay tools might disappoint you in the long term; be careful.
- Most UI automation frameworks (like 9 out of 10) rely on Selenium, so just learn the Selenium API.
- Think of yourself as a testing developer, rather than a developing tester; this will help you a lot.
- KISS – keep it simple, silly – don’t overcomplicate your tests, they are complicated enough.
I will spend a separate topic, or a few, on UI automation, so this is enough for now (because I know how to write “Hello world” in Java, as well 😀).
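In the KISS spirit, a minimal Selenium (Python) sketch could be as short as this; the URL and locators are hypothetical placeholders:

```python
# A short, readable UI check: log in and assert we landed somewhere sane.
# Requires the selenium package; URL and locators are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://shop.example.com/login")
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    assert "dashboard" in driver.current_url
finally:
    driver.quit()
```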
- Mobile UI automation – I don’t have much experience with mobile UI testing, so I don’t intend to give advice on it; I just mention it so people know it exists. Mobile testing is a challenge as well – not only do you have to deal with all the cross-browser compatibility issues of the regular web, but you also get all the crap related to device and OS fragmentation. You also need to learn how to build a framework that reuses part of your testing logic across two (or more) fairly different code bases (Android, iOS), interacts with different UI elements, etc.
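For the logic-reuse point, here is a framework-agnostic sketch of one possible structure; the locators and the `driver.tap` call are hypothetical, and a real suite would plug in Appium or a similar driver behind this abstraction:

```python
# One way to reuse test logic across platforms: a shared flow with
# platform-specific locators. Everything here is hypothetical.
LOCATORS = {
    "android": {"login_button": ("id", "com.example.app:id/login")},
    "ios": {"login_button": ("accessibility_id", "loginButton")},
}

class LoginScreen:
    """The same test flow runs on both platforms; only locators differ."""

    def __init__(self, driver, platform: str):
        self.driver = driver
        self.locators = LOCATORS[platform]

    def tap_login(self) -> None:
        strategy, value = self.locators["login_button"]
        self.driver.tap(strategy, value)  # hypothetical driver call
```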
- DevOps – in case none of the above types of automation looks interesting to you, a whole movement called DevOps exists with the purpose of providing fast feedback and automating automatable tasks in software deployment, development, testing, and more. This might also include configuring and setting up environments, managing test environments, managing external dependencies, taking care of CI/CD, and other fancy modern stuff.
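As one small example of this kind of automation, here is a sketch that brings up an environment and waits for it to become healthy before tests run; it assumes a docker-compose.yml describing the test environment exists, and the health URL is a hypothetical placeholder:

```python
# Bring up the test environment and block until it answers, so the test
# run never starts against half-booted dependencies.
import subprocess
import time
import urllib.request

def environment_up(health_url: str, timeout_s: int = 60) -> None:
    # Assumes a docker-compose.yml in the working directory.
    subprocess.run(["docker", "compose", "up", "-d"], check=True)
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(health_url, timeout=2) as resp:
                if resp.status == 200:
                    return
        except OSError:
            pass
        time.sleep(2)
    raise TimeoutError(f"Environment not healthy after {timeout_s}s")
```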
- Heartbeat and live monitoring – there’s a stigma around production testing, and we are all guilty of keeping it alive. Even I, until a few months ago, believed that testing ends when we hit production and that production is a terrible place to perform testing. I was fooling myself with this until I read “Testing Microservices, the sane way” by Cindy Sridharan, and it gave me so many insights that were right in front of my nose, yet I had missed them.
Our production environment is one of a kind in many ways – it is the only one that matters, the only one we can’t ignore; it is sometimes impossible to replicate, especially for complex environments with lots of dependencies; and it holds tons of valuable, accurate data about what’s going on with our product.
Therefore, it is totally foolish to miss the opportunity to test it and gather all the useful information we can. We just have to be clear that we can’t test it the way we test our testing/staging environments; it’s simply a different type of testing. So, what can we automate in our production environment?
- Heartbeat tests – non-intrusive tests that check basic functionalities are operational, without leaving any trace or creating significant load (see the first sketch after this list).
- Log digging – our systems produce tons of data, as well as tons of errors. We have two choices: ignore it and let our clients report the problems, or monitor it and fix the problems before they reach the client (see the second sketch after this list).
- Visual testing – visual defects are sometimes insignificant, but major visual problems look terrible on production sites. Can you imagine the homepage of Nike loading with a broken image? Or without styles? Ugly, unprofessional – this is how we look when we let that sort of issue happen to our clients. We can easily employ automation for simple image comparison, in order to check that our styles, layout, and visual components look the way we want them to (see the third sketch after this list).
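First sketch – a heartbeat script; a minimal version, assuming read-only endpoints that are safe to hit every few minutes (the URLs are placeholders):

```python
# Non-intrusive heartbeat: read-only GETs against public endpoints,
# light enough to run every few minutes without noticeable load.
import requests

ENDPOINTS = [
    "https://www.example.com/",
    "https://www.example.com/api/health",
]

def heartbeat() -> list:
    failures = []
    for url in ENDPOINTS:
        try:
            resp = requests.get(url, timeout=5)
            if resp.status_code != 200:
                failures.append(f"{url} -> HTTP {resp.status_code}")
        except requests.RequestException as exc:
            failures.append(f"{url} -> {exc}")
    return failures

if __name__ == "__main__":
    problems = heartbeat()
    if problems:
        # In a real setup this would page someone or post to a chat channel.
        print("HEARTBEAT FAILED:", *problems, sep="\n")
```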
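Second sketch – log digging; a minimal version that assumes a plain-text log with an ` ERROR ` marker (the path and format are hypothetical):

```python
# Surface the most frequent error messages so a human looks at them
# before a client does. Log path and format are hypothetical.
from collections import Counter

def summarize_errors(log_path: str) -> Counter:
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if " ERROR " in line:
                # Group by the message after the ERROR marker.
                counts[line.split(" ERROR ", 1)[1].strip()[:120]] += 1
    return counts

if __name__ == "__main__":
    for message, count in summarize_errors("/var/log/app/app.log").most_common(10):
        print(f"{count:>6}  {message}")
```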
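Third sketch – visual comparison; a minimal version using the Pillow library to diff a fresh screenshot against an approved baseline (the file names are hypothetical):

```python
# Simple pixel diff between a baseline and a fresh screenshot. The
# screenshot would typically come from Selenium or a similar tool.
from PIL import Image, ImageChops

def images_match(baseline_path: str, current_path: str) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False
    diff = ImageChops.difference(baseline, current)
    # getbbox() returns None when the images are pixel-identical.
    return diff.getbbox() is None

if __name__ == "__main__":
    if not images_match("homepage_baseline.png", "homepage_now.png"):
        print("Visual difference detected -- review before clients do!")
```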
I hope this was a helpful list of layers where automation can add value. It’s not meant to be exhaustive – I am not the smartest guy in testing – so if you think I missed something, add it, tell me a different story; I’d love to hear it.
What this list is meant for, instead, is to provide a list of opportunities for automators who are interested in approaching testing from a programmatic perspective. Automation in testing is not only UI testing or only Selenium; there are tons of interesting things you can do with code in testing.
Also, as you can see, none of these has the purpose of replacing testing; they support it, helping us focus on our investigation rather than on shallow human checking like “is the page loaded”, “are the images there”, etc.
Thanks for reading! I enjoy your feedback, so don’t hesitate to share it! Good luck!
In production testing, it may be worth mentioning A/B and other sorts of user testing. QA+DevOps is a powerful mindset.
I would add documentation and reports:
- creating human-readable summaries of what was executed, structured and maybe extended with graphics
- creating those graphics
- creating manuals, dynamically generated depending on the process content and its results
- translating results (of assertions) into readable text, e.g. “Test A failed at B for reason C with detail D (maybe: do E)”
- introducing metrics to get aggregated interpretations over a larger body of data / test results, especially trends over time