Test automation – the bitter truth

Recently, I came across this awesome blog post by Mike Talks, where he tells the story of his personal experience with automation and all the false expectations that he and his team had about it. It really is an awesome post; if you haven’t read it, go do it now.

And I really got inspired by it. I recently wrote a little post concerning automation and figured out I had much more to say. I was really busy doing the “Software testing is not…” series, so I didn’t really feel like going off topic. But now I feel the time has come to tell each other some bitter truths about “test automation”. They are bitter because people don’t like them, they don’t like talking about them, they don’t like promoting automation like this, and yet they are the truth.

It all starts with love.

I have to say I love automation, I really do. And the reason why I write this post is because I love testing and automation, and I want to give the best perspective on how they can complement each other. Yes, that involves saying some uncomfortable truths, but it has to be done.

So, I said I love test automation, but not because I believe it is the one and only solution to all my testing problems, although I am pretty sure I had that delusion when I was new. The reason I like it is that I like to code. I like writing cryptic “spells” and seeing them turn into magic, and I really don’t care if it’s a source file, an interpreter or a console, I just love it.

I see a lot of people being excited about automation, but for a different reason: because they see it as the ultimate solution to all testing problems, the silver bullet, the philosopher’s stone, you name it. It seems to them that human testing might be replaced with automated scripts and everything will be great, we will ship the product in no time and the world will be a better place.

I have to object to this. And trust me, this is not a hate post against automation; in fact, it is a post aiming to justify why automation is useful, but useful in the right way, with the right expectations, not the way we want it to be. So, in order to achieve this, I will have to write about some bitter truths concerning automation.

Bitter truth # 1: It’s not test automation, it is tool assisted testing.

James Bach and Michael Bolton did an awesome job making the distinction between “testing” and “checking”, so I think it would be useless for me to repeat what they already said. If you are not familiar with their work, I strongly recommend you check “Testing and checking refined” and their white paper on automation in testing, “A Context-Driven Approach to Automation in Testing”.

In a nutshell, the testing that we perform as humans is not easy to translate into machine language; we cannot expect a machine to act like a human. So, they distinguish between “checking”, a shallow form of testing that can be translated into a set of instructions and expected conditions, and “testing”, which describes the process of human testing with all its complexity and depth. Therefore, the term “test automation” carries a false meaning: we cannot automate testing, we can automate checking, and it is more correct to say “automated checking”. Yet, I prefer another term they used, because it sounds more elegant and provides more information: “tool assisted testing”. Because that’s what we do when we automate something, we use tools to assist our testing, that’s it. We don’t want to replace it, or get rid of it. We want to use tools to enable us to do more, rather than to replace the human effort in testing.

Bitter truth # 2: Automation doesn’t decrease cost of testing.

I am really interested in how that equation works. People claim automation reduces the cost of testing. Let’s see.

  • You hire an additional person to deal with building the framework and writing the tests. (If you think your existing testers can do it part-time, while doing their day-to-day tasks, my answer is… yeah, right.) So that’s additional cost.
  • You will probably spend additional money on the tool license, or you will use open source tools, which means that person will have to spend additional man-hours making the framework work for your specific testing needs.
  • You will have to pay that person to write the actual tests, not only the framework, so that’s additional cost.
  • The code that you write isn’t “pixie dust”, it’s not perfect; it turns into one additional code base that has to be taken care of, just like any other production code: maintenance, refactoring, debugging, adding new “features” to it, keeping it updated. Guess what, it will all cost you money.
  • And of course, let’s not forget about all the moments of “oh” and “ah” and “what was the guy who wrote this fucking thinking” that you will run into, related to the framework that you use and its own little bugs and quirks, which will also cost you additional money and time.

I think that’s enough. The main argument people give for the “low cost” of automation is that it pays back in the long term. Now that’s awesome. And it would be true if we accepted that, once written, a check is a constant that never changes, that it works perfectly and that our application never changes until the end of the world. Well, that would be nice, but there are two problems:

  1. We all know from our testing experience that things written in code don’t actually work perfectly out of the box. In fact, it takes a lot of effort to make them work.
  2. If your code base never changes, if it stays the same, you don’t add anything, you don’t change anything, you don’t redesign anything, it is pretty likely that you are going out of business. We are working in a fast-evolving environment and we need to be flexible about changes.

So, again, I don’t get how all of the above leads to cost reduction.

Bitter truth # 3: Testing code isn’t perfect.

I see this very often at conferences and in talks among colleagues. It seems that test automation scripts are some kind of miracle of nature that never breaks and never has bugs; you write them once and it’s a fairy tale. Well, unfortunately, it is not.

As stated above already, it is code, normal code just like any other. It has “its needs”: to be maintained, to be refactored and, guess what… to be tested :O Why is nobody talking about testing the automated checks? I mean, are these guys testers, did they leave their brains in the fridge or something?! It’s code, it has bugs, it has to be tested, so we can make sure it checks what we want it to check. Wake up! Not only that, I bet you that, doing your automation, you will be dealing with bugs introduced by your checks much more than with bugs introduced by the code of the app that you test.

Bitter truth # 4: Automation in testing is more reliable… it’s also dumb as fuck

One of the many reasons why testers praise automation is that it is more reliable than human testing. It can assure us that every time we execute exactly the same check, with exactly the same start conditions, which is good, or at least it sounds good. The truth is, it’s simply dumb, and I mean machine-like dumb. It’s an instruction; it simply does what it is told, nothing more, nothing less. Yes, it does the same thing every time, over and over, until you no longer want it to do that thing. Then it turns into a huge pain in the ass, because someone will have to spend the time to go and update the check and make sure it checks the new condition correctly. In other words, automated checks are not very flexible when it comes to changes, and changes are something we see pretty often in our industry.

Bitter truth # 5: Automated checks don’t eliminate human error

For the reasons I stated above, I claim that automated checks cannot eliminate human error. Yes, they eliminate the natural variety that human actions have, but that doesn’t mean we don’t introduce many different new human errors while writing the code. The conclusion is simple: while the code is produced by a human being, it is possible it has errors in it, that’s just life.

Not only that, but by having our checks automated, we introduce the possibility of machine error. Yes, our code has all the small bugs and weird behaviors that Java, C#, Python, PHP or any other language has. The framework that we use might have them too, and the interaction it has with the infrastructure it runs on might also introduce errors. So, we must be aware of that.

Bitter truth # 6: Automated checks are not human testing automatically

I see and hear this pretty often: everyone says “we know we can’t automate everything”, and yet they continue to talk as if they could. Not only that, they talk as if they could mimic exactly the process a human performs. And that is not possible. No human activity that is cognitively driven, that requires analysis, experience and experimentation to work together in order to achieve a goal, can be automated, not now, not with the current technology. Yes, AI and robotics are constantly moving forward, and if some beautiful day that happens, I would love to see it. Until then, human testing cannot be automated, at least not what I understand as high-quality human testing. What we can do, though, is automate shallow testing and act like it will do the job just right.


Again, this is not a hate post. It is an informative post, informative about the risks that automation carries. Clever testers know these risks and base their testing on that knowledge. And yet there are a lot of people running excitedly from conference to conference, explaining how automation is the silver bullet in software testing and how it will do magic for your testing. This is wrong.

Tool assisted testing is useful, very useful. And it is fun to do, but we have to use it in the right way: as a tool, as an extension of our testing abilities, not as a replacement for them. We should know why and when it works, how it works and what traps it might hold for us. In other words, when we want to use it, we should know why, and whether we are using it in the right way.

And most important, human and machine testing simply don’t mix. One can assist the other, but they are not interchangeable.

Hope you liked the post. If you did, I would appreciate your shares and retweets. If you didn’t, I would love to see your opinion in the comments. Thanks for reading 🙂 Good luck.

Randomize test execution in TestNG to improve test isolation.


Recently I watched this awesome talk by Simon Stewart, which is kind of old but still awesome, since I am still quite green in the area of test automation, or at least the meaningful and useful kind of test automation. In it, he talks about test isolation and how important it is. In a few words, test isolation is the ability of your tests to run in an isolated environment, but also to run separately, without depending on each other. There are many ways to achieve this (if you’re interested, you should watch the talk), and one of them is to randomize test execution. So, I decided to implement this using TestNG, since we use it in our code and it’s kind of interesting to have in my tool set, and I didn’t find much about it on the web.

How to randomize test execution using Java and TestNG.

Here are some code examples that I made just for the demo, I repeat, for the demo, so don’t mind the moronish names I’ve chosen for my classes. It’s a fairly simple example: we have a test class with some code, in our case just outputting “in method …”. That’s just to keep track of which test is executed when, i.e. the sequence of our test execution. So here’s the code of our test class:
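A minimal sketch of what such a class might look like (the class and method names here are my own stand-ins for the original demo code):

```java
import org.testng.annotations.Test;

// Demo test class: each test method only prints its own name,
// so we can watch the order in which TestNG executes them.
public class MoronishTest {

    @Test
    public void methodOne() {
        System.out.println("in method one");
    }

    @Test
    public void methodTwo() {
        System.out.println("in method two");
    }

    @Test
    public void methodThree() {
        System.out.println("in method three");
    }
}
```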

As I said, a simply moronish code example. If we run this as a TestNG test, each method prints its message to the console in the same fixed order. And it does that every time we run them. So what we need is to randomize the test execution in our class.

Why randomize test execution? Why is this important?

Well, if you have a test suite of 30-40 tests or more and you run them in the same order every time, try applying this solution and you will find out why. It turns out we as coders take many things for granted, for example a specific state of the data in the DB, or a specific screen that we land on or start from. It also turns out this is often not the case, and unknowingly we’ve built some dependency into our tests, which is not good if we want to use them in continuous integration or in some other environment. So, the biggest plus of having good test isolation is that we don’t have to fix the damn thing every fucking time we change the environment it runs in.

So, I’ve searched around and found this Stackoverflow question, which had the solution I needed.

In a separate package, let’s say it’s called utils, you can add the following class:
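The solution from that Stackoverflow answer boils down to implementing TestNG’s IMethodInterceptor, which receives the full list of test methods before the run and can reorder it. This is a sketch of such a class (the class name and log message are my own choices):

```java
package utils;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

import org.testng.IMethodInstance;
import org.testng.IMethodInterceptor;
import org.testng.ITestContext;

// Reorders the test methods randomly before TestNG executes them.
public class RandomOrderInterceptor implements IMethodInterceptor {

    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        // Seed the shuffle with the current time in nanoseconds and log it,
        // so a particular sequence can be re-created later if it exposes a failure.
        long seed = System.nanoTime();
        System.out.println("Random seed for this run: " + seed);

        List<IMethodInstance> shuffled = new ArrayList<>(methods);
        Collections.shuffle(shuffled, new Random(seed));
        return shuffled;
    }
}
```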

And in order to randomize the test execution of a certain class, you have to add the following annotation to your test class:
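Assuming the interceptor class is called RandomOrderInterceptor and lives in the utils package (both names are mine), the annotation is TestNG’s @Listeners, placed on the test class:

```java
import org.testng.annotations.Listeners;
import org.testng.annotations.Test;

// @Listeners wires the interceptor into this class only; to apply it
// suite-wide, you could register it as a listener in testng.xml instead.
@Listeners(utils.RandomOrderInterceptor.class)
public class MoronishTest {

    @Test
    public void methodOne() {
        System.out.println("in method one");
    }

    // ... the rest of the test methods stay unchanged ...
}
```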

So, when you run the tests now, the methods execute in a random order, and the order changes between runs.

Which is a bit better. The cool part of randomizing a test class like this is the nanosecond seed of the random: as you see, it is logged to the console, and if you find anything wrong in a specific sequence, you can always copy the seed from the console and hard-code it as the random seed, in order to re-create the exact same scenario. Which is fucking awesome.
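The replay idea itself is plain java.util.Random behavior: two shuffles driven by the same seed produce the same order. A small sketch of it outside TestNG (the names are mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SeedReplayDemo {

    // Shuffle a copy of the given list, driven by a fixed seed.
    static List<String> shuffledWithSeed(List<String> methods, long seed) {
        List<String> copy = new ArrayList<>(methods);
        Collections.shuffle(copy, new Random(seed));
        return copy;
    }

    public static void main(String[] args) {
        List<String> methods = Arrays.asList("methodOne", "methodTwo", "methodThree", "methodFour");

        // Normally the seed comes from System.nanoTime() and is logged to the console.
        long seed = System.nanoTime();
        System.out.println("seed:     " + seed);
        System.out.println("order:    " + shuffledWithSeed(methods, seed));

        // Hard-coding the logged seed replays the exact same order.
        System.out.println("replayed: " + shuffledWithSeed(methods, seed));
    }
}
```

Running it twice with the same hard-coded seed prints the same order both times, which is exactly what makes a flaky sequence reproducible.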

Thank you all for reading, and I hope that was interesting and helpful for all of you. If you liked it, please share it with your friends and/or share your thoughts in the comments.

Testing is a mental process.

It has been a while since I last had the time to sit down and write a post here, so I guess I am now a bit rusty and finding it hard to start. I have all these crazy ideas fighting in my head, and it’s really hard to make them sit in their spot until their turn comes, or just until I have a clear vision of what it is I want to write about.

I am already a year and a couple of months into the QA business, and I can say I have learned a lot of stuff, with certainly a lot more to come. But I am not a shy guy, I don’t feel I should be for any reason, and that’s why I chose to comment on a topic that I feel is going wrong in software testing.

Labels – are you on the manual side or on the automation side?

Well, I guess the answer here is a Matrix-like one: “There is no spoon.” Why do people still label testers as manual or automation? I see this everywhere: “we are hiring people for manual testing only”. So what should that mean? That they are hiring “clicking/tapping monkeys”, or that the job there would be boring and not catching up with the latest tendencies in testing? And I see people going to extremes when it comes to manual vs. automation testing. I’ve heard stuff like: “It’s a good automation framework, but it won’t help you validate whether the user interface looks good or is usable”. Well, of course it won’t, it was never meant to. I feel like many testers act as if, once they start doing automation, they will never do manual testing again, or believe that when they start doing automation, somebody will rip their brains out of their bodies and put them into a Robocop-like suit, where they will never have human feelings again and will never be able to simply do exploratory testing or any other manual testing strategy. On the other hand, the “only manual” defenders fail to give a good reason for how you would test performance with “only manual” methods, for example, or security, or even simple functional tests of the application. I suppose hiring 1000 monkeys to tap on a button or perform a log-in is a way, but let’s stay focused on reality.

The idea is…

What I am trying to say is: it’s not that simple. You know, dividing software testing into manual vs. automation testing isn’t as easy as determining what transmission your car has. Yes, there are people who are more inclined to understand code and complex tools and scripts, and there are other people who prefer to touch things and explore them with their hands, but I really don’t think that’s a reason to treat both approaches as if they were something different. Why? Because after all, testing is a mental process; it is what you do in your head, trying to figure out complex scenarios for how to expose a system’s weak spots or security issues or whatever. And it has nothing to do with tools, plans or test cases. In other words, testing in general answers the questions “What am I doing here? What is my purpose in this team?”. And the manual or automation approach is just a “tool” that you are going to use to fulfill your plan. It really is as simple as that. In other words, manual and automation answer the “How am I going to do it?” question.


The reasonable way to approach testing, then, is as it is: as one whole and complete thing. Even if you are a so-called “automation tester”, you will need to perform manual testing at some point, to check usability or some other non-functional aspect of an application. And there’s nothing wrong with that; you won’t lose your reputation as a good “robocop” if you do some manual tests.

As for the manual testers, you shouldn’t be afraid to do automation at any point. And it really doesn’t matter if the title says manual or automation tester. Does it require coding/scripting skills? Not really, there are plenty of recorders and other tools with user interfaces that have nothing to do with coding. Does it mean you have to be technical? Yes, of course, that is what QA means in general: you will have to know networking, you will have to know how services work, databases, and probably some scripting, but it all depends on how deep you want to dive into it. And after all, these are all basic concepts; once you get them, the language, the tool, the project will be just a detail, because you will have all the necessary knowledge to tackle the problem.

So what does our perfect QA look like?

My belief, and what I am trying to do as a tester, is to not get caught in the semantics of phrases like manual or automation tester. To me, testing is an abstraction; it has no form, no taste and no looks, and whatever I do is all up to my creative thinking and imagination. And I like to think of manual and automation testing as the top layer, the application layer of the whole thing, where the abstraction gets its concrete realisation.

If I have to do some tough and time-consuming task that consists of doing the same action over and over again, I would love to write a script for it and automate it. Otherwise, I love to poke at stuff on my own while getting to know the application; it just comes naturally.

So that’s it, this is what I believe, this is what I recommend. Is it perfect? Hell no! Would it work? I don’t know, go ahead and test it, we are testers after all, so “Trust is good, test is better.” I would love to read your opinion on it.