Hindsight lessons about automation: Why automation?

Reading Time: 4 minutes

I started the first post with coding, and I made the first and most common mistake everyone doing automation makes – diving into explanations of how to do automation instead of trying to explain why automation is important and beneficial for us (thanks to Jim Hazen for pointing that out to me).

In fact, I am happy it happened like this, because it demonstrates how many people approach automation (me included) – they just learn to code and dive into coding, not knowing what the fuck they are doing. So, taking a step back, re-thinking and …

Image: “why” spelled out in question marks. Free image from https://pixabay.com

“Start with why”.

I willfully took the quote from Simon Sinek’s famous TED talk and book, and I think starting with why is actually beneficial whenever we speak of test automation. This is an idea that I know Bas Dijkstra also shares in his view of automation, and unfortunately I rarely see it discussed anywhere else.

What normally happens – and that’s why I said it was great that I started with “coding” – goes like this: we learn to code and dive into creating automation, and we code, and we code, not knowing where we are actually going.

Or, if there is a reason to do automation, it is most likely the wrong but well-marketed reason: it’s fast, cheap, modern, cool, better or whatever. If you want a deep dive into the wrong reasons to do automation, take a look at “Test automation – the bitter truth”.

My task in this post will be to try to list, again from my limited point of view and understanding, the good reasons why it makes sense to do and invest in automation.

Why is automation performed in the first place?

  • It brings value to testing
  • It is fast
  • It is consistent
  • It provides formalized results
  • It is good for solving machine tasks and problems
  • It is good at asserting deterministic conditions
  • It is good at checking facts
  • It is good at calculations

It also comes with some “gotchas”…

  • The value brought is only a small fraction of the information we need to provide for an informed decision about quality.
    Mostly, automation is used as a change detector – we just want to know that things still work the way we left them. It is a good tool, as we don’t want to check this ourselves every time, but it doesn’t give us new insight about the product. It only confirms what we already know or what we suspect might happen.
  • Speed sometimes gets in the way
    Speed is one of the coolest features of automation – 1000 checks executed within minutes. No human can do that, right? Speed is cool, but unreasonable speed is actually a problem. Unreasonable speed will make your WebDriver click on elements that are not loaded yet, so you will have to add some sort of waits (see the wait sketch after this list).
    In performance/load testing, “think times” are used in order to reproduce human-like interaction, because tools can produce a gazillion requests per minute, and that is not realistic human behavior.
  • Lack of variability
    One of the principles of automation is consistent test execution, meaning we want to make sure that every time the tests are run they do the same thing, assert the same conditions and start from the same initial state. Well, this comes with a trade-off, and it is called variability.
    Of course, we can vary our tests in different ways – vary the data, incorporate mechanisms like chaos monkey, faker, etc. (see the data-variation sketch after this list). But they will still only vary a condition within some boundaries; the overall behavior of the tests remains the same.
    What I mean by all of this – an automated check won’t find new ways of testing by gaining insight.
  • It will never detect unanticipated risks
    One of the key aspects of a good automated check is that it is deterministic – it checks for a very specific condition that can produce a result of true or false, yes or no, 1 or 0.
    Having this nature in mind, it is hard for an automated check to detect an unanticipated risk, unless that risk directly interferes with the workflow – the page doesn’t load, a parent class cannot be instantiated, the connection to the DB is lost, etc. Your checks will be focused, but only on the stuff you specify.
  • It will only check what you tell it, nothing more
    Thinking of the nature of programming – and as mentioned before, automation is programming – what is the purpose of programming? Well, it is giving the machine some instructions, isn’t it? We write some “magic words” in our favorite syntax and then we debug the shit out of it to make it work. 😀 But in the best case scenario we write something and the machine does what we want it to do. Therefore, we cannot expect it to do anything else, unless:
    a/ we are not quite sure what it is doing
    b/ the machine has a concept of free will or of willfully making a decision
  • Calculations with a predictable % of error
    Calculations are another area where humans totally suck compared to machines, but being involved in programming, we all know that every computing device has inherent calculation problems – expected errors in several areas (floating point calculations, calculations with very big numbers, calculations requiring high precision). Of course, you will rarely work on a NASA project calculating the mass of giant black holes in your automation, but operations with finances and currency conversion can be sensitive enough to warrant a closer look at the precision of your checks (see the precision sketch after this list).
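To make the speed point concrete, here is a minimal sketch of an explicit wait with Selenium WebDriver in Python. The URL and the element id are made up for illustration; the idea is to wait for a condition instead of clicking at machine speed.

```python
# Minimal sketch (Python + Selenium); the URL and element id are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")  # hypothetical page

# Wait up to 10 seconds for the button to become clickable instead of
# clicking at machine speed and hitting an element that isn't there yet.
submit = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "submit"))
)
submit.click()

driver.quit()
```

An explicit wait like this is usually preferable to fixed sleeps, because it proceeds as soon as the condition is met and only fails after the timeout.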
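And here is a small sketch of what I mean by varying data within boundaries, using the faker library. register_user() is a hypothetical stand-in for the system under test; the data changes on every run, but the check itself never does.

```python
# Minimal sketch using the faker library; register_user() is a hypothetical
# stand-in for the application under test.
from faker import Faker

fake = Faker()

def register_user(name: str, email: str) -> bool:
    # Placeholder for the real call into the system under test.
    return bool(name) and "@" in email

def test_registration_accepts_valid_input():
    name = fake.name()    # different on every run
    email = fake.email()  # different on every run
    # The data varies, but the assertion is always the same.
    assert register_user(name, email)
```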

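Finally, a small sketch of the precision point. The amounts are made up, but the first assertion shows the classic binary floating point surprise, and the two alternatives show how to keep money-related checks honest in Python.

```python
# Minimal sketch of the floating point precision issue; amounts are made up.
from decimal import Decimal
import math

# The classic surprise: 0.1 + 0.2 evaluates to 0.30000000000000004
assert 0.1 + 0.2 != 0.3

# Safer option 1: compare with an explicit tolerance.
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)

# Safer option 2: use exact decimal arithmetic for currency checks.
total = Decimal("0.10") + Decimal("0.20")
assert total == Decimal("0.30")
```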
 

Takeaways

I am sure many of you might wonder, “Why is this idiot writing all this obvious stuff?”. Yes, it is obvious if you have trusted your checks too much and got a good lesson when they failed you (hence the “hindsight” part of the heading), but for newcomers to the craft none of this is obvious.

Automation is a good and useful tool, but as with every tool we should be aware of why we are using it. Just like a chainsaw – if you know a chainsaw’s purpose is to cut trees, you most likely won’t consider shaving with it. Same with automation: if you know why you are using it and what its purpose is, you lower the chance of getting hurt or disappointed.

I’d love to hear your stories – why are you using automation? What are its strong sides from your point of view?

Any shares and tweets are highly appreciated. Thanks for reading! 🙂

 

 


Mr.Slavchev

Senior software engineer in testing. The views I express here are mine, they don't represent any position held by any of my employers. Experience in mobile, automation, usability and exploratory testing. Rebel-driven tester, interested in the scientific part of testing and the thinking involved. Testing troll for life. Retired gamer and a beer lover. Martial arts practitioner.


11 thoughts on “Hindsight lessons about automation: Why automation?”

    1. That’s an interesting approach, Burdette! Thanks for sharing it!
      Let’s see if I got it right – the purpose of a changes report is to compare the results of the current test run with the previous test run and look for consistent failures and/or new failures?
      Something that’s interesting for me – what does the term “verdict” mean – is it something specific to the Ruby framework you use?
      How do you decide the “blocked” verdict?

      1. Hi Mr. S.,

        First, terms: “Verdict” in my usage is the outcome of a comparison of some sort (say, balance == 0). The verdict may be:
        * Passed: the comparison succeeded (balance was 0).
        * Failed: the comparison failed (balance was not 0).
        * Blocked: the comparison was not made because test execution did not reach that point (due to earlier failure of some sort, usually an uncaught exception).

        The purpose of the changes report is to allow us to safely ignore the (hopefully very many) unchanged verdicts, and instead focus on the details of the (hopefully few) changed verdicts, passed, failed, and blocked.

        If the verdict is absolutely the same as last time, we’ve already dealt with it, right? Opened a defect report, or whatever. It needs no new action.

        1. I have thought about something like this many times. It would be nice to see the changed results, not just the actual results.
          In the best case the automation is linked to the issue tracker. The issue state (in progress, done by development) is compared with the ‘inner’ verdict and gives an ‘outer’ verdict (in progress + failed = passed with expected to fail).
          From QFTest I know an ExpectedToFail flag.

        2. That’s interesting, so it does compare old runs to the new ones.
          This is useful in case we are running tons of tests on a daily/hourly or any other basis, which makes it practically impossible to track all tests by ourselves. Anyway, I personally wouldn’t trust the changes report enough to safely ignore test results, but I believe it would be a good additional source of information.
          The reasons I wouldn’t ignore them:
          What happens if a test throws a false positive? Our tests are far from perfect.
          What happens if we forgot to log a bug for a specific reason? We are not perfect either.
          I believe it might work if a specific workflow of defect reporting and results updating is followed. So, if you are interested in providing a bigger-picture overview of the process that includes the changes report, I’d be interested to read about it. 🙂

          1. As you wrote, it highly relies on a consistent, and also flexible, workflow. And on using all tools as intended, with KISS. And no personal finger pointing. This requires a high level of transparency.
            Overall this should help us instead of restricting us.

            “What happens if we forgot to log a bug for a specific reason? We are not perfect either.”
            – In a more liberal version the connection per test case to an issue tracker is optional.
            – If you really FORGOT: Bad boy! 😉 It will remind you to create a bug ticket.

            “What happens if a test throws a false positive? Our tests are far from perfect.”
            – My overall purpose for this would be to see and focus on newly failing tests, and to ignore for a while the not-yet-fixed ones. So I’m not strictly bound to that ExpectedToFail flag.
            – A more common and practical solution is to have a history of the test reports and see since when a test has been failing. Jenkins does this quite well.
            – A technical solution, if possible, would be to make the ExpectedToFail conditional: a certain failure should be expected.

            Is this a bit of a bigger picture for you? What would you like to see in a bigger picture? Where am I still too vague?
            The idea of connecting a bug tracker with test reports is still just an idea in my mind. I haven’t seen any implementation yet.

  1. I’d like to add another “gotcha” if I may:

    No one checks the tests that are passing

    I found this is extremely dangerous in large suites. Everyone can handle a small test suite, but when the tests pile up no one gives a second look at the ones that are already passing. And for various reasons they could be incomplete, false positives, etc.
