Software testing is not… finding bugs part 4.

Software testing is not…

Software testing is not … part 2.

Software testing is not … part 3.

Software testing is not … finding bugs

This is one of the first things I was taught in software testing – that our job is to find bugs or defects. It’s true, but it took me some time, a couple of books and many hours of reading different resources to figure out that it’s not just that. Claiming the above is like claiming that software development is just about writing code – and it’s not.

At first, it is good to mention that we don’t automagically find bugs just like that. We have to learn the system and build our oracles based on our knowledge of it, previous experience, knowledge of the domain, knowledge of the technology and so on. So, no bugs found here yet.

How about documenting our process – building models of the system, mind mapping it, writing checklists or designing test cases (in case that works in our context, of course)? Is that finding bugs? No.

In other words, if we say that testing is about finding defects, we have probably never thought more deeply about what testing is and what its role is in the big picture.

Let’s talk about defects. What is a defect? To us, as testers, it is normally an error in the code: maybe a typo, maybe a logical misunderstanding, but it might also be a bad interpretation of the requirements, a consequence of missing requirements, “taking the easy shortcut” due to laziness, bad design or an unexpected dependency. It might be a lot of things. What is it for our management? It is a way to lose money, a way to lose credibility, a way to lose clients, a combination of all of these, or even a way to go out of business. What does a defect mean to our clients? It means they can’t use the service or product they paid for, or they risk losing their own clients and money due to a mistake that we made; sometimes a defect on our side might even cost someone’s life. So, here you see three different perspectives on what a defect is, and I bet you can think of more.

So, what’s the point here? Testing is not about finding bugs; it’s about examining the risks that defects in our software might introduce – who is impacted by these risks and how serious the impact will be. I believe this is a better way of determining what our job is, not just in our day-to-day tasks, but in the context of our business: how valuable our job is for what we are producing as an end product. In other words, it helps us think of our work holistically.

… meant to add value

This reminds me of a silly statement from somewhere on the internet: that, you see, testing is not necessary or important, because it doesn’t add value to the product. Well, that’s almost as stupid as saying that brakes are not necessary in a car, because they don’t add horsepower.

From my point of view, testing was never meant to add value to the product. In fact, one of the main goals of testing is to protect the product and project from devaluation – in other words, from losing quality and credibility with our clients due to defects that we might have missed.

So, to frame this another way – testing won’t help you make a valuable product; that is something your ideas and strategy should take care of. But testing will help you not fail miserably due to low quality and bugs that might have a serious impact on you or your clients.

That’s it for this week. Thanks for reading, and if you found it interesting, please share it on social media and/or share your opinion in the comments! 🙂

Software testing is not … part 3.

Links to the previous two parts:

Software testing is not…

Software testing is not … part 2.

Software testing is not an activity that could be performed by replaceable robots.

I can say for sure that this is one of the most incorrect statements in testing and also one of the most toxic ones. Of course, nobody says it in exactly those words – nobody is that dumb – but it is based on false assumptions and it produces a lot of other false assumptions, which become popular for the sole reason that someone trusts them blindly, without asking questions. I split these assumptions into two groups:

  1. Everyone can test, testing is easy. Your manager can test, the dev can test, the designer can test, the client can test, your manager’s dog can test, etc., etc. But they can’t do it, because they are busy doing their important stuff, so you seem to be the guy/girl who has to do it.
  2. Testing is easy, because a well-written test is described in a finite number of steps or instructions, and if these instructions are followed exactly, anyone can test the product without any impediments.

Now, what I want you to do is write these down on a piece of paper, take it outside and burn it, while chanting some evil spell. 😀

Believe it or not, the above statements somehow made it into our craft and have even been institutionalized, meaning a lot of people take them for granted. This leads to a whole bunch of derived false assumptions, such as:

  • Every step in testing needs to be documented in a script or a test case.
  • If you don’t have test cases, you are doing testing wrong.
  • If you don’t have test cases, you can’t prove that you did any job on the project.
  • Testing is expensive.
  • Testing is unnecessary.
  • Testing can be performed automatically, so we reduce cost.

It’s easy to understand why these myths made it into our craft. One reason is that senior management doesn’t really care about testing; they care about budgets and efficiency. Another reason is that we fail to tell the true story behind testing – how it is efficient and beneficial for the product/project – and if we fail to do that, it’s our fault. And a third reason: there are way too many vendors of tools and software testing courses and academies that have an interest in selling you “snake oil” – the ultimate solution that will solve all your testing problems, whether it is a tool or a methodology.

Why can’t testing be performed by replaceable robots – and here I mean both human specialists acting like robots and actual automation? One good reason is the fact that testing isn’t based on instructions; it’s based on interaction: simultaneous information gathering, evaluation and action according to that information. Testing is experiment, investigation and decomposition of statements that others (development, management, you name it) believe are true (paraphrasing James Bach). Using all this, we have to provide relevant information about the risk and about the product itself. So, how does all that align with the “concept” of following instructions?

And there’s one more general reason why testing cannot be represented as a limited set of connected steps or instructions, which also makes it impossible to fully automate testing by converting those instructions into machine instructions. The reason is the way knowledge works – more specifically, the impossibility of converting tacit knowledge into explicit or even explicable knowledge (Collins, “Tacit and Explicit Knowledge”).

This might be explained very simply – we have all done some cooking, right? Well, we all know cooking recipes are a sort of algorithmic, step-by-step guide to making dish X. And they could be written by a really good expert – a master chef, for example. Yet, following their advice, even down to the smallest detail, you might be able to make a decent dish, but not one like the expert’s. You wouldn’t notice the small tweaks the master chef makes depending on the environment, the produce they use or the freshness of the spices, all based on “gut feeling”. Why is this so? We call it experience or routine, but what it really is – the huge underwater part of the iceberg of knowledge – is called tacit knowledge.

The same thing applies to any area, testing included. “We can know more than we can tell” (Polanyi); therefore, in testing we do much more than we can actually put into words, scenarios, cases, steps, scripts or what have you. Testing is an intellectual process of an organic nature; it is normal for it to be partly inexplicable and for us to have trouble describing or documenting it. That’s how knowledge and science work.

The reason this is not a shared belief in mainstream testing is the fact that many consultants and certification programs like to tell a short and exciting story about “how you can become a great tester by following this simple set of rules”. And it is “sold” to us under the cover of the words “best practices”, meaning “I have nothing to prove to you, this is what everyone is using”. Well, newsflash: this is the testing community. If you make a statement here, you have to be ready to defend your position; nobody cares how much of a guru or an expert you are.

That’s it for this part. I would like to read your comments and reactions on social media. Thanks for reading. 🙂

Software testing is not … part 2.

Software testing is not … part 1.

I am moving on to the next couple of branches of the mind map that I posted in the first post of the “Software testing is not…” series. You can find the mind map by following the links above.

Software testing is not … quality assurance.

I wrote about this in the “Quality guardian” part of Outdated testing concepts, so I will really try not to repeat myself, but to build on the statements made there.

I believe it is more correct to say that quality assurance is not limited to the testing role alone; it is a philosophy and strategy that has to be shared by the whole development team, no matter the role. The contrast I want to draw clearly here is between the old concept of the quality guardian, holding the right of veto over the release, and the modern approach of team responsibility for quality. It is obvious that the former model doesn’t work, even though many organisations still support the myth.

I’d rather think of quality as collective responsibility, and I want to emphasize the term collective responsibility, not shared responsibility. Why do I want to make that important distinction? If we share the responsibility, we may, intentionally or not, give a bigger part of it to a single member or make another member less responsible. Collective responsibility requires the collection of all team members’ personal responsibilities on the project. This way, if one person fails to be quality-driven and delivers crappy work, their part of the collection will be missing, and therefore not all criteria for collective quality will be met. It also lets us determine quickly where things go wrong.

To summarize, we are really not in a position to make god-like decisions on whether or not the product should go live, whether or not a bug is critical enough to hold a release, etc. But that isn’t necessarily a bad thing, because we can do so much more. Testing is about being aware of the product’s risks and providing correct information about them. Therefore, we can be quality advocates, but we are not actually assuring quality by ourselves. And I’d like to present a quote from James Bach’s paper “The Omega Tester”:
“You are part of the process of making a good product, but you are not the one who controls quality. The chief executive of the project is the only one who can claim to have that power (and he doesn’t really control quality either, but don’t tell him that).”

And if you are interested in the topic, Richard Bradshaw made an awesome video in his “Whiteboard Testing” series, where he explains his viewpoint on that same issue. You can watch it here: I’m QA. It’s in QA. It’s being QA’ed. You sure about that?

Software testing is not … breaking the product.

I will never get tired of repeating what Michael Bolton has to say on this topic: “I didn’t break the software. It was already broken when I got it.”

It is funny – I know most people say this in an ironic way, but that sort of talk is one of those “toxic” phrases that actually do us no good and harm our craft, along with stuff like “playing with the product”, “just checking if it works” and “finding bugs” – but I will cover those in the next branches.

We are not actually breaking the product in the sense of causing damage that was never there or changing the consistency of the logic of the SUT. Our job is to expose inconsistencies in the logic, misbehaviour, false assumptions – or, more generally, problems that occur while the SUT is performing its job. So, as you see, there’s no demolition squad, no bombing, no gunfire; we are not that bad. What our job really is, is investigation: we have to investigate such behaviour, document the process that causes it, determine the reasons why we believe it is actually incorrect, and raise the awareness of the people who are interested in fixing it or of those who will be affected if it continues to exist.

That’s it for these two branches of the mind map. Please feel free to add your opinion in the comments and share the post with your followers if you liked it. Thanks for reading. 🙂

Some kick ass blog posts from last week #12

And we are back with some kick ass blog posts from last week:

Other round-up posts:

Automate the planet’s Compelling Sunday by Anton Angelov.

That’s it for now, see you next week. 😉

Results for State of testing 2016 survey are live.

Hello everyone,

I was a contributor to this year’s State of Testing survey. You might have seen the posts on social media and on my blog inviting you to go to the official site and fill in the survey. In case you didn’t, you can now go and see what you missed being part of. 😀

The State of Testing survey is an annual survey run by Joel Montvelisky and the Tea Time with Testers magazine, and it is by far one of the most valuable things that happen every year in testing. So, I think we really owe these guys a big thank you for the great job they do.

The survey itself provides tons of interesting facts shared by software testing professionals from all around the world. Stuff like:

  • What titles do testers have in their companies?
  • What size are testing teams?
  • How are testing teams distributed around the world?
  • What percentage of tests are automated/unit tests/integration tests?
  • What is the adoption rate of CI?
  • What testing techniques are used among testers?
  • What valuable lessons did they learn during the last year? And many more…

So, enough talking from me – go see it on your own.

Download State of testing 2016 results here.

On trolls and men. Summary of a lightning talk.

I had my first talk at a conference, hooray!

It was exciting, interesting and challenging for me. Having in mind my inability to write short stuff and the way I always add more and more, preparing my lightning talk really helped me master a new skill: not trying to squeeze a 30-minute lecture into 5 minutes.

The conference is called QA Challenge Accepted and is a local conference about software testing in Bulgaria, where I am from. There’s a lot to say about how exactly the idea of the testing troll came to me and what the previous, rejected versions of the talk were, but that’s a story for another time.

The purpose of this post is to make a summary of the talk and a bit of an analysis of the parts where I messed up. So, here are the slides.

There was a short introduction that I made which was something like:

I am here to present to you a few pieces of advice from my good friend, mentor and spiritual guide – the testing troll.

Wisdom #1: The testing troll doesn’t follow “best practices”

The testing troll is a strange animal. Once I asked him: “Why don’t you follow the best practices in testing? Everyone uses them; they are approved by the community.” And he answered something like this:

“See, every time I hear the words ‘best practices’ I recall when I was a little troll; there was this guy on TV, wearing golden trinkets and rings. He was selling this magic frying pan in which you could fry fish, then a meatball, then eggs, and the tastes wouldn’t mix. Finally, you could bake some milk in it, then just wipe it with a piece of paper and put the pan back in the drawer like nothing ever happened.”

Note: The above might not be comprehensible for anyone who didn’t grow up in Bulgaria, but in my childhood there was a commercial show selling crappy goods, and that’s one part of it.

So, much like the frying pan, “best practices” are probably a useful tool, but not for all cases. After all, in life in general, there’s no such thing that works in all cases; there are methods and strategies that work, but only when applied in the right context. Each one of us should decide on his/her own which methods and practices to use.

Plus, as the testing troll says, “we have to use it because everyone does” is a really dumb argument.

Wisdom #2: The testing troll is not a manual tester, nor an automation tester.

The testing troll doesn’t like the labels “manual” and “automation” tester. He isn’t manual, because he isn’t using only his hands, and when asked why he isn’t automated, he responded: “None of my body parts is automated; I am purely organic.”

Note: With “organic” I actually wanted to make a joke about all the organic buzz that is out there right now, but the more I think about it, “testing is organic” is a claim I could seriously argue for in a future blog post.

The main problem, he thought, is that both labels rest on the false assumptions that testing is a process that could be performed by anyone, and that testing is a process that could be defined as a finite sequence of steps.

And this is not true – testing is a mental activity, and being a mental activity, it is constructed from mental sub-activities: exploration, analysis, evaluation, selection of strategies and methods for action, application of those strategies and actions, and evaluation and analysis of the results.

The testing troll believes that any type of testing is performed in the mind and, depending on the context, any good tester can decide whether to perform it using tools or not.

Wisdom #3: The testing troll treats tools like tools.

The testing troll likes going to testing conferences to meet the community and learn new stuff. Unfortunately, people don’t talk about testing at testing conferences any more; they talk about tools – how to configure them, how to use them.

The testing troll doesn’t understand people’s passion for replacing human testing with machine testing. Machine testing can only replace certain actions, but not the interaction that the work of a skilled human tester involves.

Note: I totally messed up here and forgot what I was about to say. I skipped the rest of this part and moved on to the next slide, so I wouldn’t waste time and screw up completely. 🙂

As a result, many shallow and inaccurate checks are thrown out there, trying to make us believe that quantity can replace quality, totally missing the fact that this doesn’t in any way improve the quality of testing, or of the information we obtain by doing it.
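To make the “shallow check” point concrete, here is a small sketch. The response shape and both check functions are made up for illustration – this isn’t any real API, just a toy contrast between a check that only looks at the surface and one that also examines the content:

```python
# A shallow check: passes as long as the call reported success.
def shallow_check(response):
    return response["status"] == 200

# A deeper check: also examines whether the payload makes sense.
def deeper_check(response):
    return (
        response["status"] == 200
        and isinstance(response.get("items"), list)
        and all("id" in item for item in response["items"])
    )

# A hypothetical response with a subtly broken payload.
response = {"status": 200, "items": [{"name": "no id here"}]}

print(shallow_check(response))  # True  – the shallow check is fooled
print(deeper_check(response))   # False – the deeper check notices the problem
```

A thousand shallow checks like the first one would all stay green while the product ships broken data – which is exactly the quantity-over-quality trap.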

After all, instead of extending their abilities with tools, people are actually limiting them, waiting for the tools to do the work instead of them.

Wisdom #4: The testing troll knows about certification.

One day, while he was sitting in his cave and testing, someone knocked on the cave door. It was a travelling salesman from some federation. “Come to a certification course,” he said, “and you will be able to become a super giga mega testing troll within three days; plus, we are going to give you a certificate for it. And it’s going to cost you just 99.99.”

And the testing troll thought a little bit. All the other testing trolls in his homeland, Troland, had the certificate for being super giga mega certified testing trolls, so how would it make him different? Plus, every time he went to an interview, people were interested in what he could really do, not in whether he had a certificate. Also, the cave was small, there was no room for another piece of rock to put the certificate on, and everyone who got inside got eaten anyway. So he sent the salesman away.

Note: Troland is a name I made up to make it funnier, by just combining “troll” and “land”. It’s not meant to mock any real country’s name or insult anyone. 

Wisdom #5: The testing troll believes testing is exploratory by nature.

The testing troll believes that any type of testing is exploratory. Its purpose is to help us expand our understanding of the product. Each test we perform is a valid scientific experiment, performed against a hypothesis we have. In software testing we call such hypotheses testing oracles. Each test is interesting only when it gives us new information. Repetition of an experiment is of interest only when we want to test the system for its internal consistency – whether the same input conditions lead to the same output results.

A test or experiment isn’t helpful if it doesn’t provide us with information that we can apply in our further testing.
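The internal-consistency idea above – repeating the same experiment to see whether the same input yields the same output – can be sketched in a few lines. This is a minimal illustration, not a real framework; `system_under_test` is a hypothetical stand-in for whatever you are testing:

```python
def check_internal_consistency(system_under_test, test_input, runs=3):
    """Repeat the same experiment and report whether the output is stable.

    Only meaningful when the system is expected to be deterministic
    for the given input: same conditions should yield the same result.
    """
    outputs = [system_under_test(test_input) for _ in range(runs)]
    consistent = all(out == outputs[0] for out in outputs)
    return consistent, outputs

# Hypothetical deterministic system: doubling a number.
consistent, outputs = check_internal_consistency(lambda x: x * 2, 21)
print(consistent)  # True – no new information, but it confirms stability
```

Note that a passing run here tells us nothing new about the product; as the troll says, repetition only earns its keep when instability itself is the risk we are investigating.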

That was my lightning talk. It was meant to be provocative, amusing and opinionated; I hope it worked, and based on the feedback, it was interesting for the audience. Of course, there’s still a lot for me to improve in doing lightning talks, but it definitely was a challenge.

Thanks for reading. 🙂


Software testing is not…

Software testing is not … part 2.

Software testing is not … part 3.

Software testing is not… finding bugs part 4

Software testing is not … playing. Part 5

Software testing is not … easy to explain. Last part.

This post might be considered a follow-up to Outdated testing concepts, but with it I’d like to switch to a somewhat more general perspective.

After the above-mentioned series was published, I went through a lot of feedback, comments and other unrelated articles, and they made me think: how does anyone, mostly non-testing professionals, feel competent to give an opinion on what software testing is? What’s the reason for it? Well, my opinion is that many testers lack the ability to tell a compelling story about their job (and that includes me, in retrospect). We often neglect the need to explain what we do, in one of two ways: we either switch to our software testing lingo and get people confused or frustrated, or, at the opposite extreme, we agree with whatever other people say software testing is, which is most commonly reduced to executing test cases, looking for bugs, breaking the product, playing with the product, etc.

How do we talk about testing?

This article doesn’t aim to blame anyone; it’s more of a self-reflection on the way I used to explain software testing to non-testers, and on what I believe many testers say when asked what they do. So let’s say you meet a friend from school whom you haven’t seen in 10 years. He happens to be a doctor, for example, asks what you do, and you say you are a software tester. In my experience, in the past this conversation looked like this:

Friend: Software tester? That sounds cool, what do you do?
Me: Well, it’s kind of like being a programmer, but I am actually testing the code, not writing it.
Friend: But why do you do it, why does it need to be tested?
Me: Because we are all human and we make mistakes, programmers included. I need to take a look at their work and see if the product we create together works as expected.
Friend: How do you know what is expected?
Me: Well, we have requirements created by our clients and we test it against them.
Friend: And what if it doesn’t work as expected?
Me: In that case, we have found a defect. We log it in a defect management tool, the programmers fix it, and we test again to see if the fix really worked and didn’t introduce any other defects.
Friend: Ahh, I see… I think I got it.

I know this conversation might look silly, but I really did have conversations like this; it’s a compilation of all such talks I’ve had. The point I am trying to make is that we so often talk about software testing by mentioning only things like requirements, “kind of like programming”, finding defects, etc., while at the same time leaving out so many things that are essential to the quality of testing. And who, if not us, should be able to correctly build the vision of software testing in others’ eyes and minds? That’s why I get really pissed off by people who are not involved in testing but try to school testers on what to say and how to articulate concepts in testing.

Software testing is not…

And this is how I got to this idea. I want to try to figure it out for myself: what software testing is not, which of the things all these people tell us are untrue and irrelevant to testing professionals, and most importantly, what software testing is – which will be the topic of the second part of this series. I believe the part about what software testing is gets omitted most often, and that leads to misinterpretation of testing and its nature.

So, I made a mind map, and in this chapter I will try to briefly cover what software testing is not. Some of the points might overlap with Outdated testing concepts, so I will run through those quickly.



software testing is not - mind map

Easily performed by anyone.

This is a lie, and I am totally comfortable calling it a lie. In your experience as a tester you will get feedback from non-testers pretending to be “testing” the product. It might be your project manager, your devs, your CEO, your designers – anyone will try to make you believe that testing is a task that everyone can do. They are not to blame – more often than not they are not doing it with bad intent – and it’s our job to set the boundaries of what testing is and who can do it.

There are two sides that have an interest in this lie. One is the companies selling pseudo-knowledge, which try to make you believe you can become a skilled software tester overnight. The other is the outsourcing companies that treat people as replaceable components. They will try to make you believe that talent and expertise don’t matter, as long as you write down your test scripts in the most detailed way, so that the next “clicking monkey” can perform everything you did and hopefully get the same success rate as you.

All of this is wrong, and I encourage you to speak up and tell people who believe any of the above claims that testing is a structured activity with a purpose – not mindless key-hitting – and that there’s far more to it than just playing with the product.

In fact, my challenge for you is this: every time you are involved in a project where both you and another “pseudo-tester” are testing, make sure to show the non-testing people the value of true testing. Make them understand the difference between a tester and a non-tester doing testing (keeping in mind that I don’t consider the latter “testing”). Make them understand what a skilled professional is capable of: provide the vital information about risk, information about the product on a functional and para-functional level, the better reports, the better test documentation, the better explanation of how the product works. Most importantly, learn to sound like an expert; don’t shy away, and don’t let people who are not directly involved in testing tell you how you are supposed to test. There’s one competent person in software testing here, and that’s you.
Also see: Outdated testing concepts #1 – Anyone can test.

Fixed in a specific time frame.

If you tell this to your management it will probably give them a heart attack, but unfortunately it’s true. Let’s recall what testing is all about – gathering information and evaluating risk. When your manager or project manager or stakeholder comes to you and asks “Are there any bugs?”, do you think he or she really cares about the bugs? What they are really trying to ask, in my opinion, is: “If we release the product like this tomorrow, is there a risk for our clients? Is it possible for them to run into a problem that will harm our credibility as a service or product provider?”

That said, we can repeat again that testing is an activity that can only “show the presence of bugs, but not their absence”. Therefore, aiming to release with zero bugs is a utopian idea; you will release with bugs no matter what, so you’d better know it and not lose your fucking mind when it happens. What we can do better is to know what these bugs are and whether they are critical for our clients or not.

From everything said up to this point, it seems that estimating time for testing is a wild guess – and yes, it is. Here are a couple of reasons:

  • Since exhaustive testing is impossible, we can’t run “all the tests”.
  • Do we know what information the system holds before we start the process of testing? We might have requirements, but are we sure they are up to date, before we run into an inconsistency?
  • Do we know how many defects are in the system before we actually discover them?
  • Do we know each defect’s complexity before we try to document it?
  • Can we anticipate the appearance of regression bugs?
  • Can we predict changes in parts of the system where we don’t expect them, due to the actions of an inexperienced developer or bad system architecture (strong coupling between components, lack of unit tests, lack of adherence to recommended development practices)?

What can we do when we are asked for an estimate, then? It will be a wild guess, but let’s keep that our little secret, OK? People in management and marketing – people working with numbers and pie charts – don’t like uncertainty. What we can do as testers, though, is make sure they understand that testing is a process of its own: it has different phases and it shouldn’t get mixed up with the development process. That is, the estimated time for development isn’t the time at which the product will be ready for release. We should make our best effort to inform our co-workers that testing takes time, and we shouldn’t try to fit into whatever minimal time frame was left over for us, but rather ask for dedicated time for testing.

There are ways in which we can try to minimize the uncertainty in our prediction of how long testing will take:

  • Experience is one, for sure: the more we test, the better we can estimate the time needed to test a system, based on its complexity.
  • Which brings us to the next point – knowing the system: the more we know about it, the easier it will be to estimate the time needed to test it.
  • Knowledge of the scope of the change – directly related to the two above; we need profound knowledge of the system and awareness of which parts of it will be affected by the change in order to estimate the time for testing.
  • And of course, knowing our own process – this is important as well, and it should be our first concern. We should know how we will approach the changes, how many meetings we will have to attend in order to gather our initial data, how much time we need to write our documentation (if we use any), what documentation we will write, and so on.

So, this is it for now. The topic is complex and the article is getting lengthy, so I will split it into parts and try to cover other branches of the mind map in a couple of days.

Until then, every comment and opinion on the topic is welcome. If you liked it, please feel free to share it on social media. Thanks for reading. 🙂


Some kick ass blog posts from last week #11

Hello, here’s the new portion of kick ass blog posts:

  • Software testing isn’t just the set of skills we all read about in testing books and white papers; there’s a large variety of skills that we don’t primarily relate to testing, but that might benefit our testing in a great way. Simon Knight makes a great point about it in his post here:
    7 Things Awesome Testers do That Don’t Look Like Testing
  • Another great post by Michael Bolton in the series “Oracles from the Inside Out”. In this article Michael talks about “conference” as a process of trying to reach shared understanding with the rest of the team. You can see the whole article here:
    Blog: Oracles from the Inside Out, Part 9: Conference as Oracle and as Destination
  • Another interesting and inspirational blog post by Simon Knight, this time about writing blog posts. In it, Simon shares the simple plan he follows when trying to write compelling content. You can read the full article here:
    Write powerful blog posts with this simple template
  • And if you are interested in how people write their great content, I encourage you to read Mike Talks’ article, which was inspired by the one Simon wrote:
    WRITING 106 – A scientific template for writing a blog article…
  • I love reading automation posts that aim to teach testers something new and show them ways to improve their testing abilities, and I am happy to say Bas Dijkstra is always helpful in that regard. His recent post teaches us how to write readable test code – something like recommended coding practices for testers, which, in my experience, is often neglected. You can read the full article here:
    Three practices for creating readable test code
  • A great point from Katrina Clokie on making testing visible and letting other members of the team know what testing is actually about. You can read the whole post here:
    Use your stand up to make testing visible
  • Not to miss an important event: “Dear Evil Tester” by His Evil Testerness, Alan Richardson, is out – go and download it. So far I am only about 10% in, really at the beginning, but I love the portion of dark, sarcastic humor that “Dear Evil Tester” offers. I will keep you updated with my opinion on it. Until then, you can form your own:
    “Dear evil tester” on LeanPub
  • A new issue of the testing magazine “Tea time with testers” is out. Don’t ask me what’s in it – I haven’t had time to check it out yet. Yes, I am human; the laws of physics and time apply to me, too. 🙂 You can review the February issue here:
    Tea time with testers – February 2016

Other roundup posts: 

Automate the Planet’s Compelling Sunday. 

That’s it for this week. See you next week! 🙂

Some kick ass blog posts from last week #10

Hey there, guys! Yaaay, 10 kick ass blog post roundups already – I can’t believe I did something 10 times consistently without failing at least once. Here’s the list of posts for this week:

  • Jeff Nyman with a great post about WebDriverJS and the use of callbacks and promises – really interesting if you are into JavaScript:
    WebDriver in JavaScript with Promises
  • A great talk from Test Bash NY 2015 by Keith Klain on the lessons learned in selling software testing. It is a great opportunity to see the perspective of a test manager who tries to drive his team based on the CDT principles, and all the lessons he learned by doing it. Not only that, Keith addresses many issues within the CDT community that we need to work on. Great, inspirational and definitely a must-watch:
    Lessons Learned in (Selling) Software Testing – Keith Klain
  • An awesome post by Dan Ashby, explaining again that automation should supplement human testing activities, not replace them. Dan made a great model of the testing and checking concepts and how they work together. Awesome post; I strongly recommend it:
    Information, and its relationship with testing and checking
  • Great news again: another software testing book is on the way, this time by Alan Richardson. “Dear Evil Tester” is its name. What it is about and when to expect it, you can see for yourself here:
    Announcing “Dear Evil Tester” coming soon, and why I wrote it
  • I really recommend taking a look at Brendan Connolly‘s new post on ego, apathy and test cases. An interesting analysis with a bit of a philosophical or psychological flavor. You can find the whole post here:
    Ego, Apathy, and Test Cases
  • And one last thing that I found – not a testing topic, but part of my other passions: hacking and security. We all know the Tor Browser and how everyone looks at it as the single option for being unrecognizable on the internet, given that, as we all know, our information is gathered by certain agencies and by social media. It turns out that network security is not the only thing we have to look out for; there are other smart tricks for identifying user behavior. In this post the author explains how mouse motion and scrolling actions can be tracked for patterns, creating a digital fingerprint with which a user could be identified online. It is a really interesting article:
    Advanced Tor Browser Fingerprinting

Other roundup articles: 

Automate the Planet’s Compelling Sunday. 

Some kick ass blog posts from last week #9

Hey there, here’s a new portion of kick ass blog posts from the previous week:

  • A really amazing start to this week’s roundup and a really good post by James Thomas. It shows in a wonderful way how opposition in science and testing can drive us to reconsider our positions and state our ideas more clearly. Definitely a must-read:
    Bug-Free Software? Go For It!
  • Another interesting post by Albert Gareev on accessibility testing and the fact that tools might cover only a small part of the process a skilled tester performs during accessibility assessment. Automated tools and UI mock-ups in the early stages of testing might provide some help, but confidence is built only through expert analysis and taking a closer look, even at the markup level. You can see the full post here:
    What’s in a label?
  • This is a really interesting webinar by Rex Black, busting some myths about exploratory testing. It is interesting from the perspective of being thought-provoking, or even argument-provoking. Unfortunately I wasn’t able to hear it all – I accidentally navigated away from the page and found I couldn’t forward the player to the point I had reached (which is great user experience, by the way). Anyway, I will probably spend the time to hear it all and share my thoughts in a separate post:
    Webinar: Myths of Exploratory Testing: 2/24/16
  • And here are parts 2 and 3 of James Thomas’ transcript of a talk he gave on testing and joking, which he calls “Joking with Jerry”:
    Joking With Jerry Part 2
    Joking With Jerry Part 3
  • The February issue of Testing Circus magazine is out, with great topics from Mike Talks, a great interview with Rosie Sherry, and many more compelling articles on testing. You can download it here:
    Testing Circus February edition.

Some other roundup posts:

Automate the Planet’s Compelling Sunday. 

That’s it for this week, guys. See you next week.