
How to Reduce Test Automation Flakiness


We have all heard the story of the Boy Who Cried Wolf; let's just revisit it.

“A shepherd-boy, who watched a flock of sheep near a village, brought out the villagers three or four times by crying out, “Wolf! Wolf!” and when his neighbours came to help him, laughed at them for their pains. The Wolf, however, did truly come at last. The Shepherd-boy, now really alarmed, shouted in an agony of terror: “Pray, do come and help me; the Wolf is killing the sheep”; but no one paid any heed to his cries, nor rendered any assistance. The Wolf, having no cause of fear, at his leisure lacerated or destroyed the whole flock. There is no believing a liar, even when he speaks the truth.”

There is nothing worse than tests that “cry wolf.” Just as no one believed the boy in the story, no one trusts flaky code.

One of the most important things about test scripts is making sure their results are deterministic; non-deterministic results cannot be trusted. There is no point in writing zillions of test scripts if a bug still goes uncaught due to flakiness in our automation code.

Before moving forward, let's define the term.

What does flakiness actually mean?

Flaky tests pass or fail unexpectedly for reasons that appear random.

So basically, our tests are said to be flaky if we see variance in test results even when:

  • The AUT (Application under test) is unchanged.
  • The automation code is unchanged.
  • The test environment is unchanged.

How bad is flakiness?

Flaky tests grow worse as our test suite expands to cover more areas of the application. Flakiness is thus a major roadblock in maintaining an automation suite and may eventually compromise the build. Our efforts go in vain when, just because of flakiness, we fail to detect a valid defect and it goes unnoticed.

In the era of continuous integration, our test suites run automatically after each build is deployed, and the test scripts play a vital role in judging the health of that build. If the automation code is flaky, we cannot deduce whether a failed case points to a valid bug or is just a false alarm. As Alice Nodelman rightly said at GTAC (Google Test Automation Conference), ‘Testing is the key to releasing high quality software and automating ensures that testing is repeatable, reliable and fast’. Each of these merits of automation is overshadowed by flaky tests. Flaky tests are also one of the biggest hurdles in maintaining reliable automation frameworks.

Now that we have introduced the topic, let's dig further for more clarity.

The snapshot shown below will help you understand the topic practically.

[Figure: snapshot of test results varying across identical runs]

It is quite evident that the above test results are misleading, and if this variance grows we lose confidence in the test suites built through our very own efforts. By now you must have realized how dangerous flakiness is!

Always rule out whether it's a bug or flakiness!


Consider that we have four production servers, each running the same code. After a release we add a new feature, but somehow it fails to deploy on one of the application servers. When the tests run for the first time, by chance all the hits land on the servers carrying the new feature and none on the one still running the old code. The results show a 100% pass rate, so we happily conclude that the code is alright and take a nap. But suppose we run the test scripts a second time: alas, the results say something else, leaving us restless and scratching our heads over what is wrong with the test scripts.


So the most important thing is to determine whether it is flakiness or a real bug. As the scenario shows, this was a real bug, and it might have gone undetected had we not analysed the variance in the test results.
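
One lightweight way to surface such variance is to rerun a failed test once and flag it when the rerun passes. Here is a minimal sketch using TestNG's IRetryAnalyzer; the class name FlakinessDetector is illustrative:

```java
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

// Reruns a failed test once and logs the rerun, so a pass-on-retry
// shows up as a flakiness candidate instead of silently passing.
public class FlakinessDetector implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 1;
    private int attempt = 0;

    @Override
    public boolean retry(ITestResult result) {
        if (attempt < MAX_RETRIES) {
            attempt++;
            System.out.println("Re-running " + result.getName()
                    + " - investigate flakiness if it passes now");
            return true;  // ask TestNG to re-execute the failed test
        }
        return false;     // consistent failure: likely a real bug
    }
}
```

A test opts in with @Test(retryAnalyzer = FlakinessDetector.class). A pass-on-retry is a signal to investigate the test itself, while a consistent failure points towards a real bug.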

Common complaints that we often come across while doing automation:

  • Mouse hover not working
  • Unable to click a button which is not present in the viewport (see the sketch after this list)
  • Scripts sometimes throw exceptions like NoSuchElementException
  • Scripts execute properly in standalone mode but fail in parallel mode
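
For instance, the viewport complaint above can often be addressed by scrolling the element into view before clicking. A minimal Selenium sketch in Java; the helper name is illustrative:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class ViewportHelper {
    // Scrolls the element into the viewport before clicking, avoiding
    // "element not interactable" style failures for off-screen buttons.
    public static void scrollAndClick(WebDriver driver, By locator) {
        WebElement element = driver.findElement(locator);
        ((JavascriptExecutor) driver)
                .executeScript("arguments[0].scrollIntoView(true);", element);
        element.click();
    }
}
```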

The million dollar question: what causes tests to become flaky?

The reasons are many, but the most common and prominent ones are:

  • Badly written tests
  • Tests not starting in a known state
  • Different driver implementations, e.g. ChromeDriver's implementation is different from FirefoxDriver's
  • Network issues (slow or intermittent internet connections)
  • Interdependent tests
  • The same resource being used by multiple threads

Now that we have recognised the main causes, let's talk about the solutions:

Handling Flakiness – Test case approach

  • Make independent test cases:

We must create test cases that are independent of each other; interdependent tests are among the most common causes of flakiness. Two or more test cases should never depend on one another. This also makes our test scripts robust and deterministic, and hence reliable, greatly reducing incorrect test failures.
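
A minimal TestNG sketch of the idea, using a plain in-memory cart as a stand-in for real application state:

```java
import java.util.ArrayList;
import java.util.List;

import org.testng.Assert;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class IndependentCartTests {
    private List<String> cart;

    // A fresh cart is created before every test, so no test depends
    // on state left behind by another test.
    @BeforeMethod
    public void setUp() {
        cart = new ArrayList<>();
    }

    @Test
    public void addingAnItemIncreasesSize() {
        cart.add("book");
        Assert.assertEquals(cart.size(), 1);
    }

    @Test
    public void removingAnItemEmptiesCart() {
        cart.add("book");     // each test builds the state it needs itself
        cart.remove("book");
        Assert.assertTrue(cart.isEmpty());
    }
}
```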

  • Test cases should be as granular and focused as possible:

Our test cases should be as small as possible, as this greatly reduces flakiness. Writing short, focused tests keeps them reliable and efficient, and makes it obvious which behaviour actually broke when one of them fails.
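
As an illustration, here is a sketch with one focused assertion per test; the helper methods are hypothetical stand-ins for real page interactions:

```java
import org.testng.Assert;
import org.testng.annotations.Test;

public class FocusedDashboardTests {

    @Test
    public void titleIsCorrect() {
        // One focused assertion: a failure here can only mean the title broke.
        Assert.assertEquals(fetchPageTitle(), "Dashboard");
    }

    @Test
    public void welcomeMessageIsShown() {
        Assert.assertTrue(fetchWelcomeMessage().contains("Welcome"));
    }

    // Hypothetical helpers standing in for real page interactions.
    private String fetchPageTitle() { return "Dashboard"; }
    private String fetchWelcomeMessage() { return "Welcome, user"; }
}
```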

  • Define optimal locators:

We should always write optimal locators for elements, as this ensures robust and efficient code. If we use non-optimal locators instead, they increase our test overhead, and long XPaths break whenever any of the ancestor elements they depend on change in the DOM. Hence we must use optimal locators in our scripts.
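
For example, in Selenium the difference might look like this; the locator values are illustrative:

```java
import org.openqa.selenium.By;

public class LocatorExamples {
    // Brittle: depends on the full DOM path and breaks whenever
    // any ancestor element changes.
    By fragile = By.xpath("/html/body/div[2]/div[1]/form/div[3]/button");

    // Robust: anchored to a stable, unique attribute.
    By stable = By.id("submit-button");

    // Reasonable fallback when no id exists: a short, relative selector.
    By cssFallback = By.cssSelector("form#login button[type='submit']");
}
```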

Handling Flakiness – Having Proper Framework

  • Framework should handle synchronization issues:

A proper framework clearly helps us achieve uniformity in code and proper maintenance of test suites. To keep synchronization issues from causing flakiness, the framework must wait for the application to reach the expected state rather than assume it is already there.

[Figure: framework handling flakiness due to synchronization issues]
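
One common way such a framework handles synchronization is through explicit waits. A minimal sketch in Selenium 4 style, where the 10-second timeout is an assumption:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SyncHelper {
    // Waits until the element is clickable instead of relying on
    // Thread.sleep(), which either wastes time or is not long enough.
    public static WebElement waitForClickable(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(ExpectedConditions.elementToBeClickable(locator));
    }
}
```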

  • Framework should handle issues due to different driver implementations:

The WebDriver interface is implemented by browser-specific drivers such as FirefoxDriver, ChromeDriver, and InternetExplorerDriver. Each implementation differs according to the usage and behaviour of its browser, so these differences should be handled in one place in order to remove this source of flakiness. A depiction is shown below.

[Figure: framework handling flakiness due to different driver implementations]
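
One common pattern is a small factory that hides the browser-specific setup behind a single method. A minimal sketch, where the option flags are illustrative:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;

public class DriverFactory {
    // Centralises browser-specific setup so tests stay identical
    // regardless of which driver implementation runs underneath.
    public static WebDriver create(String browser) {
        switch (browser.toLowerCase()) {
            case "chrome":
                ChromeOptions chromeOptions = new ChromeOptions();
                chromeOptions.addArguments("--disable-notifications");
                return new ChromeDriver(chromeOptions);
            case "firefox":
                return new FirefoxDriver(new FirefoxOptions());
            default:
                throw new IllegalArgumentException("Unsupported browser: " + browser);
        }
    }
}
```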

Handling Flakiness – when scripts are executed in parallel mode

The major steps to avoid flakiness during parallel execution are:

  • Avoiding the use of shared resources across test cases (e.g. one WebDriver per thread, as sketched below)
  • Making test cases independent
  • Restarting grid machines at least once a month
  • Terminating all open browsers before executing any suite; this ensures the suite runs without interference or lag
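
A common way to avoid sharing a driver across threads is to hold one WebDriver per thread in a ThreadLocal. A minimal sketch, using Chrome purely as an example:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class DriverManager {
    // Each parallel thread gets its own WebDriver instance, so tests
    // never share a browser session (a common source of flakiness).
    private static final ThreadLocal<WebDriver> DRIVER =
            ThreadLocal.withInitial(ChromeDriver::new);

    public static WebDriver getDriver() {
        return DRIVER.get();
    }

    // Quit and clear the thread's driver after its tests finish.
    public static void quitDriver() {
        DRIVER.get().quit();
        DRIVER.remove();
    }
}
```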

Choosing a proper combination of browser versions and Selenium jars:

Just as a proper recipe is needed to prepare a delicious dish, we must choose compatible versions of the Selenium jars and the respective browsers in order to keep our automation code free of flakiness. An incompatible browser-and-jar combination leads to test case failures without obvious reasons.

We hope that these learnings will help you make your tests more robust and reliable.

Happy Automation!!