
I have been toying with Test Automation for the past 15 years and one thing that keeps coming up time and again is how to measure the success of test automation.

Initially, we measured the % of test cases that were automated. We set a goal – X% of test cases to be automated within a time period – then went about measuring the progress and the trend, and felt good when we beat the goal.

We soon figured out that this had serious limitations. For example, we might have automated 70% of the test cases, but during a particular regression cycle we might execute only 25% of them – because the others were not relevant for that cycle. Pretty soon, we ended up in some interesting (read ‘heated’) discussions with stakeholders about how 70% could drop to 25%, and so on. Test automation engineers complained that the stakeholders did not understand what was involved in test automation and regression testing. In the end, neither we nor the stakeholders were happy with the situation.

It does not have to be this way. Let us go back to the basics.

Why do we do Test Automation? To improve test efficiency.

Okay – so, what does that mean? It means that we can save effort, time and possibly cost as well.

Alright – if that is the value we are supposed to get from test automation – then why are we NOT measuring that?

Instead of measuring the % of test cases that were automated, test automation metrics should focus on the savings we get from executing the automated test cases – in terms of effort and time.

It looks simple. But calculating the effort and time savings from test automation execution is anything but simple. To arrive at the savings:

  1. We need to know how much time and effort it takes to execute each test case manually
  2. We need to capture that information somewhere
  3. We then need to map it to the automation scripts that were actually run, and calculate the savings.

Arriving at this takes some effort and collaboration with the manual test team.
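To make the calculation concrete, here is a minimal sketch of what it looks like once that data is in place. The test case IDs, timings and data structures are all made up for illustration – our own tracking lived in a test management tool and Excel, not in code.

```python
# Hypothetical example of the per-cycle savings calculation described above.

# Manual execution effort per test case, in minutes (captured from the manual test team).
manual_effort_minutes = {
    "TC-101": 30,
    "TC-102": 45,
    "TC-205": 20,
}

# Average runtime of the corresponding automation script, in minutes.
automated_runtime_minutes = {
    "TC-101": 3,
    "TC-102": 5,
    "TC-205": 2,
}

# Test cases whose automation scripts actually ran in this regression cycle.
executed_in_cycle = ["TC-101", "TC-205"]

def cycle_savings(executed):
    """Total minutes saved in one cycle: manual effort minus automated runtime."""
    return sum(
        manual_effort_minutes[tc] - automated_runtime_minutes[tc]
        for tc in executed
    )

print(f"Savings this cycle: {cycle_savings(executed_in_cycle)} minutes")
# -> Savings this cycle: 45 minutes, i.e. (30 - 3) + (20 - 2)
```

The point is that savings are counted only for what actually ran in the cycle, not for everything that happens to be automated.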

Agreed. But the % of test cases automated was not bringing out the value that test automation was supposed to deliver, so it makes sense to change it to a Test Automation Value metric.

We did just that and started measuring the test automation savings. Trust me, it was painful to collect this information – the test management tool we were using did not readily support capturing the information, aggregating it, and giving us the metric along with its trend. We had to rely on our good old friend MS Excel to arrive at this. Sometimes the savings accrued only over several test cycles.
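For what it is worth, the spreadsheet logic itself was trivial – roughly equivalent to something like this (the cycle names and numbers below are invented):

```python
# Hypothetical aggregation of per-cycle savings into a running total, as a trend.
cycle_savings_hours = {
    "Regression R1": 38.5,
    "Regression R2": 52.0,
    "Regression R3": 61.5,
}

running_total = 0.0
for cycle, saved in cycle_savings_hours.items():
    running_total += saved
    print(f"{cycle}: saved {saved:.1f} h (cumulative {running_total:.1f} h)")
```

The hard part was never the arithmetic; it was capturing reliable manual-effort numbers and keeping the mapping to automation scripts up to date.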

But the results were worth the pain. Once we shifted the metric to automation savings, we shifted behaviour in the right direction. Since we calculated the effort and time savings only on what was actually executed, test automation engineers focused heavily on automating the test cases –

  1. That were effort intensive, which gave the best savings
  2. That were business critical, since high-priority test cases get executed more often during regression cycles
  3. That had to be executed across different configurations – OS, browser, DB, device combinations – since these resulted in huge savings (a rough prioritisation sketch follows below).
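If you want to make that prioritisation explicit, a back-of-the-envelope ranking might look like the following. The candidate test cases, timings, run counts and the assumed 3-minute automated runtime are all hypothetical.

```python
# Hypothetical ranking of automation candidates by expected savings, reflecting
# the three factors above: manual effort, execution frequency, configurations.

candidates = [
    # (test case, manual minutes, runs per quarter, configurations)
    ("TC-310", 60, 4, 1),    # effort intensive, run occasionally
    ("TC-115", 25, 12, 1),   # business critical, run every cycle
    ("TC-420", 20, 6, 8),    # must run on 8 OS/browser combinations
]

def expected_quarterly_savings(manual_minutes, runs, configs, automated_minutes=3):
    """Minutes saved per quarter if automated (automated_minutes is an assumed average script runtime)."""
    return (manual_minutes - automated_minutes) * runs * configs

ranked = sorted(
    candidates,
    key=lambda c: expected_quarterly_savings(c[1], c[2], c[3]),
    reverse=True,
)
for tc, manual, runs, configs in ranked:
    print(tc, expected_quarterly_savings(manual, runs, configs), "minutes/quarter")
```

Multi-configuration test cases tend to rise to the top of such a ranking, which matches what we saw in practice.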

People tend to align their behaviours with what gets measured. When we measure the right things, we enable the right behaviours. Please share which metrics made better business sense for you and how they changed behaviour in the right direction. Thanks.

– Shiva Jayagopal
