How to Start with Autotests

Written by sharovatov | Published 2022/04/08
Tech Story Tags: management | quality-assurance | autotests | code-quality | software-development | software-qa | qa-best-practices | software-testing


This post is part 2 of my "autotests" series.

The first part is How do I Encourage my Manager to Support Automated Tests?

This article is not about the technical details of how exactly to write autotests, but about the approach you can take to ensure the autotests initiative is not rejected.


Once you've persuaded your manager to support automated testing, you need to plan it right: acceptance of the initiative doesn't guarantee unlimited probation time.

Remember to step into the manager's shoes: even after yielding to someone's plan, they still harbour doubts and hesitations. If there are no noticeable (and by noticeable I mean measurable) results, those doubts and hesitations will only grow.

Letā€™s see what we can do with numbers.

There is broad consensus that automated tests (and TDD) benefit the software development process in the following ways:

  • save time (reduce time to market) [link]
  • improve quality and lower total cost of ownership [link]
  • improve team morale (mundane, repetitive tasks are among the sources of developers' unhappiness) [link]
  • reduce onboarding costs (developers have less fear of changing and breaking things)


There's a consensus in the software development area that measuring quality is a bad idea, so I wouldn't recommend going that route.

It's also impossible to measure team morale with numbers, so this route is useless too.

We can show only two measurable things: onboarding cost (time) and time to market.

However, onboarding costs will drop noticeably only after some time, while we want to show immediate results.

So we are left with showing how time to market is reduced.

Time to market (TTM) comprises the time taken to pass all the steps required to develop and deliver a piece of functionality to the client.

The manual QA stage usually takes significant time: a feature waits for the QA team to pick it up, and then the testing itself is done.

The manual QA stage also implies massive delays when a feature is returned to development due to defects.

The reason for the delay is that when the feature is passed to QA, the developer usually picks up a new feature to work on, so when the tested feature is passed back to development, it has to wait again.

If the developer picks up the returned feature straight away, context-switching costs apply.

I've mentioned studies on context-switching costs in my code review limits to applicability article:

There are multiple publications on how multitasking and switching contexts are ineffective:

There's good scientific evidence that it's more effective to work on one single task at a time with as few context switches as possible.


This shows that to reduce time to market (and demonstrate immediate results for this initiative), the manual QA phase should be reduced, because:

  • automated tests run as soon as the developer triggers them (zero waiting time)
  • automated tests run quickly (much lower testing time) and therefore provide almost instant feedback, meaning there is no context-switching cost.
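To make the point concrete, here is a minimal sketch of such a check in Python. The `authenticate` function is a hypothetical stand-in for a real login flow, not code from any particular project; the point is only that feedback arrives in milliseconds instead of a QA round-trip.

```python
# Minimal sketch of an automated check. `authenticate` is a hypothetical
# stand-in for a real login flow; in a real project the tests would call
# your application code instead.

def authenticate(username: str, password: str) -> bool:
    """Toy authentication: accepts one known user (illustration only)."""
    known_users = {"alice": "s3cret"}
    return known_users.get(username) == password

def test_valid_credentials_are_accepted():
    assert authenticate("alice", "s3cret")

def test_wrong_password_is_rejected():
    assert not authenticate("alice", "guess")

def test_unknown_user_is_rejected():
    assert not authenticate("mallory", "s3cret")

if __name__ == "__main__":
    # Run the checks directly, no test runner needed: zero waiting time,
    # near-instant feedback.
    test_valid_credentials_are_accepted()
    test_wrong_password_is_rejected()
    test_unknown_user_is_rejected()
    print("all checks passed")
```

In practice you would run such checks via a test runner (pytest, Jest, and so on), but the economics are the same: the developer gets a verdict immediately instead of waiting for a QA slot.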

Where to start

As we want to show our manager some immediate benefits, the obvious idea is to start with automating something that will show immediate results.

I'd advise automating tasks that have significant 'manual testing' time.

Every task tracker can show you the time a task spends in the 'testing' phase, and if you use a test management system, you can get even more information.

Here's what I have in my Qase TMS:

As you can see, the obvious choice here would be to start automating 'authorisation' first, as it takes 9 minutes to test manually every release.

These 9 minutes add up to 8 hours a year spent directly on manual testing of this particular feature if we have one release per week.
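The arithmetic behind that figure is a back-of-the-envelope sketch, assuming one release per week:

```python
# Back-of-the-envelope cost of one 9-minute manual check at one release a week.
minutes_per_release = 9
releases_per_year = 52  # one release per week

total_minutes = minutes_per_release * releases_per_year  # 468 minutes
total_hours = total_minutes / 60                         # 7.8 hours, i.e. ~8

print(f"{total_minutes} minutes = {total_hours:.1f} hours a year")
```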

As soon as we automate this check, we can show our manager that TTM is reduced by 9 minutes for each task. This will inevitably gain us more trust.

The more trust you have, the more you can do what you know is right, potentially something that cannot be 'measured' or quantified.

Deming calls 'managing only what's quantifiable' one of the 7 deadly diseases of management:

  1. Management by use only of visible figures, with little or no consideration of figures that are unknown or unknowable.

But I've rarely seen managers who know this, hence this whole approach of showing results and gaining trust to do what's truly right.


So, to summarise:

  • you've persuaded your manager to support autotests
  • you constantly show them immediate results
  • you gain the trust (and the resources) to develop a proper testing strategy
  • you start doing what's right

To proceed with a proper testing strategy, first check Fowler's document.


Written by sharovatov | Pragmatic humanist. 13 years of JS development, 7 years being a teamlead and consultant. Love sports and cats.
Published by HackerNoon on 2022/04/08