Following best practices or typing away blindly?
Deployment automation is quickly gaining ground. Backlog grooming is being perfected. Code quality control and automated testing are improving, but are still not fully adopted by most teams. These are just three of the lessons we learned from the yearly re-calibration of our development best-practices benchmark.
Read on for some interesting stats and what they tell us about where software development is heading. Bonus at the end: a quick self-assessment to benchmark your own team.
What are development best practices?
For over 15 years, my colleagues and I at the Software Improvement Group (SIG) have been in the business of evaluating the quality of code, architecture, and development practices for our customers.
We started out by simply giving an expert judgement, based on what we observed and what our own software-development gut told us.
As we collected more observations and gained more experience, we decided to whip out our scientific skills (🎓) and build a structured evaluation model. The model defines a series of checks to establish which development practices a given team applies, and it sets thresholds for classifying each practice as fully applied, partially applied, or not applied at all. We have now used this model for a number of years to provide teams with objective feedback and comparisons against peers.
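To make the check-and-threshold idea concrete, here is a minimal sketch of how such a model could score a single practice. The checks, thresholds, and function name are illustrative assumptions, not SIG's actual evaluation model.

```python
# Illustrative sketch only: a toy scoring model with made-up thresholds,
# not SIG's actual evaluation model.

def score_practice(checks_passed: int, checks_total: int) -> str:
    """Map the fraction of passed checks to an adoption level."""
    ratio = checks_passed / checks_total
    if ratio >= 0.8:   # hypothetical threshold for full adoption
        return "fully applied"
    if ratio >= 0.4:   # hypothetical threshold for partial adoption
        return "partially applied"
    return "not applied"

# Example: a team passes 3 of the 5 checks for some practice.
print(score_practice(3, 5))  # -> "partially applied"
```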
Ten best practices for effective software development. We described our structured evaluation model for software development best practices at length in the @OReillyMedia book “Building Software Teams”.
Annual updates of the evaluation model
About once per year, we update our benchmark repository with the observations and evaluations we have collected throughout the past year and use this additional data to adjust the model.
This annual update is also an excellent moment to study the data and look for trends. Are we getting better at software development? We just finished the latest calibration, and here is what we learned about trends in development best practices.
Lesson #1: More teams apply deployment automation
We measure the practice of deployment automation by looking at whether teams have a deployment process in place that is quick, repeatable, and preferably fully automated.
For example, a team that deploys each new release with a single push of a button would receive a perfect score on this practice, codified as fully applied. But a team that needs to go through a limited number of well-documented manual steps would be scored as partially applied.
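As an illustration, a "single push of a button" can be as simple as one script that chains the necessary steps and aborts on the first failure. The make targets below are hypothetical placeholders for whatever build, test, and release commands a real pipeline would use.

```python
#!/usr/bin/env python3
"""Minimal sketch of a one-command deployment. The `make` targets are
hypothetical; a real pipeline would add rollbacks, health checks, and
approvals."""

import subprocess
import sys

STEPS = [
    ["make", "build"],    # compile and package
    ["make", "test"],     # run the automated test suite
    ["make", "release"],  # push the artifact to production
]

def deploy() -> None:
    for step in STEPS:
        print("running:", " ".join(step))
        # Abort the deployment as soon as any step fails.
        if subprocess.run(step).returncode != 0:
            sys.exit(f"deployment aborted at: {' '.join(step)}")
    print("deployment finished")

if __name__ == "__main__":
    deploy()
```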
More teams fully apply deployment automation (43%, was 30%) and fewer teams do not apply deployment automation at all (11%, was 26%).
As the chart shows, full adoption of deployment automation jumped from 30% to 43%, while the share of teams that do not apply it at all dropped from 26% to 11%.
This is a significant positive trend that is mirrored by the trends in continuous integration (automatic compilation and testing after each change) and continuous delivery (automatic deployment after each change), as shown in the following charts.
Full or partial adoption of continuous integration (currently 68%) has improved significantly, but still lags behind deployment automation (currently 89%). Adoption of continuous delivery has also improved significantly, but still has a long way to go (currently 29%).
Though the trends for these two practices are equally positive, their adoption still lags behind that of deployment automation. Especially for continuous delivery, the great majority of teams (and the organisations of which they are part) still have a long way to go.
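For clarity, the sketch below contrasts the two practices using the same hypothetical make targets as before: continuous integration stops after building and testing each change, while continuous delivery goes one step further and ships every change that passes.

```python
# Illustrative sketch of the difference between the two practices,
# using hypothetical build/test/deploy commands.
import subprocess

def run(cmd: list[str]) -> bool:
    return subprocess.run(cmd).returncode == 0

def on_each_change(continuous_delivery: bool = False) -> None:
    # Continuous integration: compile and test after every change.
    if not (run(["make", "build"]) and run(["make", "test"])):
        print("change rejected: build or tests failed")
        return
    # Continuous delivery: every change that passes the pipeline
    # is deployed automatically as well.
    if continuous_delivery:
        run(["make", "release"])
```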
Lesson #2: Almost all teams groom their backlogs
Nearly all teams (95%, was 92%) maintain product and sprint backlogs and a significantly larger portion of teams applies this best practice fully (80%, was 71%).
The best practice of backlog grooming already enjoyed high adoption, with 71% of the teams diligently maintaining both Product and Sprint backlogs, and 22% doing so at least partially. As teams perfected their backlog grooming, full adoption increased to 80%. Only a small percentage of teams (5%, down from 8%) does not do any backlog grooming at all.
In fact, most Agile-Scrum best practices that we assess showed improvement, or stable high adoption. With one small exception:
More teams do not stick to the discipline of holding all meetings prescribed by Scrum (15%, up from 11%).
As the chart shows, fewer teams seem to stick to the discipline of holding meetings prescribed by Scrum (planning, daily standup, review, retrospective). This may not be a bad thing per se, as more experienced teams are encouraged to adapt their meeting rhythms to their own needs.
Lesson #3: Code quality control and testing are improving
Fewer teams are failing to enforce consistent code quality standards (20%, down from 25%). Fewer teams fail to run automated tests at each commit (41%, down from 48%).
Quality control is an essential part of professional software development. Nevertheless, the best practices of code quality control and automated testing are still only fully applied by a minority of the teams.
To assess code quality control, we observe whether a team works with clear coding standards, systematically reviews code (against these standards and against principles of good design), and whether automated code quality checks are performed. Full adoption of code quality control has somewhat increased (31%, up from 23%), but still 20% of teams are producing code without adequate quality control in place.
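A typical first step towards automated quality checks is a quality gate in the build: fail fast whenever the code violates the agreed standard. The sketch below assumes flake8 is installed and uses example limits; any linter and any thresholds your team agrees on would serve the same purpose.

```python
# Minimal sketch of an automated quality gate: fail the build when the
# code violates the team's coding standard. Assumes flake8 is installed;
# the line-length and complexity limits are just example choices.
import subprocess
import sys

result = subprocess.run(
    ["flake8", "--max-line-length=100", "--max-complexity=10", "src/"]
)
if result.returncode != 0:
    sys.exit("quality gate failed: fix the reported violations first")
print("quality gate passed")
```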
To assess testing practices, we observe whether teams have an automated regression test suite that is being executed consistently after each change. Full adoption of this practice is increasing (33%, up from 29%). The percentage of teams that change their code without proper regression testing is declining rapidly but is still a staggering 31% (down from 48%).
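One lightweight way to make sure the regression suite really runs on every change is to hook it into version control. The following sketch assumes pytest and a tests/ directory; a CI server achieves the same more robustly, but a hook is a cheap start.

```python
#!/usr/bin/env python3
# Save as .git/hooks/pre-push (and make it executable) to run the
# regression suite before every push. Assumes pytest and a tests/ dir.
import subprocess
import sys

result = subprocess.run(["pytest", "tests/", "-q"])
if result.returncode != 0:
    sys.exit("push rejected: regression tests failed")
```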
Getting better?
So our question, “Are we getting better at software development?”, can be answered: yes, but at a modest pace.
For some practices, the needle doesn’t seem to have moved much over the past year (e.g. documenting just enough, managing third-party components, using SMART requirements and goal-oriented metrics). I won’t bore you with the flat-lining charts.
We do see significant progress on a number of practices, especially deployment automation, continuous integration, code quality control, and automated testing. This is incredibly good news!
But we’re not there yet. Personally, I’m a bit shocked that less than 1 in 3 software development teams follow quality and testing best practices, since adopting these best practices can bring immediate benefits with limited effort.
Less than 1 in 3 software development teams follow quality and testing best practices
What can you do?
Benchmark your own team: go through the ten practices above and honestly score each one as fully applied, partially applied, or not applied. This quick self-assessment shows where you stand relative to the teams in our benchmark, and which practices to pick up next.
Joost Visser is CTO at the Software Improvement Group (SIG), Professor of Large-scale Software Systems at Radboud University, author of the O’Reilly books Building Maintainable Software and Building Software Teams, and lead of the Better Code Hub team at SIG.
Thanks to Lodewijk Bergmans for crunching and charting data!
Building Software Teams: “Why does poor software quality continue to plague enterprises of all sizes in all industries? Part of the problem lies…” (shop.oreilly.com)