One of the most significant cultural shifts in the software development industry is the adoption of DevOps practices. DevOps brings together the worlds of development and operations to create a collaborative, agile, and automated approach to software delivery. This approach helps companies release software quickly and efficiently, delivering a faster time to market, improved customer satisfaction, and increased revenue.
To truly measure the success of a DevOps implementation, teams and organizations must rely on metrics. Metrics provide insight into how the project is progressing, identify areas for improvement, and help teams make data-driven decisions. One of the most widely used frameworks for measuring DevOps performance is the DORA metrics framework.
The DORA Metrics Framework
The DevOps Research and Assessment (DORA) metrics framework was developed to provide a set of indicators that can be used to measure the performance of software development teams. This framework includes four key metrics to measure DevOps success:
1. Deployment Frequency: The frequency at which code is deployed to production.
2. Lead Time for Changes: The elapsed time from when code is committed until it is deployed to production.
3. Mean Time to Recover (MTTR): The time it takes to recover from a production incident.
4. Change Failure Rate: The percentage of changes that fail in production.
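To make these definitions concrete, here is a minimal sketch of how the four metrics might be computed from deployment and incident records. The record structure, field names, and values (commit_time, deploy_time, failed, the incident durations) are hypothetical illustrations, not part of the DORA specification.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: when the change was committed, when it was
# deployed to production, and whether it failed in production.
deployments = [
    {"commit_time": datetime(2023, 5, 1, 9, 0), "deploy_time": datetime(2023, 5, 1, 15, 0), "failed": False},
    {"commit_time": datetime(2023, 5, 2, 10, 0), "deploy_time": datetime(2023, 5, 3, 11, 0), "failed": True},
    {"commit_time": datetime(2023, 5, 4, 8, 30), "deploy_time": datetime(2023, 5, 4, 12, 0), "failed": False},
]
# Hypothetical time taken to restore service after each production incident.
incident_durations = [timedelta(hours=2), timedelta(minutes=45)]
period_days = 7  # length of the measurement window

# Deployment Frequency: deployments per day over the measurement window.
deployment_frequency = len(deployments) / period_days

# Lead Time for Changes: average time from commit to production deployment.
lead_times = [d["deploy_time"] - d["commit_time"] for d in deployments]
lead_time_for_changes = sum(lead_times, timedelta()) / len(lead_times)

# Mean Time to Recover: average time to restore service after an incident.
mttr = sum(incident_durations, timedelta()) / len(incident_durations)

# Change Failure Rate: share of deployments that caused a failure in production.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f} per day")
print(f"Lead time for changes: {lead_time_for_changes}")
print(f"MTTR: {mttr}")
print(f"Change failure rate: {change_failure_rate:.0%}")
```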
The DORA metrics are now the de facto measure of DevOps success for most organizations, and there’s a consensus that they are a sound way to assess the performance of software teams. However, when handling metrics, teams must always be careful to remember Goodhart’s law.
Goodhart’s Law and Metric Management
Goodhart’s law states that “when a measure becomes a target, it ceases to be a good measure.” In the context of DevOps, this means that when teams start to use the DORA metrics as targets to be achieved, it can lead to a range of unintended consequences. For instance, teams may prioritize achieving the metrics, even if it means sacrificing code quality and stability.
The Ambition of DevOps: Minimizing Release Delays
At the heart of DevOps is an ambition that teams never put off a release simply because they want to avoid the process. This means that deployment frequency should be as high as possible, while lead time for changes and MTTR (Mean Time to Recover) should be as low as possible. Achieving this ambition necessitates a fundamental shift in how teams develop and deploy software. However, it must not come at the expense of code quality, stability, and reliability.
The Pitfalls of Increased Deployment Frequency for Low-Performing Teams
Studies have shown that low-performing teams experience significant instability when they attempt to increase their deployment frequency simply by working harder. Such teams need to address underlying issues in their development and testing processes before they can succeed at increasing their deployment frequency.
The Importance of Maintaining Release Quality
Reductions in lead times should result from an improved approach to product management and enhanced deployment frequency, not from a more relaxed approach to release quality that skips existing checks and avoids process improvements. DevOps success means delivering working software quickly, reliably, and securely, while ensuring that release quality remains a top priority.
Understanding Change Failure Rate as a Measure of Quality
The change failure rate measures the percentage of releases that result in a failure, bug, or error; for example, if 5 of 50 deployments in a month require a rollback or hotfix, the change failure rate is 10%. This metric tracks release quality and highlights areas where testing processes are falling short. It serves as a useful control on the other DORA metrics, which otherwise push teams to accelerate delivery with no built-in guarantee that release quality is maintained.
The Need to Balance Accelerated Delivery with Release Quality
It’s important to remember that the DORA metrics are not standalone indicators of DevOps success. If your data for the other three metrics shows a positive trend, but the change failure rate is soaring, you have the balance wrong. Teams must balance accelerated delivery with release quality.
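As an illustration of that balance check, the sketch below flags an unhealthy trend when the change failure rate climbs past a chosen threshold even though the speed metrics look good. The threshold and the metric snapshots are hypothetical values chosen for the example, not recommendations.

```python
# Hypothetical snapshots of the four DORA metrics for two consecutive quarters.
previous = {"deploys_per_day": 0.8, "lead_time_hours": 48, "mttr_hours": 6, "change_failure_rate": 0.10}
current = {"deploys_per_day": 2.5, "lead_time_hours": 12, "mttr_hours": 3, "change_failure_rate": 0.28}

CFR_THRESHOLD = 0.15  # illustrative ceiling for an acceptable change failure rate

# Speed is improving when deployments are more frequent and lead time and MTTR are falling.
speed_is_improving = (
    current["deploys_per_day"] > previous["deploys_per_day"]
    and current["lead_time_hours"] < previous["lead_time_hours"]
    and current["mttr_hours"] < previous["mttr_hours"]
)

if speed_is_improving and current["change_failure_rate"] > CFR_THRESHOLD:
    print("Warning: delivery is accelerating but release quality is slipping. Rebalance.")
elif speed_is_improving:
    print("Delivery is accelerating and release quality is holding. The balance looks healthy.")
else:
    print("Speed metrics are not improving. Look for bottlenecks before pushing frequency.")
```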
Using Data from Metrics to Achieve Proper Balance
The real value of metrics lies in the ability to pinpoint areas for improvement and to consistently track performance over time. By using the DORA metrics framework, teams can identify which areas of the development process need improvement and work to enhance processes and remove bottlenecks. This way, metrics serve as a means to achieve a proper balance and optimize DevOps performance.
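Tracking a metric over time is what turns it from a target into a diagnostic. The sketch below compares recent weekly lead-time measurements with earlier ones to see whether process changes are actually removing bottlenecks; the numbers are invented for the example.

```python
# Hypothetical weekly lead-time-for-changes measurements, in hours.
weekly_lead_time_hours = [52, 47, 49, 38, 33, 30, 24, 26]

# Compare the average of the most recent weeks with the earlier weeks
# to see whether the trend is moving in the right direction.
midpoint = len(weekly_lead_time_hours) // 2
earlier = weekly_lead_time_hours[:midpoint]
recent = weekly_lead_time_hours[midpoint:]

earlier_avg = sum(earlier) / len(earlier)
recent_avg = sum(recent) / len(recent)

print(f"Earlier average lead time: {earlier_avg:.1f} h")
print(f"Recent average lead time: {recent_avg:.1f} h")

if recent_avg < earlier_avg:
    print("Lead time is trending down; process changes appear to be removing bottlenecks.")
else:
    print("Lead time is not improving; revisit where changes wait in the pipeline.")
```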
Treating Metrics as Targets: The Danger of Imbalance
If we think back to Goodhart’s Law and start treating metrics as targets rather than indicators, teams may end up with a misleading sense of project progress, an imbalance between goals and culture, and releases that fall short of their true potential. Teams must remember to use metrics as indicators of progress, not as targets to pursue blindly.
The Value of DORA Metrics in Demonstrating Progress and Business Value
When used properly, the DORA metrics are a brilliant way to demonstrate your team’s progress, and they provide evidence that you can use to explain the business value of DevOps. To derive the most value from the metrics, teams should aim to balance them with the overall goals and vision of the organization. The DORA metrics framework should be used as a guide, not as a set of targets to be achieved.
The DORA metrics framework provides an excellent starting point for teams to measure and optimize their DevOps processes. By measuring deployment frequency, lead time for changes, MTTR, and change failure rate, teams can accurately track progress towards DevOps objectives. However, teams must be careful not to treat the metrics as targets to blindly pursue. Instead, they should use them to identify areas for improvement and strike a balance between accelerated delivery and release quality. In this way, teams can leverage the DORA metrics framework to achieve long-term success and deliver real business value.