A tale of four metrics

I’d like to share the story of a successful startup whose engineering team more than doubled in size over the last year.

For the leadership team and the managers, it became hard to maintain visibility into product deliveries. Their previous method, dropping in on dailies and having ad-hoc conversations, no longer worked given the growth in the number of teams, members, and ceremonies. Their major concern was how to make sure they kept up the pace and quality of their product.

One of the managers, who had read Accelerate, put together a collection of scripts that calculated the progress of the four key metrics for her team over the past year. She immediately gained a thermometer that let her flag whether her team was moving in the right direction, and the clarity to communicate it to her stakeholders.

Her dashboard was simple, with just four metrics (a sketch of how they might be computed follows the list):

  • Deployment frequency: How often her team deployed to production;
  • Lead time: How long it took for a commit to be deployed to production;
  • Mean time to recover (MTTR): How long it took the team to recover from an outage;
  • Change fail rate: The percentage of deployments that caused an outage.
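
Under the hood, such scripts don’t need to be fancy. Below is a minimal sketch in Python of how the four metrics might be computed, assuming you can export deployment records (commit and deploy timestamps, plus whether the deploy caused an outage) and outage records (start and recovery times). The data shapes and values here are illustrative, not the ones her scripts actually used.

```python
from datetime import datetime, timedelta

# Illustrative input data, assumed exported from a deploy log and an
# incident tracker; the real scripts in the story are not shown in the post.
deployments = [
    # (commit timestamp, deploy timestamp, caused_outage)
    (datetime(2021, 3, 1, 9), datetime(2021, 3, 1, 15), False),
    (datetime(2021, 3, 2, 10), datetime(2021, 3, 4, 11), True),
    (datetime(2021, 3, 5, 8), datetime(2021, 3, 5, 17), False),
]
outages = [
    # (outage start, recovery)
    (datetime(2021, 3, 4, 11), datetime(2021, 3, 4, 14)),
]
period_days = 30  # measurement window

# Deployment frequency: deployments per day over the window.
deployment_frequency = len(deployments) / period_days

# Lead time: average time from commit to production deploy.
lead_time = sum(
    (deployed - committed for committed, deployed, _ in deployments),
    timedelta(),
) / len(deployments)

# Mean time to recover: average outage duration.
mttr = sum(
    (recovered - start for start, recovered in outages),
    timedelta(),
) / len(outages)

# Change fail rate: share of deployments that caused an outage.
change_fail_rate = sum(
    1 for *_, caused_outage in deployments if caused_outage
) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Lead time: {lead_time}")
print(f"MTTR: {mttr}")
print(f"Change fail rate: {change_fail_rate:.0%}")
```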



When she first looked at her dashboard, she was surprised by the MTTR; it was much higher than she liked. She waited for the next outage to see first-hand how the team would react, and quickly understood what was going on: they had neglected to document incident management as part of onboarding for new members. She provided incident-response training for the new members of her team and gave them better tools so they could respond to incidents faster.

Another thing she found was long lead times for new services that, in theory, should have performed better than the old legacy systems. The new services had better architectures and better standards, but that wasn’t reflected in the pace of delivery. After following the flow of a work item through one of those services, she discovered the bottleneck: the new services were not doing Continuous Delivery yet. That alone meant that all the investment they had made in repaying technical debt wasn’t having an impact on the business. She immediately prioritized the creation of Continuous Delivery pipelines for those services.

She keeps a close eye on the key metrics to make sure that breaking down the legacy application is bringing positive outcomes, instead of just moving the spaghetti mess from one place to another and ending up with more, smaller tangles of spaghetti.

As she grew more confident in using metrics with her team, she wanted to introduce metrics that would incentivize them to keep iterations small. She decided to start measuring how big each deployment was (one way to do that is sketched below). Now her team could reflect in retrospectives, on their own, on how the size of the iterations they defined affected their deliveries.
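
There are several reasonable ways to measure deployment size; the sketch below uses one simple proxy, the number of lines changed between consecutive release tags in git. The helper and the tag names are hypothetical, not something from the story.

```python
import subprocess

def lines_changed(base_tag: str, head_tag: str, repo: str = ".") -> int:
    """Total lines added plus deleted between two git tags."""
    out = subprocess.run(
        ["git", "-C", repo, "diff", "--shortstat", base_tag, head_tag],
        capture_output=True, text=True, check=True,
    ).stdout
    # --shortstat prints a single line such as:
    #   "12 files changed, 340 insertions(+), 95 deletions(-)"
    total = 0
    for part in out.split(","):
        if "insertion" in part or "deletion" in part:
            total += int(part.strip().split()[0])
    return total

# Hypothetical usage: compare each release tag against the previous one.
tags = ["release-41", "release-42", "release-43"]  # illustrative tag names
for base, head in zip(tags, tags[1:]):
    print(head, lines_changed(base, head), "lines changed")
```

Lines changed is a crude proxy; commits or files touched per deploy would work just as well. What matters for the retrospective is that the trend is visible.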

On top of that, by combining change size with lead time, she was able to present a convincing case that her teams would benefit from more investment in Continuous Deployment tooling.
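
One way to make that case concrete, sketched here with made-up numbers, is to check whether bigger deployments correlate with longer lead times:

```python
from statistics import correlation  # available since Python 3.10

# Illustrative data only: (lines changed, lead time in hours) per deployment.
sizes = [120, 850, 40, 2300, 310, 95]
lead_times_hours = [6, 30, 4, 72, 12, 5]

r = correlation(sizes, lead_times_hours)  # Pearson's r
print(f"Correlation between deploy size and lead time: r = {r:.2f}")
```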

I wanted to share these examples with those who are looking to help their teams improve through objective data. As Seth Godin puts it, “Just because something is easy to measure doesn’t mean it’s important.” Metrics can be a double-edged sword, doing more harm than good by focusing your team on the wrong objectives; I’ll share some examples of that in a later post.

Making a difference is easy with the right data. If you are interested in measuring these metrics for your team, check out our new product, pulse.
