If you had data to predict the winning lottery number, would you still pick a number just because it looks pretty or ugly? In your projects you do have data: predict and decide how to act based on empirical evidence, and flee from intuition.
Even today you hear phrases like "Scrum... isn't that some hippie invention?". These comments usually come from people accustomed to traditional software development methods: writing huge specification documents, planning by intuition, watching those plans fail, and trying to equate software development with the construction of a car or a building.
In an environment characterized by high uncertainty and constant and unpredictable changes, an empirical method must be used that, through constant deliveries, allows us to obtain the information and knowledge necessary to make decisions. This knowledge arises from the comparison between a hypothesis and the actual result obtained after delivery.
Therefore, the product strategy to follow in Agile is to define an MVP (Minimum Viable Product), compare the results, measured against pre-established strategic metrics (KPIs), with the assumptions we made, correct those assumptions, and iterate again with the knowledge gained.
Borrowing the Scrum.org approach, we could call the strategic metrics we spoke of earlier direct metrics, since they bear directly on the value we want to generate: economic benefit, social benefit, improvement of the company's valuation...
These metrics inform us of the status of the strategic objectives of the company, but by themselves they do not give us enough information to know where we should put the focus to improve them. To do this, we must rely on circumstantial metrics, which are metrics that only add value through their aggregation and balancing.
When we compare the direct metrics with the circumstantial ones, which come from every possible aspect of the development process, you will no longer need a fortune teller to know what to do.
You can even measure aspects often forgotten by companies, such as the return on investments in training, sponsorships or improvements to the work environment.
Let's name some metrics, grouped following Jason Tice's approach:
- Metrics on the health of the process. Focused on evaluating the activities required for delivery and the changes that occur throughout the development process: cumulative flow diagram (lets us see and compare at a glance the lead time, cycle time, WIP, size of the backlog...), control chart, flow efficiency, time in progress vs. time in process, blocked time per item...
- Metrics on deployments. Focused on the continuous delivery process: errors detected and their resolution time, deployment time, successful deployment rate, stabilization time of a release, time between deployments with new functionality, cost per release, release adoption/installation ratio.
- Metrics on the development of the product. They help measure the alignment of the new functionalities developed with the real needs of the users: delivered business value, NPS, risk burndown, push/pull costs, product forecast, user usage analytics.
- Metrics on the code. To measure the quality of the architecture and the code: unit test coverage rate, time needed to build the package, failure density, code complexity, unavailability ratio...
- Metrics on the team. Metrics on human factors, work environment and team commitment: mood/happiness/morale of the team, Gallup Q12, learning log, team seniority, transparency (customer access, data, how learning is shared, successes and failures)...
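The process-health numbers above (lead time, cycle time, WIP) can be derived from plain ticket timestamps. Here is a minimal sketch, assuming a hypothetical ticket structure with `created`, `started` and `done` dates (`None` meaning still open):

```python
from datetime import date

# Hypothetical ticket records: when they were requested, started and delivered.
tickets = [
    {"id": "T-1", "created": date(2023, 5, 1), "started": date(2023, 5, 3), "done": date(2023, 5, 9)},
    {"id": "T-2", "created": date(2023, 5, 2), "started": date(2023, 5, 4), "done": date(2023, 5, 6)},
    {"id": "T-3", "created": date(2023, 5, 5), "started": date(2023, 5, 7), "done": None},
]

finished = [t for t in tickets if t["done"]]

# Lead time: from request to delivery. Cycle time: from start of work to delivery.
avg_lead = sum((t["done"] - t["created"]).days for t in finished) / len(finished)
avg_cycle = sum((t["done"] - t["started"]).days for t in finished) / len(finished)

def wip(day):
    """Work in progress on a given day: started but not yet delivered."""
    return sum(1 for t in tickets
               if t["started"] <= day and (t["done"] is None or t["done"] > day))

print(avg_lead, avg_cycle, wip(date(2023, 5, 8)))
```

The same per-ticket data feeds the cumulative flow diagram: counting, for each day, how many items sit in each state.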
Which metrics should you choose? It depends a lot on the nature of the business: the same value of a metric can be positive in one specific business and negative in another. Some advice:
- Choose only those circumstantial metrics that help you move a direct metric. What change do you expect them to cause? How could they be misinterpreted or gamed?
- Choose the right frequency for collecting the data and an expiry date for the experiment. How will you know that you no longer need it?
- Always use them as trends, not as isolated numbers.
- Never use a single metric. The team will optimize for it while disregarding other aspects of the project, and new defects that did not occur before will appear.
- Avoid useless metrics (vanity metrics): in general, those that focus on measuring the individual instead of the team as a whole (lines of code per individual, individual points...) and those that lose focus on real value (e.g., merely counting the number of new functionalities deployed).
- Maintain a holistic vision, reviewing the data as a whole.
- Take your context into account: do not compare metrics applied to tasks with those applied to user stories, epics or bugs. Even less should you compare between teams.
- Choose the metrics by consensus with the team; do not impose them.
- Share information with stakeholders as soon as possible.
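The "trends, not isolated numbers" advice is easy to operationalize: smooth the series before reading it. A small sketch, with invented weekly cycle-time figures, using a simple moving average:

```python
# Hypothetical weekly average cycle times, in days. Any single week is noisy;
# the smoothed series shows whether the process is actually improving.
cycle_times = [9, 7, 10, 6, 5, 6, 4, 5]

def rolling_mean(series, window=4):
    """Moving average: read the metric as a trend rather than as isolated points."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

trend = rolling_mean(cycle_times)
print(trend)  # steadily falling values suggest genuine improvement
```

The window size is itself a judgment call: too short and you chase noise, too long and you react late.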
Scrum.org proposes Evidence-Based Management for Software Organizations (EBMSO), which establishes a specific framework of metrics for an organization dedicated to offering services through software.
From that point of view, the survival and success of an organization depend on three direct metrics, or Key Value Measures (KVMs), which in turn rely on other circumstantial metrics:
- Current value: It indicates the current value of the organization in its market. It gives us context, but it does not tell us what that value will be in the future.
- Income per employee: total income / number of employees.
- Product cost ratio: expenses on improvements (tools, coaching, events...).
- Employee satisfaction: % of satisfied/dissatisfied employees.
- Customer satisfaction: % of satisfied/dissatisfied customers.
- Time to market: Time it takes the organization to launch new features, services and products...
- Frequency of deployments: time between deployments that provide new functionalities.
- Stabilization of deployments: bad development practices create the need for time to make corrections, and that time also grows over time.
- Cycle time: time that passes until a functionality is delivered to the end user, including stabilization.
- Innovation capacity: It is considered a luxury in some organizations, but the opposite is true. Maintaining features that are not used consumes a large part of the budget that could be spent on finding new ones:
- Index of installed versions: how many users are on the latest version compared to those still on maintenance versions.
- Usage index: % use of each functionality.
- Ratio of innovation: % of budget dedicated to innovation.
- Errors: % of bugs compared to the previous version.
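The KVM formulas above are all simple ratios, so they can be made concrete in a few lines. A sketch with invented figures; note that the exact definitions vary by organization (for instance, expressing the product cost ratio relative to revenue is just one plausible reading), so treat every name and number here as an assumption:

```python
# Hypothetical figures for one reporting period (all invented).
revenue = 4_000_000.0
employees = 40
total_budget = 2_000_000.0
improvement_costs = 500_000.0     # tools, coaching, events...
innovation_spend = 300_000.0
satisfied_employees = 30
users_on_latest, total_users = 700, 1_000

# Current value
income_per_employee = revenue / employees
product_cost_ratio = improvement_costs / revenue      # one plausible reading
employee_satisfaction = satisfied_employees / employees

# Innovation capacity
innovation_ratio = innovation_spend / total_budget
installed_version_index = users_on_latest / total_users

print(income_per_employee, product_cost_ratio, employee_satisfaction,
      innovation_ratio, installed_version_index)
```

Once these are computed per period, the same trend advice applies: plot them over time rather than judging a single snapshot.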
We have reviewed the types of metrics, tips for applying them and even a specific application strategy. Your company is perfectly designed for the results you are getting.
If you want to improve them, do not expect things to just happen, you have everything you need to design the system. Set your goals and start measuring now!