Saying Why: Persona-Based Metrics With User Stories
Those of us who have participated in large-scale development and acquisition of software-reliant systems have seen instances where, for any given environment, a metrics program is defined by a list of metrics that must be collected. The implication of such an approach is that these metrics must be produced for the program to proceed effectively and improve when needed. The promised improvements could be to the system, the way the system is being developed, or the way development is being managed.
Such a list often tells program participants which metrics are required but offers little or no indication of why those metrics are needed. When there is any degree of organizational mistrust, the list of metrics can generate fears such as, “If I don’t show good performance on these metrics, my budget will be cut or work will be taken away from my part of the organization.” In this blog post, we discuss how reframing metrics as user stories can improve their relevance and utility while mitigating fear.
Why User Stories?
In many cases, metrics programs foster an excessive formalism that places emphasis on the representation of information rather than on the information itself. A top-down, deterministic specification of the graphs or other depictions of data that the metrics program requires, mandated by program management, can distract participants from the potentially useful information that the metrics reveal.
The exercise becomes, for example, “I need to populate the big Excel graph for management” rather than “I need to learn about how things are going.” After the requirement to populate the graph is met, the willingness to identify other ways of analyzing data seems to diminish, and the opportunity to look at data creatively from multiple points of view can be lost. Discussions about the required delivery of metrics can focus too much on the data to be fed to a decision maker, as if those participating in the discussion are outside of the decision process and have no real stake in it.
In reality, different attributes of the same phenomenon can have different levels of importance over time and different significance, depending on the role of the person observing them. In particular, production of metrics as a bureaucratic, box-checking exercise constrains the ability to observe these attributes from multiple points of view or even seek out different data.
Metrics programs need greater engagement and buy-in from those who participate in them. User stories can help; they put development in the context of the person who is using the system and naturally force a conversation about the why.
Applying User Stories in Metrics Programs
With the advent of Agile development, one advancement in the statement of requirements was the introduction of user stories. User stories not only define what the system should do, but also define who wants a particular function and, importantly, why they want that function. This definition is usually expressed as
As a <role> I need the system to <perform some action> so that <I achieve some goal>.
The extra information, the who (role) and the why (goal), provides developers with deeper insight into the desired functionality. The analogy to metrics is clear: We can extend the definition of each metric with the person who wants the metric and their intended purpose. We suggest that metrics should be expressed as
As a <role> I need <this measurement> so that <I achieve some goal (e.g., inform a decision)>.
When cast as a user story, a metric becomes an expression of the context in which the person who is going to use the measurement system must operate and what their priorities are. A user story focused on the persona of someone who is going to consume metrics can beget many different implementations, but it naturally invites engagement of the user in the search for solutions.
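To make the template concrete, here is a minimal sketch of how a metrics story might be captured as structured data so that the role, the measurement, and the goal always travel together. The class, field names, and sample values below are illustrative assumptions on our part, not tooling or a method prescribed in this post.

```python
from dataclasses import dataclass


@dataclass
class MetricStory:
    """A persona-based metric expressed as a user story:
    'As a <role> I need <measurement> so that <goal>.'"""
    role: str         # who needs the metric
    measurement: str  # what must be measured
    goal: str         # the decision or outcome the measurement informs

    def as_story(self) -> str:
        return f"As a {self.role} I need {self.measurement} so that {self.goal}."


# Illustrative example; the role, measurement, and goal are hypothetical.
coverage_story = MetricStory(
    role="software developer",
    measurement="statement coverage reported by the unit-test suite",
    goal="I can decide where to add tests before the next release",
)
print(coverage_story.as_story())
```

Keeping the three parts in one record makes it easy to answer, for any metric in the program, who asked for it and why.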
Let’s consider some of the advantages of such an expression:
- Every metric has a consumer and a purpose. No longer are we collecting metrics just because we’ve always collected them, or merely because someone in authority says we must. Moreover, the story explains to newcomers to the organization why a metric is being collected. This is particularly important in organizations with high turnover.
- It recognizes that not every role in an organization needs every metric. For example, a software developer may be interested in code coverage provided by the test suite as may the testers and other quality engineers. Such a metric, however, is typically of lesser interest to program-management personnel, who are more likely to be concerned with progress to plan, cycle times, and defect counts.
- It provides everyone with a stated usage of the metric and deeper insight into the metric’s desired function. If the metrics are used in the stated fashion, that transparency dispels the fear created when people don’t know why a metric is being collected. The converse is also true: if a metric is used in some unstated manner, people’s trust in the organization will be eroded.
- It allows for tuning the metrics program over time, since people in the various roles can ask for information not currently provided; the story format gives them a logical and clear way to express the need for a new metric. Similarly, those people can state that a given metric is not helpful to them, in which case it can be dropped.
There is, however, one crucial difference between user stories and our metrics stories. The former represent a piece of functionality to be built, and the stories can be closed once the functionality has been developed. The latter represent an ongoing activity in the management of the program and will only be closed when it is determined that there is no longer a need for the metric. The consequences of this difference are that we do not expect metrics stories to appear in the backlog (though there may be an enabling story to implement data collection) and that metrics stories will not appear in burndown/burnup charts.
Goals and Needs
The idea of using goals to inform a metrics program is far from new. The Goal/Question/Metric (GQM) approach was documented in the mid-1980s and remains popular. After appropriate planning, the first session is typically a formal brainstorming workshop in which goals for the metrics program are elicited. These goals are generally high level, but they can also act as a constraint, since all subsequently derived questions and metrics must tie back to the original goals. In contrast, casting metrics as user stories in which the goal is part of the story allows people at all levels of an organization to express their needs. Should a list of the goals of the metrics program be a necessary artifact, affinity analysis, coupled with abstraction of the individual stories, can be used to express high-level goals.
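If a consolidated list of program goals is needed, one lightweight way to approximate that affinity step is to group the collected stories by their stated goals and treat each group’s shared goal as a candidate high-level goal. The sketch below is our illustrative assumption of that grouping; the roles, measurements, and goal phrasings are hypothetical, and real affinity analysis relies on human judgment and abstraction rather than exact string matching.

```python
from collections import defaultdict

# Hypothetical metric stories already collected as (role, measurement, goal).
stories = [
    ("software developer", "unit-test code coverage", "improve product quality"),
    ("tester", "escaped defect counts", "improve product quality"),
    ("program manager", "feature cycle time", "improve delivery predictability"),
    ("program manager", "progress against the release plan", "improve delivery predictability"),
]

# Affinity step: group stories by their stated goal; each group's shared goal
# becomes a candidate high-level goal of the metrics program.
groups = defaultdict(list)
for role, measurement, goal in stories:
    groups[goal].append((role, measurement))

for goal, members in groups.items():
    print(f"High-level goal: {goal}")
    for role, measurement in members:
        print(f"  - {role}: {measurement}")
```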
When emergent needs for metrics are encountered, it can be hard to accommodate them in a strict top-down framework that ultimately maps all questions and metrics to a goal network tracing to the highest abstractions of the organizational purpose. While logical connections to the ambitions of the division, enterprise, and market segment are legitimate, the intellectual effort to document them can strain local decision makers, who are asked to answer to a model of performance in which they play only a small part. Rather than insisting on a potentially academic exercise of mapping to branch-, division-, and corporate-level goal statements, employing user stories to specify data, decisions, and the roles that need them contributes to a shorter cycle of validation and implementation for metrics.
Likewise, the focus on individual personas helps address the challenge of motivating ownership of the work of collecting and using metrics. The needs of these personas operate within a larger framework of performance that can be understood at different levels of abstraction (in the organization, in the product architecture, and in the timeline of a given product line). Allegiance to those larger abstractions, however, is not the primary source of legitimacy for the metrics specified. One of the biggest challenges we’ve observed in practice is the view that metrics are “things we have to collect to appease someone else,” where that other role is typically above or outside the sphere of control of the person collecting and reporting the metrics.
The legitimacy of the role and the decisions required to get the job done are more directly validated by persona-based metrics specified with user stories. The elegance of the mapping among goals, questions, and metrics should no longer be the primary basis for judging how correct the metrics are. Simply stated, the metrics must serve the decision maker at hand. Many will legitimately argue that the GQM paradigm was always intended to achieve this outcome, since the goals and questions must surely come from the same motivations reflected in personas and the decisions they make. In practice, however, when an external facilitator organizes a workshop, the top-down focus is often the primary source of legitimacy for the outcomes, and the steps followed to establish the set of metrics are often optimized from the top-down perspective.
Caveats, Cautions, and Potential Pitfalls
We have worked with programs to implement the persona-based user stories for metrics that we describe in this post, and we offer here some caveats and cautions based on our experience:
- Not all information needs identified in user stories can easily or profitably be converted to collectible metrics. The template, I need x to achieve y, tends in practice to elicit a large number of general questions such as, “Who knows about this thing that I need?” Such a question is not easily converted to a metric. To keep the exercise of eliciting user stories manageable and useful, the program should maintain a strict focus on metrics so that the exercise is not overwhelmed by the expression of unfocused and unrelated needs. A facilitator who can skillfully identify truly important needs that can be expressed as quantifiable metrics is helpful here.
- Not every combination of roles, information needs, and decisions begets something useful. Without strong facilitation, the generation of user stories can become just a theoretical exercise and wander off into territory that no one cares about.
- In time- and resource-constrained environments, it can be challenging to get people to own problems. We have seen people resist enthusiastic participation in the exercise of generating user stories because they fear that doing so will add to their already daunting workloads and that they lack the time to do the additional work the exercise might require. We believe that including the role as part of the user stories makes it easier for people to express the metrics that they actually care about, rather than having to care about the entire metrics program. If it turns out that a metric they choose is not helpful, the Agile mindset of adjusting what we do based on learning should help them get over the hurdle of admitting that the first iteration didn’t work out and trying a new metric that suits their needs.
The Why of a Metric
We are currently using persona-based metrics with user stories in a number of government programs, and feedback so far has been positive. One of the participants, when using this approach, commented, “We so often forget the ‘why’ of a metric.” Moreover, in contrast to a typical GQM session (which can be exhausting for the participants), eliciting user stories allows the group to meet for short periods, harness its energy, and then disperse before fatigue sets in.
Our experience to date has been that writing metrics as user stories is an effective way to collect the metrics needs of the various members of the program offices in order to shape the overall metrics program. Likewise, it enhances communication among different branches of a large program, thereby promoting rapid dissemination of good ideas. Anecdotally, one person, on seeing the who and why of a metric that another person was consuming, said, “I want that too.”
Additional Resources
Read the SEI blog post, Agile Metrics: Assessing Progress to Plans.
Read the SEI blog post, Agile Metrics: Seven Categories.
Read the SEI blog post, Agile Metrics: A New Approach to Oversight.
Read other SEI blog posts about measurement and analysis.
Watch the SEI webinar, Three Secrets to Successful Agile Metrics.
Read other SEI blog posts about Agile.