Month: September 2018


Meaning and metrics for innovation


Metrics are at the heart of evaluating the impact and value of products and services, yet they are rarely straightforward. Deciding what makes a good metric requires first thinking about what a metric means.

I recently read a story on what makes a good metric from Chris Moran, Editor of Strategic Projects at The Guardian. Chris's work is about building, engaging, and retaining audiences online, so he spends a lot of time thinking about metrics and what they mean.

Chris, with support from many others, outlines the five characteristics of a good metric as:

  1. Relevant
  2. Measurable
  3. Actionable
  4. Reliable
  5. Readable (less likely to be misunderstood)

(I also liked that he pointed to additional criteria that didn't quite make the cut but, as he suggests, could.)

This list was developed in the context of communications initiatives, which is exactly the point we need to consider: context matters when it comes to metrics. Context is also holistic, so we need to consider these five criteria (plus, perhaps, the others) as a whole if we're to develop and deploy metrics and interpret the data they produce.

As John Hagel puts it: we are moving from the industrial age, where standardized metrics and scale dominated, to the contextual age.

Sensemaking and metrics

Innovation is entirely context-dependent. A new iPhone might not mean much to someone who already has one, but could be transformative for someone who has never held that computing power in their hand. Home visits by a doctor or healer were once the only way people were treated for sickness (and still are in some parts of the world); now home visits are novel and represent an innovation in many areas of Western healthcare.

Demographic characteristics are one area where sensemaking is critical when it comes to metrics and measures. Sensemaking is literally the process of making sense of something within a specific context. It's used when there is no standard or obvious means of understanding the meaning of something at the outset; instead, meaning is made through investigation, reflection, and other data. It is a process that involves asking questions about value, and value is at the core of innovation.

For example, identity questions about race, sexual orientation, gender, and place of origin all require intense sensemaking before, during, and after use. Asking these questions forces us to consider: what is the value of knowing any of this?

How is a metric useful without an understanding of the value it is meant to reflect?

What we've seen from population research is that failure to ask these questions has left many at the margins without a voice: their experience isn't captured in the data used to make policy decisions. We've seen the opposite when we do ask these questions unwisely: strange claims about associations, over-generalizations, and stereotypes formed from data that somehow 'links' certain characteristics to behaviours without critical thought. Either way, we create policies that exclude because we have data.

The lesson we learn from behavioural science is that, if you have enough data, you can pretty much connect anything to anything. Therefore, we need to be very careful about what we collect data on and what metrics we use.
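The "connect anything to anything" problem is easy to demonstrate with a quick simulation. The sketch below (a hypothetical illustration, not from any real dataset) generates a purely random outcome and a large pool of purely random candidate metrics; simply by searching across enough candidates, a seemingly "strong" correlation appears in pure noise.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_samples, n_metrics = 30, 1000

# A purely random "outcome" and 1,000 purely random candidate "metrics":
# by construction, none of them has any real relationship to the outcome.
outcome = rng.normal(size=n_samples)
metrics = rng.normal(size=(n_samples, n_metrics))

# Pearson correlation of each candidate metric with the outcome.
corrs = np.array([np.corrcoef(outcome, metrics[:, i])[0, 1]
                  for i in range(n_metrics)])

best = np.abs(corrs).max()
print(f"Strongest correlation found in pure noise: r = {best:.2f}")
```

With 30 observations and 1,000 candidate metrics, the best correlation found is typically well above 0.5, even though every relationship here is fabricated by chance. Without a theory of change, nothing stops us from reporting that number as a finding.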

The role of theory of change and theory of stage

One reason for these strange associations (or their absence) is the lack of a theory of change to explain why any of these variables ought to play a role in explaining what happens. A proper theory of change provides a rationale for why something should lead to something else and what might come from it all. It is anchored in data, evidence, theory, and design (which ties it all together).

Metrics are the means by which we assess the fit of a theory of change. What often gets missed is that fit is also a matter of timing: some metrics fit better at different points in an innovation's development.

For example, a particular metric might be more useful in later-stage research, where there is an established base of knowledge (e.g., when an innovation is mature), than in the early formation of an idea. The proof-of-concept stage (i.e., 'can this idea work?') is very different from the 'can this scale?' stage. To that end, metrics need to be matched with something akin to a theory of stage, which would help explain how an innovation develops at the early stages versus later ones.

Metrics are useful. But using metrics blindly, or using the wrong ones, can cause harm that may itself be unmeasurable without proper thinking about what metrics do, what they represent, and which ones to use.

Choose wisely.

Photo by Miguel A. Amutio on Unsplash


Understanding Value in Evaluation & Innovation


Value is literally at the root of the word evaluation, yet it is scarcely mentioned in conversations about innovation and evaluation. It's time to consider what value really means for innovation and how evaluation provides answers.

Design can be thought of as the discipline (the theory, science, and practice) of innovation. Thus, understanding the value of design is partly about understanding the valuation of innovation. At the root of evaluation is the concept of value. One of the most widely used definitions of evaluation (pdf) is that it is about merit, worth, and significance, with worth serving as a stand-in for value.

The connection between worth and value in design was discussed in a recent article by Jon Kolko of Modernist Studio. He starts from the premise that many designers conceive of value as the price people will pay for something, and points to the dominant orthodoxy in SaaS applications "where customers can choose between a Good, Better, and Best pricing model. The archetypical columns with checkboxes shows that as you increase spending, you 'get more stuff.'"

Kolko goes on to take a systems perspective on the issue, noting that much of the value created through design is not piecemeal but aggregated into the experience of whole products and services, and not easily divisible into component parts. Value as a function of cost or price breaks down when we treat our communities, customers, and clients as mere commodities that can be bought and sold.

Kolko ends his article with this comment on design value:

Design value is a new idea, and we’re still learning what it means. It’s all of these things described here: it’s cost, features, functions, problem solving, and self-expression. Without a framework for creating value in the context of these parameters, we’re shooting in the dark. It’s time for a multi-faceted strategy of strategy: a way to understand value from a multitude of perspectives, and to offer products and services that support emotions, not just utility, across the value chain.

Talking value

It's strange that value is so under-discussed in design, given that creating value is one of its central tenets. Equally perplexing is how little value is discussed as part of the process of creating things or in their final, designed form. And since design is really the discipline of innovation, which is the intentional creation of value through something new, evaluation is an important concept in understanding design value.

One of the big questions professional designers wrestle with at the start of any engagement with a client is: “What are you hiring [your product, service, or experience] to do?”

What evaluators ask is: “Did your [product, service, or experience (PSE)] do what you hired it to do?”

“To what extent did your PSE do what you hired it to do?”

“Did your PSE operate as it was expected to?”

“What else did your PSE do that was unexpected?”

“What lessons can we learn from your PSE development that can inform other initiatives and build your capacity for innovation as an organization?”

In short, evaluation is about asking: “What value does your PSE provide and for whom and under what context?”

Value creation, redefined

Without asking the questions above, how do we know value was created at all? Without evaluation, there is no means of claiming that value was generated by a PSE, that expectations were met, or that what was designed was even implemented.

By asking questions about value and how we come to know it, innovators are better positioned to design PSEs that generate value for their users, customers, clients, and communities, as well as their organizations, shareholders, funders, and leaders. This redefinition of value as an active concept gives us the opportunity to see value in new places and not waste it.

Image Credit: Value Unused = Waste by Kevin Krejci adapted under Creative Commons 2.0 License via Flickr

Note: If you're looking to hire evaluation expertise to build your innovation capacity, contact us at Cense. That's what we do.