Category: research, evaluation

Meaning and metrics for innovation


Metrics are at the heart of evaluating impact and value in products and services, though they are rarely straightforward. Deciding what makes a good metric first requires some thinking about what a metric means.

I recently read a story on what makes a good metric from Chris Moran, Editor of Strategic Projects at The Guardian. Chris’s work is about building, engaging, and retaining audiences online, so he spends a lot of time thinking about metrics and what they mean.

Chris, with support from many others, outlines the five characteristics of a good metric as being:

  1. Relevant
  2. Measurable
  3. Actionable
  4. Reliable
  5. Readable (less likely to be misunderstood)

(What I liked was that he also pointed to additional criteria that didn’t quite make the cut but, as he suggests, could).
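To make the holistic use of these criteria concrete, here is a small illustrative sketch. The candidate metrics and their 0–2 ratings are invented for illustration (they are not from Chris's article); the point is that a metric is scored on all five criteria together rather than on any one in isolation:

```python
# Illustrative only: the candidate metrics and 0-2 ratings below are invented,
# not drawn from Chris Moran's article.
CRITERIA = ["relevant", "measurable", "actionable", "reliable", "readable"]

def score_metric(ratings):
    """Sum a metric's 0-2 rating on each of the five criteria."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        # A metric can't be judged holistically with a criterion unrated.
        raise ValueError(f"unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA)

candidates = {
    "page views": {
        "relevant": 1, "measurable": 2, "actionable": 0,
        "reliable": 2, "readable": 2,
    },
    "returning readers": {
        "relevant": 2, "measurable": 2, "actionable": 2,
        "reliable": 1, "readable": 2,
    },
}

# Rank candidate metrics by their overall score across all five criteria.
ranked = sorted(candidates, key=lambda m: score_metric(candidates[m]), reverse=True)
print(ranked)
```

A metric that is easy to measure but impossible to act on (like raw page views in this toy example) loses out to one that scores reasonably on every criterion at once.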

This list was developed in the context of communications initiatives, which is exactly the point we need to consider: context matters when it comes to metrics. Context is also holistic, so we need to consider these five criteria (plus the others?) as a whole if we’re to develop, deploy, and interpret data from these metrics.

As John Hagel puts it: we are moving from the industrial age, where standardized metrics and scale dominated, to the contextual age.

Sensemaking and metrics

Innovation is entirely context-dependent. A new iPhone might not mean much to someone who has had one but could be transformative to someone who has never had that computing power in their hand. Home visits by a doctor or healer were once the only way people were treated for sickness (and still are in some parts of the world); now home visits are novel and represent an innovation in many areas of Western healthcare.

Demographic characteristics are one area where sensemaking is critical when it comes to metrics and measures. Sensemaking is literally the process of making sense of something within a specific context. It’s used when there are no standard or obvious means of understanding something’s meaning at the outset; rather, meaning is made through investigation, reflection, and other data. It is a process that involves asking questions about value, and value is at the core of innovation.

For example, identity questions on race, sexual orientation, gender, and place of origin all require intense sensemaking before, during, and after use. Asking these questions gets us to consider: what value is it to know any of this?

How is a metric useful without an understanding of the value it is meant to reflect?

What we’ve seen from population research is that failure to ask these questions has left many at the margins without a voice: their experience isn’t captured in the data used to make policy decisions. We’ve also seen what happens when we ask these questions unwisely: strange claims about associations, over-generalizations, and stereotypes formed from data that somehow ‘links’ certain characteristics to behaviours without critical thought. We create policies that exclude because we have data.

The lesson we learn from behavioural science is that, if you have enough data, you can connect pretty much anything to anything. Therefore, we need to be very careful about what we collect data on and which metrics we use.
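A toy simulation makes this concrete. Using nothing but a random-number generator (all numbers here are pure noise; none of the "variables" has any real relationship to the outcome), screening enough candidate variables will still turn up something that correlates with the outcome by chance alone:

```python
import random

random.seed(42)

n_people = 50       # sample size
n_variables = 200   # candidate "characteristics" to screen

# A purely random outcome with no real drivers at all.
outcome = [random.gauss(0, 1) for _ in range(n_people)]

def correlation(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Screen many random variables and keep the strongest correlate.
best = max(
    abs(correlation([random.gauss(0, 1) for _ in range(n_people)], outcome))
    for _ in range(n_variables)
)

print(f"Strongest 'association' among {n_variables} random variables: {best:.2f}")
```

With 200 purely random variables and only 50 "people", the strongest chance correlation will typically look impressively large, which is exactly why collecting more data without a theory behind it invites spurious findings.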

The role of theory of change and theory of stage

One reason for these strange associations (or their absence) is the lack of a theory of change to explain why any of these variables ought to play a role in explaining what happens. A good theory of change provides a rationale for why something should lead to something else and what might come from it all. It is anchored in data, evidence, theory, and design (which ties it together).

Metrics are the means by which we can assess the fit of a theory of change. What often gets missed is that fit is also time-dependent: some metrics fit better at different points in an innovation’s development.

For example, a particular metric might be more useful in later-stage research, where there is an established base of knowledge (e.g., when an innovation is mature), than in the early formation of an idea. The proof-of-concept stage (‘can this idea work?’) is very different from the ‘can this scale?’ stage. To that end, metrics need to be paired with something akin to a theory of stage, which would help explain how an innovation develops at the early stage versus later ones.

Metrics are useful. Blindly using metrics, or using the wrong ones, can be harmful in ways that may be unmeasurable without proper thinking about what they do, what they represent, and which ones to use.

Choose wisely.

Photo by Miguel A. Amutio on Unsplash

Category: evaluation, innovation

Understanding Value in Evaluation & Innovation


Value is literally at the root of the word evaluation, yet it is scarcely mentioned in conversations about innovation and evaluation. It’s time to consider what value really means for innovation and how evaluation provides answers.

Design can be thought of as the discipline (the theory, science, and practice) of innovation. Thus, understanding the value of design is partly about understanding the valuation of innovation. At the root of evaluation is the concept of value. One of the most widely used definitions of evaluation is that it is about merit, worth, and significance, with worth being a stand-in for value.

The connection between worth and value in design was discussed in a recent article by Jon Kolko of Modernist Studio. He starts from the premise that many designers conceive of value as the price people will pay for something and points to the dominant orthodoxy in SaaS applications, “where customers can choose between a Good, Better, and Best pricing model. The archetypical columns with checkboxes show that as you increase spending, you ‘get more stuff.’”

Kolko goes on to take a systems perspective, noting that much of the value created through design is not piecemeal but aggregated into the experience of whole products and services, and not easily divisible into component parts. Value as a function of cost or price breaks down when we treat our communities, customers, and clients as mere commodities that can be bought and sold.

Kolko ends his article with this comment on design value:

Design value is a new idea, and we’re still learning what it means. It’s all of these things described here: it’s cost, features, functions, problem solving, and self-expression. Without a framework for creating value in the context of these parameters, we’re shooting in the dark. It’s time for a multi-faceted strategy of strategy: a way to understand value from a multitude of perspectives, and to offer products and services that support emotions, not just utility, across the value chain.

Talking value

It’s strange that the matter of value is so under-discussed in design given that creating value is one of its central tenets. Equally perplexing is how little value is discussed as part of the process of creating things or in their final designed form. And since design is really the discipline of innovation, which is the intentional creation of value through something new, evaluation is an important concept in understanding design value.

One of the big questions professional designers wrestle with at the start of any engagement with a client is: “What are you hiring [your product, service, or experience] to do?”

What evaluators ask is: “Did your [product, service, or experience (PSE)] do what you hired it to do?”

“To what extent did your PSE do what you hired it to do?”

“Did your PSE operate as it was expected to?”

“What else did your PSE do that was unexpected?”

“What lessons can we learn from your PSE development that can inform other initiatives and build your capacity for innovation as an organization?”

In short, evaluation is about asking: “What value does your PSE provide and for whom and under what context?”

Value creation, redefined

Without asking the questions above, how do we know value was created at all? Without evaluation, there is no way to claim that value was generated by a PSE, that expectations were met, or even that what was designed was implemented at all.

By asking questions about value and how we can know more about it, innovators are better positioned to design PSEs that are value-generating for their users, customers, clients, and communities as well as their organizations, shareholders, funders, and leaders. This redefinition of value as an active concept gives us the opportunity to see value in new places and not waste it.

Image Credit: Value Unused = Waste by Kevin Krejci adapted under Creative Commons 2.0 License via Flickr

Note: If you’re looking to hire evaluation expertise to better your innovation capacity, contact us at Cense. That’s what we do.

Category: complexity, evaluation, social innovation

Developmental Evaluation’s Traps


Developmental evaluation holds promise for product and service designers looking to understand the process, outcomes, and strategies of innovation and to link them to effects. The great promise of DE is also the reason to be most wary of it, and of the traps set for those who are unaware.

Developmental evaluation (DE), when used to support innovation, is about weaving design with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making process that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

This evidence might include the kind of information that helps an organization know when to scale up effort, change direction (“pivot”), or abandon a strategy altogether.

Powerful stuff.

Except, it can also be a trap.

It’s a Trap!

Star Wars fans will recognize the phrase “It’s a Trap!” as one of special — and much parodied — significance. Much like the Rebel fleet’s jeopardized quest to destroy the Death Star in Return of the Jedi, embarking on a DE is no easy or simple task.

DE was developed by Michael Quinn Patton and others working in the social innovation sector in response to the needs of programs operating in areas of high volatility, uncertainty, complexity, and ambiguity, helping them function better within this environment through evaluation. This meant providing useful data that recognized the context and supported strategic decision-making with rigorous evaluation, rather than using tools ill-suited for complexity and simply doing the ‘wrong thing righter’.

The following are some of the ‘traps’ I’ve seen organizations fall into when approaching DE. A parallel set of posts exploring the practicalities of these traps is going up on the Cense site, along with tips and tools to avoid and navigate them.

A trap is something that is usually camouflaged and employs some type of lure to draw people into it. It is, by its nature, deceptive and intended to ensnare those that come into it. By knowing what the traps are and what to look for, you might just avoid falling into them.

A different approach, same resourcing

A major trap is going into a DE thinking that it is just another type of evaluation and thus requires the same resources one might put toward a standard evaluation. Wrong.

DE most often requires more resources to design and manage than a standard program evaluation, for many reasons. One of the most important is that DE is about evaluation + strategy + design (the emphasis is on the ‘+’s). A DE budget needs to account for the fact that three activities normally treated separately are now coming together. The costs may not necessarily be higher (they often are), but the work required will span multiple budget lines.

This also means that operationally one cannot simply have an evaluator, a strategist, and a program designer work separately. There must be some collaboration and time spent interacting for DE to be useful. That requires coordination costs.

Another big issue is that DE data can be ‘fuzzy’ or ambiguous — even if collected with a strong design and method — because the innovation activity usually has to be contextualized. Further complicating things is that the DE datastream is bidirectional. DE data comes from the program products and process as well as the strategic decision-making and design choices. This mutually influencing process generates more data, but also requires sensemaking to sort through and understand what the data means in the context of its use.

The biggest resource that gets missed? Time. This means not giving enough time to have the conversations about the data to make sense of its meaning. Setting aside regular time at intervals appropriate to the problem context is a must and too often organizations don’t budget this in.

The second? Focus. While a DE approach can capture an enormous wealth of data about the process, outcomes, strategic choices, and design innovations there is a need to temper the amount collected. More is not always better. More can be a sign of a lack of focus and lead organizations to collect data for data’s sake, not for a strategic purpose. If you don’t have a strategic intent, more data isn’t going to help.

The pivot problem

The term pivot comes from the Lean Startup approach and is found in Agile and other product development systems that rely on short-burst, iterative cycles with accompanying feedback. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. Sounds good, right?

It is, except when the results aren’t well-grounded in data. DE has given organizations cover for making arbitrary decisions in the name of pivoting when they really haven’t executed well or given things enough time to determine whether a change of direction is warranted. I once heard an educator explain how good his team was at pivoting their strategy for training their clients and students; they were taking a developmental approach to the course (because it was on complexity and social innovation). Yet I knew that the team, a group of highly skilled educators, hadn’t spent nearly enough time coordinating and planning the course.

There is a difference between a presenter who puts something into a presentation at the last minute to capitalize on what emerged from the situation and add to its quality, and someone who has not put time and thought into what they are doing and is rushing at the last minute. One is a pivot in service of excellence; the other is a failure to execute. The trap is confusing the two.

Fearing success

“If you can’t get over your fear of the stuff that’s working, then I think you need to give up and do something else” – Seth Godin

A truly successful innovation changes things — mindsets, workflows, systems, and outcomes. Innovation affects the things it touches in ways that might not be foreseen. It also means recognizing that things will have to change in order to accommodate the success of whatever innovation you develop. But change can be hard to adjust to even when it is what you wanted.

It’s a strange truth that many non-profits are designed to put themselves out of business. If there were no more political injustices or human rights violations around the world, there would be no Amnesty International. The World Wildlife Fund or Greenpeace wouldn’t exist if the natural world were deemed safe and protected. Conversely, there are no prominent NGOs devoted to eradicating polio anymore because we pretty much have… or did we?

Self-sabotage exists for many reasons including a discomfort with change (staying the same is easier than changing), preservation of status, and a variety of inter-personal, relational reasons as psychologist Ellen Hendrikson explains.

Seth Godin suggests you need to find something else if you’re afraid of success and that might work. I’d prefer that organizations do the kind of innovation therapy with themselves, engage in organizational mindfulness, and do the emotional, strategic, and reflective work to ensure they are prepared for success — as well as failure, which is a big part of the innovation journey.

DE is a strong tool for capturing success (in whatever form that takes) within the complexity of a situation and the trap is when the focus is on too many parts or ones that aren’t providing useful information. It’s not always possible to know this at the start, but there are things that can be done to hone things over time. As the saying goes: when everything is in focus, nothing is in focus.

Keeping the parking brake on

And you may win this war that’s coming
But would you tolerate the peace? – “This War” by Sting

You can’t drive far or well with your parking brake on. Likewise, if innovation is meant to change systems, you can’t keep the same thinking and structures in place and still expect to move forward. Developmental evaluation is not just for understanding your product or service; it’s also meant to inform the ways in which that entire process influences your organization. They are symbiotic: one affects the other.

Just as we might fear success, we may also fail to prepare for it (or tolerate it) when it comes. Success with one goal means having to set new goals; it changes the goalposts. It also means reframing what success means going forward. Sports teams face this problem in redefining their mission after winning a championship. The same is true for organizations.

This is why building a culture of innovation is so important with DE embedded within that culture. Innovation can’t be considered a ‘one-off’, rather it needs to be part of the fabric of the organization. If you set yourself up for change, real change, as a developmental organization, you’re more likely to be ready for the peace after the war is over as the lyric above asks.

Sealing the trap door

Learning, which is at the heart of DE, fails in bad systems. Preventing the traps discussed above requires building a developmental mindset within an organization alongside doing a DE. Without that mindset, it’s unlikely anyone will avoid falling into them. Change your mind, and you can change the world.

It’s a reminder of the need to put in the work to make change real, and that DE is not just plug-and-play. To quote Martin Luther King Jr.:

“Change does not roll in on the wheels of inevitability, but comes through continuous struggle. And so we must straighten our backs and work for our freedom. A man can’t ride you unless your back is bent.”

 

For more on how Developmental Evaluation can help you to innovate, contact Cense Ltd and let them show you what’s possible.  

Image credit: Author

Category: design thinking, psychology, research

Elevating Design & Design Thinking

 

Design thinking has brought the language of design into popular discourse across different fields, but its failings threaten to undermine the benefits it brings if they aren’t addressed. In this third post in a series, we look at how Design (and Design Thinking) can elevate itself above its failings and match the hype with real impact.

In two previous posts, I called on ‘design thinkers’ to get the practice out of its ‘bullshit’ phase, characterized by high levels of enthusiastic banter, hype, and promotion and little evidence, evaluation, or systematic practice.

Despite the criticism, it’s time for Design Thinking (and the field of Design more broadly) to be elevated beyond its current station. I’ve been critical of Design Thinking for years: its popularity has been helpful in some ways, problematic in others. Others have been critical, too. Bill Storage, writing in 2012 (in a post now unavailable), said:

Design Thinking is hopelessly contaminated. There’s too much sleaze in the field. Let’s bury it and get back to basics like good design.

Bruce Nussbaum, who helped popularize Design Thinking in the early 2000s, called it a ‘failed experiment’, seeking to promote the concept of Creative Intelligence instead. While many have called for Design Thinking to die, that’s not going to happen anytime soon. Since I first published a piece on Design Thinking’s problems five years ago, the practice has only grown. Design Thinking will continue to grow despite its failings, and that’s why it matters that we pay attention to it and seek to make it better.

Lack of quality control, standardization or documentation of methods, and evidence of impact are among the biggest problems facing Design Thinking if it is to achieve anything substantive beyond generating money for those promoting it.

Giving design away, better

It’s hard to imagine that the concepts of personality, psychosis, motivation, and performance measurement from psychology were once unknown to most people. Yet, before the 1980’s, much of the public’s understanding of psychology was confined to largely distorted beliefs about Freudian psychoanalysis, mental illness, and rat mazes. Psychology is now firmly ensconced in business, education, marketing, public policy, and many other professions and fields. Daniel Kahneman, a psychologist, won the Nobel Prize in Economics in 2002 for his work applying psychological and cognitive science to economic decision making.

The reason for this has much to do with George Miller who, as President of the American Psychological Association, used his position to advocate that professional psychology ‘give away’ its knowledge to ensure its benefits were more widespread. This included creating better means of communicating psychological concepts to non-psychologists and generating the kind of evidence that could show its benefits.

Design Thinking is at a stage where we are seeing similar broad adoption beyond professional design into these same fields of business, education, the military, and beyond. While there has been much debate about whether design thinking as practiced by non-designers (like MBAs) is good for the field as a whole, there is little debate that it has become popular, just as psychology did.

What psychology did poorly was give so much away that it failed to engage other disciplines enough to support quality adoption and promotion, while simultaneously weakening itself as newfound enthusiasts pursued training in those other disciplines. Now some of the best psychological practice is done by social workers, and the most relevant research comes from areas like organizational science and new ‘sub-disciplines’ such as behavioural economics.

Design Thinking is already being taught, promoted, and practiced by non-designers. What these non-designers often lack is the ‘crit’ and craft of design to elevate their designs. And what Design lacks is the evaluation, evidence, and transparency to elevate its work beyond itself.

So what next?


Elevating Design

As Design moves beyond its traditional realms of products and structures to services and systems (enabled partly by Design Thinking’s popularity) the implications are enormous — as are the dangers. Poorly thought-through designs have the potential to exacerbate problems rather than solve them.

Charles Eames knew this. He argued that innovation (which is what design is all about) should be a last resort and that it is the quality of the connections we create (among ideas, people, disciplines, and more) that determines what we produce and its impact on the world. Eames and his wife Ray deserve credit for contributing to the elevation of design practice through their myriad creations and their steadfast documentation of their work. The Eameses did not allow themselves to be confined by labels such as product designer, interior designer, or artist. They stretched their profession by applying craft, learning with others, and practicing what they preached in terms of interdisciplinarity.

It’s now time for another elevation moment. Designers can no longer be satisfied with client approval as the key criterion for success. Sustainability, social impact, and learning and adaptation through behaviour change are now criteria that many designers will need to embrace if they are to operate beyond the field’s traditional domains (as we now see more often). This requires that designers know how to evaluate and study their work. They need to communicate better with their clients on these issues, and they must make what they do more transparent. In short: designers need to give away design (and not just through a weekend design thinking seminar).

Not every designer must get a Ph.D. in behavioural science, but they will need to know something about that domain if they are to work on matters of social and service design, for example. Designers don’t have to become professional evaluators, but they will need to know how to document and measure what they do and what impact it has on those touched by their designs. Understanding research — that includes a basic understanding of statistics, quantitative and qualitative methods — is another area that requires shoring up.

Designers don’t need to become researchers, but they must have research or evaluation literacy. Just as it is becoming increasingly unacceptable that program designers from fields like public policy and administration, public health, social services, and medicine lack understanding of design principles, so is it no longer feasible for designers to be ignorant of proper research methods.

It’s not impossible. Clinical psychologists went from being mostly practitioners to scientist-practitioners. Professional social workers are now well-versed in research even if they typically focus on policy and practice. Elevating the field of Design means accepting that being an effective professional requires certain skills and research and evaluation are now part of that skill set.


Designing for elevated design

This doesn’t have to fall on designers to take up research — it can come from the very people who are attracted to Design Thinking. Psychologists, physicians, and organizational scientists (among others) all can provide the means to support designers in building their literacy in this area.

One option is adding research courses that go beyond ethnography and observation to give design students exposure to survey methods, secondary data analysis, ‘big data’, Grounded Theory approaches, and blended models for data collection. Bring behavioural and data scientists into the design curriculum (and get designers into the curricula training those professionals).

Create opportunities for designers to do research, publish, and present their research using the same ‘crit’ that they bring to their designs. Just as behavioural scientists expose themselves to peer review of their research, designers can do the same with their research. This is a golden opportunity for an exchange of ideas and skills between the design community and those in the program evaluation and research domains.

This last point is what the Design Loft initiative has sought to do. Now in its second year, the Design Loft is a training program aimed at exposing professional evaluators to design methods and tools. It’s not to train them as designers, but to increase their literacy and confidence in engaging with Design. The Design Loft can do the same thing with designers, training them in the methods and tools of evaluation. It’s but one example.

In an age where interdisciplinarity is spoken of frequently, this provides a practical means of doing it, and in a way that offers a chance to elevate design much as the Eameses did, as Milton Glaser did, and as George Miller did for psychology. The time is now.

If you are interested in learning more about the Design Loft initiative, connect with Cense. If you’re a professional evaluator attending the 2017 American Evaluation Association conference in Washington, the Design Loft will be held on Friday, November 10th.

Image Credit: Author

Category: design thinking, evaluation, innovation

Beyond Bullshit for Design Thinking


Design thinking is in its ‘bullshit’ phase: a time characterized by wild hype and popularity, and little evidence of what it does, how it does it, or whether it can deliver what it promises on a consistent basis. If design thinking is to be more than a fad, it needs to get serious about answering some important questions and go from bullshit to bullish in tackling important innovation problems. The time is now.

In a previous article, I described design thinking as being in its BS phase and argued that it was time for it to move on. Here, I articulate some things that can help get us there.

The title of that original piece was inspired by a recent talk by Pentagram partner Natasha Jen, in which she called out design thinking as “bullshit.” Design thinking offers much to those who haven’t been given, or taken, creative license in their work before. It’s offered organizations that never saw themselves as ‘innovative’ a means to generate products and services beyond the bounds of what they thought possible. While design thinking has inspired people worldwide (as evidenced by the thousands of resources, websites, meetups, courses, and discussions devoted to the topic), the extent of its impact is largely unknown, overstated, and most certainly oversold as it has become a marketable commodity.

The comments and reaction to my related post on LinkedIn from designers around the world suggest that many agree with me.

So now what? Design thinking, like many fads and technologies that fit the hype cycle, is beset with a problem of inflated expectations driven by optimism and the market forces that bring a lot of poorly-conceived, untested products supported by ill-prepared and sometimes unscrupulous actors into the marketplace. To invoke Natasha Jen: there’s a lot of bullshit out there.

But there is also promising stuff. How do we nurture the positive benefits of this overall approach to problem finding, framing and solving and fix the deficiencies, misconceptions, and mistakes to make it better?

Let’s look at a few things that have the potential to transform design thinking from an over-hyped trend to something that brings demonstrable value to enterprises.

Show the work


The journey from science to design is a lesson in culture shock. Science typically begins its journey toward problem-solving by looking at what has been done before, whereas a designer typically starts with what they know about materials and craft. Thus, an industrial designer may never have made a coffee mug before, but they know how to build things that meet clients’ desires within a set of constraints and so feel comfortable undertaking the job. This wouldn’t happen in science.

Design typically uses a simple criterion above all others to judge the outcomes of its work: is the client satisfied? So long as the time, budget, and other requirements are met, the key is ensuring that the client likes the product. Because this criterion is so heavily weighted toward the outcome, designers often have little need to capture or share how they arrived at it, just that they did. Designers may also be reluctant to share their process because it is their competitive advantage, so there is an industry-specific culture that keeps people from opening their work to scrutiny.

Science requires that researchers open up their methods, tools, observations, and analytical strategy for others to view. The entire notion of peer review — which has its own set of flaws — is predicated on the notion that other qualified professionals can see how a solution was derived and provide comment on it. Scientific peer review is typically geared toward encouraging replication; however, it also allows others to assess the reasonableness of the claims. This is the critical part of peer review that requires scientists to adhere to a certain set of standards and show their work.

As design moves into a more social realm, designing systems, services, and policies for populations for whom there is no single ‘client’ and many diverse users, the need to show the work becomes imperative. Showing the work also allows others to build on the method. For example, design thinking speaks of ‘prototyping’, yet without a clear sense of what is prototyped, how it is prototyped, how the value of the prototype is assessed, and what options were considered (or discarded) in developing it, it is impossible to tell whether this was really the best idea of many or simply the one deemed most feasible to try.

This might not matter for a coffee cup, but it matters a lot if you are designing a social housing plan, a transportation system, or a health service. Designers can borrow from scientists and become better at documenting what they do along the way, what ideas are generated (and dismissed), how decisions are made, and what creative avenues are explored along the route to a particular design choice. This not only improves accountability but increases the likelihood of better input and ‘crit’ from peers. The absence of ‘crit’ in design thinking is among the biggest ‘bullshit’ issues that Natasha Jen spoke of.
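One lightweight way to ‘show the work’ is to keep a structured decision log alongside the design process. The sketch below is purely illustrative; the fields, names, and example entry are my own invention, not part of any design-thinking standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DesignDecision:
    """One entry in a hypothetical design team's decision log."""
    when: date
    question: str                  # what was being decided
    options: list[str]             # alternatives considered
    chosen: str                    # the option selected
    rationale: str                 # why it won out
    discarded_because: dict[str, str] = field(default_factory=dict)

log: list[DesignDecision] = []
log.append(DesignDecision(
    when=date(2017, 10, 1),
    question="Which prototype of the intake form do we test first?",
    options=["paper mock-up", "clickable wireframe"],
    chosen="paper mock-up",
    rationale="Cheapest way to test the ordering of questions",
    discarded_because={"clickable wireframe": "too costly for a first pass"},
))

# A reviewer offering 'crit' can now trace what was considered and why.
for d in log:
    print(d.when, d.question, "->", d.chosen)
```

Even a record this minimal captures the options considered and discarded, which is exactly what makes external critique possible.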

Articulate the skillset and toolset

Creating2.jpeg

What does it take to do ‘design thinking’? The caricature is that of the Post-it Notes, Lego, and whiteboards. These are valuable tools, but so are markers, paper, computer modeling software, communication tools like Slack or Trello, cameras, stickers…just about anything that allows data, ideas, and insights to be captured, organized, visualized, and transformed.

Using these tools also takes skill (despite how simple they are).

Facilitation is a key design skill when working with people and human-focused programs and services. So is conflict resolution. The ability to negotiate, discuss, sense-make, and reflect within the context of a group, a deadline, and other constraints is critical for bringing a design to life. These skills are not just for designers, but they have to reside within a design team.

There are other skills related to shaping aesthetics, manufacturing, service design, communication, and visual representation that can all contribute to a great design team, and these need to be articulated as part of a design thinking process. Many ‘design thinkers’ will point to the ABC Nightline segment that aired in 1999, “The Deep Dive”, as their first exposure to ‘design thinking’. It is also what thrust the design firm IDEO, which more than any other single organization is credited with popularizing design thinking, into the spotlight.

What gets forgotten when people look at this program, in which designers created a shopping cart in just a few days, is that IDEO brought together a highly skilled interdisciplinary team that included engineers, business analysts, and a psychologist. Much of the design thinking advocacy work out there talks about ‘diversity’, but diversity matters only when a team has not just a range of perspectives but also the technical and scholarly expertise to make use of them. How often are design teams taking on human service programs aimed at changing behaviour without any behavioural scientists involved? How often are products created without any care for the aesthetics of the product because there wasn’t a graphic designer or artist on the team?

Does this matter if you’re using design thinking to shape the company holiday party? Probably not. Does it if you are shaping how to deliver healthcare to an underserved community? Yes.

Design thinking can require both general and specific skillsets and toolsets, and these are not generic.

Develop theory

DesignerUserManufacturer.jpg

A theory is not just the province of eggheaded nerds, or something you had to endure in your college courses on social science. It matters when it’s done well. Why? As Kurt Lewin, one of the most influential applied social psychologists of the 20th century, said: “There is nothing so practical as a good theory.”

A theory allows you to explain why something happens, how causal connections may form, and what the implications of specific actions are in the world. They are ideas, often grounded in evidence and other theories, about how things work. Good theories can guide what we do and help us focus what we need to pay attention to. They can be wrong or incomplete, but when done well a theory provides us the means to explain what happens and can happen. Without it, we are left trying to explain the outcomes of actions and have little recourse for repeating, correcting, or redesigning what we do because we have no idea why something happened. Rarely — in human systems — is evidence for cause-and-effect so clear cut without some theorizing.

Design thinking is not entirely without theory. Some scholars have pulled together evidence and theory to articulate ways to generate ideas and decision rules for focusing attention, and there are some well-documented examples for guiding prototype development. However, design thinking itself — like much of design — is not strong on theory. There isn’t a strong theoretical basis to ascertain why something produces an effect based on a particular social process, tool, or approach. As such, it’s hard to replicate such things or to determine where something succeeded and where improvements need to be made.

It’s also hard to explain why design thinking should be any better than anything else that aims to enkindle innovation. By developing theory, designers and design thinkers will be better equipped to advance its practice and guide the focus of evaluation. Further, it will help explain what design thinking does, can do, and why it might be suited (or ill-suited) to a particular problem set.

It also helps guide the development of research and evaluation scholarship that will build the evidence for design thinking.

Create and use evidence

Research2Narrow.jpg

Jeanne Liedtka and her colleagues at the Darden School of Business have been among the few to conduct systematic research into the use of design thinking and its impact. The early research suggests it offers benefit to companies and non-profits seeking to innovate. This is a start, but far more research is needed by more groups if we are to build a real corpus of knowledge to shape practice more fully. Liedtka’s work is setting the pace for where we can go and design thinkers owe her much thanks for getting things moving. It’s time for designers, researchers and their clients to join her.

Research typically begins with ‘ideal’ cases, where sufficient control makes influence and explanatory power more attainable. If programs are ill-defined, poorly resourced, focused on complex or dynamic problems, have no clear timeline for delivery or expected outcomes, and lack the resources or leadership to document the work that is done, it is difficult, if not impossible, to tell what kind of role design thinking plays amid myriad factors.

An increasing amount of design thinking — in education, international development, social innovation, and public policy, to name a few domains of practice — is applied in this environmental context. This is the messy area of life where research aimed at looking for linear cause-and-effect relationships and ‘proof’ falters, yet it’s also where the need for evidence is great. Researchers tend to avoid these contexts because the results are rarely clear, the study designs require much energy, money, talent, and sophistication, and the ability to publish findings in top-tier journals is all the more compromised as a result.

Despite this, there is enormous potential for qualitative, quantitative, mixed-method, and even simulation research that isn’t being conducted into design thinking. This is partly because designers aren’t trained in these methods, but also because (I suspect) there is a reluctance among many to open up design thinking to scrutiny. Like anything on the hype cycle, design thinking is a victim of over-inflated claims about what it does, but that doesn’t necessarily mean it isn’t offering a lot.

Design schools need to start training students in research methods beyond (in my opinion) the weak, simplistic approaches to ethnographic methods, surveys, and interviews currently on offer. If design thinking is to be taken seriously, it requires serious methodological training. Further, designers don’t need to be the most skilled researchers on the team: that’s what behavioural scientists bring. Bringing in the kind of expertise required to do the necessary work is important if design thinking is to grow beyond its ‘bullshit’ phase.

Evaluate impact

DesignSavingtheWorld

From Just Design by Christopher Simmons

Lastly, if we are going to claim that design is going to change the world, we need to back that up with evaluation data. Chances are decent that design thinking is changing the world, but maybe not in the ways we always think or hope, or in the quantity or quality we expect. Without evaluation, we simply don’t know.

Evaluation is about understanding how something operates in the world and what its impact is. Evaluators help articulate the value that something brings and can support innovators (design thinkers?) in making strategic decisions about what to do, when to do it, and how to allocate resources.

The only time evaluation was used in my professional design training was when I mentioned it in class. That’s it. Few design programs of any discipline offer exposure to the methods and approaches of evaluation, which is unfortunate. Until last year, professional evaluators weren’t much better, with most having limited exposure to design and design thinking.

That changed with the development of the Design Loft initiative that is now in its second year. The Design Loft was a pop-up conference designed and delivered by me (Cameron Norman) and co-developed with John Gargani, then President of the American Evaluation Association. The event provided a series of short-burst workshops on select design methods and tools as a means of orienting evaluators to design and how they might apply it to their work.

This is part of a larger effort to bring design and evaluation closer together. Design and design thinking offer an enormous amount of potential for creating innovations, and evaluation brings the tools to assess what kind of impact those innovations have.

Getting bullish on design

I’ve witnessed firsthand how design (and the design thinking approach) has inspired people who didn’t think of themselves as creative, innovative, or change-makers to do things that brought joy to their work. Design thinking can be transformative for those exposed to new ways of seeing problems, conceptualizing solutions, and building something. I’d hate to see that passion disappear.

That will happen once design thinking starts losing out to the next fad. Remember the lean methodology? How about Agile? Maybe the design sprint? These are distinct approaches, but share much in common with design thinking. Depending on who you talk to they might be the same thing. Blackbelts, unconferences, design jams, innovation labs, and beyond are all part of the hodgepodge of offerings competing for the attention of companies, governments, healthcare, and non-profits seeking to innovate.

What matters most is adding value. Whether this is through ‘design thinking’ or something else, what matters is that design — the creation of products, services, policies, and experiences that people value — is part of the innovation equation. It’s why I like the term ‘design thinking’ relative to others operating in the innovation development space simply because it acknowledges the practice of design in its name.

Designers can rightfully claim ‘design thinking’ as a concept that is, broadly defined, central to their work, though far from all of it. Working with the very groups that have taken the idea of design and applied it to business, education, and so many other sectors, it’s time for those with a stake in better design, and better thinking about what we design, to take design thinking beyond its bullshit phase and make it bullish about innovation.

For those interested in evaluation and design, check out the 2017 Design Loft micro-conference taking place on Friday, November 10th within the American Evaluation Association’s annual convention in Washington, DC. Look for additional events, training and support for design thinking, evaluation and strategy by following @CenseLtd on Twitter with updates about the Design Loft and visiting Cense online.

Image credits: Author. The ‘Design Will Save The World’ images were taken from the pages of Christopher Simmons’ book Just Design.

business, complexity, evaluation

A mindset for developmental evaluation

BoyWithGlasses.jpg

Developmental evaluation requires different ways of thinking about programs, people, contexts and the data that comes from all three. Without a change in how we think about these things, no method, tool, or approach will make an evaluation developmental or its results helpful to organizations seeking to innovate, adapt, grow, and sustain themselves. 

There is nothing particularly challenging about developmental evaluation (DE) from a technical standpoint: for the most part, a DE can be performed using the same methods for data collection as other evaluations. What sets DE apart is less the methods and tools than the thinking that goes into how those methods and tools are used. This includes the need to ensure that sensemaking is part of the data analysis plan, because it is almost certain that some, if not the majority, of the data collected will not have an obvious meaning or interpretation.

Without developmental thinking and sensemaking, a DE is just an evaluation with a different name

This is no small point: the failure of organizations to adopt a developmental mindset toward their programs and operations is (likely) the single biggest reason why DE often fails to live up to its promise in practice.

No child’s play

If you were to ask a five-year-old what they want to be when they grow up, you might hear answers like firefighter, princess, train engineer, chef, zookeeper, or astronaut. Some kids will grow up to become such things (or marry accordingly, for the few seeking to become princesses, or work for Disney), but most will not. They will become things like sales account managers, marketing directors, restaurant servers, software programmers, accountants, groundskeepers and more. While this is partly about having the opportunity to pursue a career in a certain field, it’s also about changing interests.

A five-year-old who wants to be a train engineer might seem pretty normal, but one who wants to be an accountant specializing in risk management in the environmental sector would be considered odd. Yet it’s perfectly reasonable to speak to a 35-year-old and find them excited about being in such a role.

Did the 35-year-old who wanted to be a firefighter at five but became an accountant fail? Are they a failed firefighter? Is the degree to which they fight fires in their present-day occupation a reasonable indicator of career success?

It’s perfectly reasonable to plan to be a princess when you’re five, but not if you’re 35 or 45 or 55 years old unless you’re currently dating a prince or in reasonable proximity to one. What is developmentally appropriate for a five-year-old is not for someone seven times that age.

Further, is a 35-year-old a seven-times better five-year-old? When you’re ten, are you twice the person you were at five? Why is it OK to praise a toddler for sharing, for not biting or slapping their peers, and for eating all their vegetables, but weird to do the same with someone in good mental health in their forties or fifties?

It has to do with a developmental mindset.

Charting evolutionary pathways

We know that as people develop through stages, ages and situations the knowledge, interests, and capacities that a person has will change. We might be the same person and also a different person than the one we were ten years ago. The reason is that we evolve and develop as a person based on a set of experiences, genetics, interests, and opportunities that we encounter. While there are forces that constrain these adaptations (e.g., economics, education, social mobility, availability of and access to local resources), we still evolve over time.

DE is about creating the data structures and processes to understand this evolution as it pertains to programs and services, and to help guide meaningful designs for evolution. DE is a tool for charting evolutionary pathways and for documenting changes over time. Just as parents put marks on the wall to chart a child’s growth, take pictures at school, or write in a journal, a DE does much the same thing (even with similar tools).
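To make the ‘marks on the wall’ analogy concrete, the data structure behind a DE can be as simple as timestamped snapshots of a program, so that change can be charted over time rather than judged against a single fixed target. This is only an illustrative sketch; the indicator names and snapshot format are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Snapshot:
    """A timestamped observation of a program: a growth mark on the wall."""
    when: date
    indicators: dict[str, float]   # whatever is worth watching at this stage
    notes: str                     # qualitative context for sensemaking

# What gets measured can itself evolve between snapshots.
history = [
    Snapshot(date(2017, 1, 15), {"participants": 40, "referrals": 3}, "pilot cohort"),
    Snapshot(date(2017, 6, 15), {"participants": 55, "partner_orgs": 2}, "new partners joined"),
]

def change_over_time(history, indicator):
    """Chart how one indicator evolved; None where it wasn't tracked yet."""
    return [(s.when, s.indicators.get(indicator)) for s in history]

print(change_over_time(history, "participants"))
```

Note that the two snapshots track different indicators: in a developmental mindset, what is worth measuring is allowed to change as the program evolves, and the notes field preserves the context needed for later sensemaking.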

As anyone with kids will tell you, there are only a handful of decisions a parent can make that have sure-fire, predictable outcomes. Many of them are arrived at through trial and error, and some that work when a child is four won’t work when the child is four and five months. Some decisions will yield outcomes that approximate an expected outcome and some will generate entirely unexpected outcomes (positive and negative). A good parent is one who pays attention to the rhythms, flows, and contexts that surround their child and themselves, with the effort to be mindful, caring, and compassionate along the way.

This results in no clear, specific prototype for a good parent that can reliably be matched to any kid, nor any highly specific, predictable means of determining who is going to be a successful, healthy person. Still, many of us manage to have kids we can be proud of, careers we like, friendships we cherish, and intimate relationships that bring joy, despite no means of predicting how any of those will go with consistency. We do this all the time because we approach our lives and those of our kids with a developmental mindset.

Programs as living systems

DE is at its best a tool for designing for living systems. It is about discerning what is evolving (and at what rate/s) and what is static within a system and recognizing that the two conditions can co-exist. It’s the reason why many conventional evaluation methods still work within a DE context. It’s also the reason why conventional thinking about those methods often fails to support DE.

Living systems, particularly human systems, are often complex in nature. They have multiple, overlapping streams of information that interact at different levels and time scales, and to different effects, inconsistently or at least in a pattern that is only ever partly knowable. This complexity may include simple relationships and more complicated ones, too. Just as a conservation biologist surveys a changing landscape, they can understand which changes are happening quickly and which are not, which relationships are forming and which are less discernible.

As evaluators and innovators, we need to consider how our programs and services are living systems. Even something as straightforward as the restaurant industry, where food is sought and ordered, prepared, delivered, consumed, and then finished, has elements of complexity to it. The dynamics of real-time ordering and tracking, delivery, shifting consumer demand, the presence of mobile competitors (e.g., food trucks), a changing regulatory environment, novelty concepts (e.g., pop-ups!), and the seasonality of food demand and supply have changed how the food preparation business is run.

A restaurant might not just be a bricks-and-mortar operation now, but a multi-faceted, dynamic food creation environment. Even if a restaurant is good at what it does, if everything around it is changing it could deliver consistently great food and service and still fail. It may need to change to stay the same.

This only can happen if we view our programs as living systems and create evaluation mechanisms and strategies that view them in that manner. That means adopting a developmental mindset within an organization because DE can’t exist without it.

If a developmental evaluation is what you need or you want to learn more about how it can serve your needs, contact Cense and inquire about how they can help you. 

Image Credit: Thinkstock used under license.

behaviour change, complexity, design thinking, evaluation, psychology

Exploding goals and their myths

SparkWall_Snapseed.jpg

Goal-directed language guides much of our social policy, investment and quests for innovation without much thought of what that means in practice. Looking at the way ideas start and where they carry us might offer us reasons to pause when fashioning goals and whether we need them at all. 

In a previous article, I discussed the problems with goals for many of the problems being dealt with by organizations and networks alike. (Thanks to the many readers who offered comments and kudos and also alerted me that subscribers received the wrong version, minus part of the second paragraph!). The target was the use of SMART goal-setting and the many presumptions it makes that rarely hold true.

This is a follow-up to that article, discussing how focusing on the energy directed toward a goal, and on how that energy can be integrated more tightly with the way we organize our actions at the outset, might offer a better option than addressing the goals themselves.

Change: a matter of energy (and matter)

goal |ɡōl| noun: the object of a person’s ambition or effort; an aim or desired result • the destination of a journey

A goal is a call to direct effort (energy) toward an object (real or imagined). Without energy and action, the goal is merely a wish. Thus, if we are to understand goals in the world we need to have some concept of what happens between the formation of the goal (the idea, the problem to solve, the source of desire), the intention to pursue such a goal, and what happens on the journey toward that goal. That journey may involve a specific plan or it may mean simply following something (a hunch, a ‘sign’ — which could be purposeful, data-driven or happenstance, or some external force) along a pathway.

SMART goals, and most of the goal-setting literature, rest on the assumption that a plan is a critical success factor in accomplishing a goal.

If you follow SMART (Specific, Measurable, Attainable, Realistic, and Time-bound, or Timely), your plan needs to have these qualities. This approach makes sense when your outcome is clear and the pathway to achieving the goal is also reasonably clear, as with smoking cessation, drug or alcohol use reduction, weight loss, and exercise. It’s the reason why so much of the behaviour change literature includes goals: because most of it involves studies of these kinds of problems. These are problems with a clear, measurable outcome (even if that has some variation to it). You smoke cigarettes or you don’t. You weigh X kilograms at this time point and Y kilograms at that point.
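For outcomes this clear-cut, a SMART-style goal can even be expressed as a simple check against measured data, which is precisely why such goals suit measurable problems and falter elsewhere. The sketch below is purely illustrative; the metric, target, and helper name are invented for the example:

```python
from datetime import date

# A SMART-style goal: specific, measurable, time-bound.
goal = {"metric": "weight_kg", "target": 75.0, "deadline": date(2018, 6, 1)}

# Measured data points: (date, value).
measurements = [
    (date(2018, 1, 1), 82.0),
    (date(2018, 5, 20), 74.5),
]

def goal_met(goal, measurements):
    """True if any measurement on or before the deadline reaches the target."""
    return any(when <= goal["deadline"] and value <= goal["target"]
               for when, value in measurements)

print(goal_met(goal, measurements))
```

The check yields an unambiguous yes-or-no answer, which is exactly the kind of outcome SMART assumes; for problems like climate change or workplace culture, no such check can be written, which is the article’s point.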

These outcomes (goals) are the areas where the energy is directed and there is ample evidence to support means to get to the goal, the energy (actions) used to reach the goal, and the moment the goal is achieved. (Of course, there are things like relapse, temporary setbacks, non-linear changes, but researchers don’t particularly like to deal with this as it complicates things, something clinicians know too well).

Science, particularly social science, has a well-noted publication bias toward studies that show something significant happened — i.e., seeing change. Scientists know this and thus consciously and unconsciously pick problems, models, methods and analytical frameworks that better allow them to show that something happened (or clearly didn’t), with confidence. Thus, we have entire fields of knowledge like behaviour change that are heavily biased by models, methods and approaches designed for the kind of problems that make for good, publishable research. That’s nice for certain problems, but it doesn’t help us address the many ones that don’t fit into this way of seeing the world.

Another problem lies much less with the energy than with the matter. We look at specific, tangible outcomes (weight, presence of cigarettes, etc.) and pay little attention to the energy directed outward. Further, these perspectives assume a largely linear journey. What if we don’t know where we’re going? Or we don’t know what, specifically, it will take to get to our destination (see my previous article for some questions on this)?

Beyond carrots & sticks

The other area where there is evidence to support goals is management and the study of executives or ‘leaders’ (i.e., those who are labelled leaders because of title or role, whether or not they actually inspire real, productive followership). These leaders call out a directive and their employees respond. If employees don’t respond, they might be fired or re-assigned — two outcomes that are not particularly attractive to most workers. On the surface it seems like a remarkably effective way of getting people motivated to do something or reach a goal, and for some problems it works well. However, those types of problems are few and specific.

Yet, as much of the research on organizational behaviour has shown (PDF), the ‘carrot and stick’ approach to motivation is highly limited and ineffective in producing long-term change, let alone organizational commitment. Fostering self-determination, or creating beauty in work settings — something not done by force, but by co-development — are ways to nurture employee happiness, commitment, and engagement overall.

A 2009 study, appropriately titled ‘Goals Gone Wild’ (PDF), looked at the systemic side-effects of goal-setting in organizations and found: “specific side effects associated with goal setting, including a narrow focus that neglects non-goal areas, a rise in unethical behavior, distorted risk preferences, corrosion of organizational culture, and reduced intrinsic motivation.” The authors go on to say in the paper — right in the abstract itself!: “Rather than dispensing goal setting as a benign, over-the-counter treatment for motivation, managers and scholars need to conceptualize goal setting as a prescription-strength medication that requires careful dosing, consideration of harmful side effects, and close supervision.”

Remember the last time you were in a meeting when a senior leader (or anyone) ensured that there was sufficient time, care and attention paid to considering the harmful side-effects of goals before unleashing them? Me neither.

How about the ‘careful dosing’ or ‘close supervision’ of activities once goal-directed behaviour was put forth? That doesn’t happen much, because process-focused evaluation and the related ongoing sense-making is something that requires changes in the way we organize ourselves and our work. And as a recent HBR article points out: organizations like to use the excuse that organizational change is hard as a reason not to make the changes necessary.

Praxis: dropping dualisms

The absolute dualism of goal + action is as false as the idea of theory + practice, thought + activity. There are areas like those mentioned above where that conception might be useful, yet these are selective and restrictive and can keep us focused on a narrow band of problems and activity. Climate change, healthy workplaces, building cultures of innovation, and creating livable cities and towns are not problem sets that have a single answer, a straightforward path, specific goals, or boundless arrays of evidence guiding how to address them with high confidence. They do require a lot of energy, pivoting, adapting, sense-making, and collaboration. They are also design problems: they are about making the world we want and reacting to the world we have at the same time.

If we’re to better serve our organizations and their greater purpose, leaders, managers, and evaluators would be wise to focus on the energy that is being used, by whom, when, how, and to what effect, at closer intervals, to understand the dynamics of change, not just the outcomes of it. This approach is oriented toward praxis, an orientation that sees knowledge, wisdom, learning, strategy, and action as combined processes that ought not be separated. We learn from what we do, and that informs what we do next and what we learn further. It’s also about focusing on the process of design — that creation of the world we live in.

If we position ourselves as praxis-oriented individuals or organizations, evaluation becomes part of regularly attending, through data and sensemaking, to the systems we design to support goals or outcomes. Strategy is linked to this evaluation, and the outcomes that emerge are what come from our energy. Design is how we put it all together. This means dropping our dualisms and focusing more on integrating ourselves, our aspirations, and our activities toward achieving something that might be far greater than any goal we can devise.

Image credit: Author