Tag: design

behaviour change, complexity, design thinking, evaluation

Innovation, Change and The Leopard

Innovation is a misunderstood and often misrepresented concept that can provoke fear, indifference, resentment, confusion, or irrational exuberance. To understand why we can’t ignore innovation, we need look no further than the sage advice in The Leopard, a story of change.

It’s been said that the only true constant is change. It’s funny that something so constant and pervasive — change — evokes such strong reactions from people. That is partly why innovation can be such a contentious term. Whether we like it or not, the dynamics of change in our world are forcing us to recognize that innovation is not a luxury and is more than just a means to competitive advantage; it is increasingly about survival.

New Canadian research (PDF) looking at citizens’ attitudes toward innovation and their perceptions of it suggests there is much to be done to understand what survival and innovation mean for our collective wellbeing. Before getting to that, let’s first define innovation.

There are many definitions of innovation (see here, here, and here for some), but let’s keep it simple:

Innovation is doing something new to generate value

Innovation is effectively a means to create change. Design is the discipline and practice of creating change intentionally. While change is often thought of as something that takes us from one state to another, it can also help us preserve what we have when everything else is changing around us. To help understand this, let’s look at a lesson from The Leopard.

Change Lessons from The Leopard

One of my favourite quotes comes from the Italian novel The Leopard by Giuseppe Tomasi di Lampedusa where one character (a young nephew speaking to his aristocratic uncle who seeks to preserve the family’s status) says to another:

If we want things to stay as they are, things will have to change

The Leopard (translated from Italian)

I’ve written about this quote before, looking at the psychology of organizations and the folly of fads in innovation and design thinking. It’s about change, but really it’s about innovation and survival.

While individual humans are pretty resilient in the face of changing conditions, organizations are not so easily adaptive. Our family, friends, neighbours, and maybe governments will look after us if things get really bad (for a while, at least), but there are few looking after organizations.

For organizations — non-profit, profit-seeking, and governmental alike — innovation is the means to improve, to adapt, or to maintain the status quo. It’s no longer a luxury; it’s about survival. Just as a real leopard has spots that are part of its form to survive, so too must we consider what kind of survival mechanisms we can build into our own forms. That means: design.

Survival, by Design

A recent survey of 2,000 Canadians by the Rideau Hall Foundation looked at Canadians’ attitudes toward innovation and whether Canada was creating a culture of innovation.


A culture of innovation is one where the general public has shared values and beliefs that innovation is essential for collective well-being

David Johnston, Governor General of Canada

There are many flaws with this study. There’s a reason I defined what I meant by innovation at the beginning of this post; it’s because there’s so much confusion surrounding the term, its meaning, and use. I suspect that same confusion entered this study. Nevertheless, there are some insights that are worth exploring that may transcend the context of the study (Canada).

One of these is the tendency among young people (age 18-25) to view innovation as something more likely to be generated by individuals. There are also differences in perception between men and women about who is best suited for innovation. What is shared is the perception of innovation as being ‘out there’, which is part of our problem.

If we view innovation as something needed to survive — to change or to keep things as they are — then we need to shift our thinking away from innovation as something novel, technology-dependent, and fitting the fetishistic perspective that dominates corporate discourse. That requires an intentional, skillful approach to designing for change (and survival). It means:

From surviving to thriving

The interconnection of social, technological, and environmental systems has created an unprecedented level of complexity for human beings. This complexity means our ability to learn from the past is muddled, just as our ability to see and predict what’s coming is limited. The feedback cycles that we need to make decisions — including the choice to remain still — are getting shorter as a result and require new, different, and better data than we could rely on before.

Survival is necessary, but not very inspiring. Innovation can be a means to invigorate an organization and provide renewed purpose for those working within it. By regularly connecting what you do to what you want and to what is happening (now and in the near-present), and by viewing your programs and services more like gardens than mechanical devices, we have the chance to design with and for complexity rather than compete against it.

For a brilliant example of this garden metaphor applied to creative work and complexity, see Brian Eno’s talk with The Edge Foundation. Eno speaks about the need to get past the idea of seeing the entire whole and to enjoy the creative work that takes place within the boundaries you can see. It’s not about simple reactivity; it’s a proactive, yet humble approach to designing things (in his case, music) that allows him to work with complexity, not against it. It allows him to thrive.

More on this later.

Working with complexity means designing your work for complexity. It means being like The Leopard (the book, but maybe the cat, too) and being willing to embrace change as a way of living, not just to survive, but to thrive.

Photos by Geran de Klerk on Unsplash, Adaivorukamuthan on Unsplash,
and Alexandre St-Louis on Unsplash

evaluation, social innovation

Innovation Evaluation and the Burden of Proof

Innovating is about doing something new to produce value, and this introduces substantial challenges for strategy and evaluation. Without knowing what you can expect, it’s hard to know what you have…or where you’re going.

Innovation is one of those terms that is used a lot without much attention to what it means in specific terms. This lack of specificity might be fine for casual conversation, but the risks (and benefits) of innovation are great and require a level of detail and seriousness.

When you’re evaluating a new product or service, the risk of developing the wrong thing or leaving something out is substantial. False confidence in success could have detrimental effects on an expected return on investment. Lack of understanding of the effects of a product or service could lead to harm.

Evaluation and strategy in the context of innovation takes on new meaning because the implications are so great.

Scoping investments

The investment in innovation requires considerable tolerance for risk, which is why many countries provide incentives to support it. In the case of many community-serving organizations, innovation is often seen as imperative given the complexity of many of our social issues.

Although Research & Development (R & D) is commonly associated with innovation, often the ‘R’ part of this work is focused on the development of the product or service, not the evaluation. Evaluation focuses our attention on the product or service as it is used, even if that is in an early, restricted setting. Evaluation provides insight into the actual use characteristics, and can be useful in supporting program design, but only if there is investment in it.

While it is difficult enough to balance all of the demands and costs associated with innovating, investing in evaluation early on can yield substantial benefits later, as has been highlighted elsewhere. However, it is precisely because many of the benefits come later in the development cycle that it is so easy to put off investing in evaluation early.

Present thinking, future investment

Innovation is principally about the future. However, as illustrated elsewhere, it is also about present value as a byproduct of the process of creating new goods and services and realizing ideas. By thinking this way, program developers have the opportunity to create additional value now by integrating evaluation more fully into the present offering.

Yet it’s in the future where the biggest payoff comes. While you might see the glamour and glory that come with being an innovator, as in the image above, that kind of success is often based on a simple set of outcomes (e.g., sales, profit, etc.). Those kinds of metrics are easy to celebrate, but they often disguise what the real value of something is.

Whether it’s in the form of social benefits, additional discoveries, or the role of your product or service as a catalyst for other change, a focus on the end product alone loses the story told along the way. By allowing evaluation into the process early, there is the opportunity to tell that story differently and better, making for a more interesting and beneficial read. That is an innovation story worth writing.

Photo by Austin Distel on Unsplash

design thinking, evaluation

Utility in Design-Driven Evaluation

Design-driven evaluation focuses attention on what organizations use in their decision-making and learning to support the creation of their products, the delivery of their services, and the overall quality of their work. While ultimately practical, this focus on utility also introduces some difficult conversations about what organizations truly value, not just what they say.

To say something is design-driven is to imply that its emphasis is on the process of creating something of value for someone. In this case, the value comes through evaluation and what it means. Value here is framed as utility, asking: what is it for? (And for whom, and in what context?)

These are the questions that designers ask of any service, product, or structure they apply their attention to. Designers might ask: “what are you hiring [the thing being designed] to do?” These are simple questions that may provoke much discussion and can transform the way we approach the creation and maintenance of something moving forward.

A different take is to ask people to describe what they already do (and what they want) to frame the discussion of how to approach the design. This can lead us into a trap of the present moment. It keeps people framing their work in the context of language that supports their present identity and the conceptions (and misconceptions) associated with it, not necessarily where they want to go.

Evidence-based?

You show me an evidence-based human service organization and I’ll show you one that is lying to you (and maybe to itself), is in deep denial, or is focused on a narrow, established scope of practice. Very few fit the last category (but they do exist), which leaves us with the unsettling reality that we are likely dealing with some level of bullshit — a deliberate misrepresentation of the facts to impress others and themselves (see Harry Frankfurt’s work on the subject (PDF)).

This is not to say that these organizations don’t use evidence at all or care about its application, but that there are so many areas within that scope of work that are not based on solid or even superficial evidence that to describe something as ‘evidence-based’ is an over-reach at best, a lie at worst. The reasons for this deception are many, but among them is simply that there is not enough evidence available to inform many aspects of the work. It’s impossible to truly be evidence-based when dealing with areas of complexity, social innovation, or complex innovation.

Consider this: an organization seeks to develop an evidence-based program and spends weeks or months gathering and reviewing research. They may even collect some data on their own and synthesize the findings together to inform a recommendation based on evidence. At play is the evidence for the program, the evidence to support the design of the program (converting evidence developed in one context into actionable structures, procedures, and plans into another), the evidence to support the implementation of the designed program in a new context, and the evidence generated through evaluation of the design, delivery, and outcomes associated with the program.

That is a lot of evidence to consider. I’ve never seen a program come close to having evidence that reasonably fits all of these contexts, let alone strong evidence. Why? Because there are so many variables at play in the program context (e.g., design, delivery, fidelity, etc.) and in the process of evidence generation itself (e.g., design, data availability, analysis, etc.).

Utility means looking at what people actually use, not just what they say they use. To illustrate: I worked with an organization that proudly claimed to be both evidence-based and a learning organization. When I asked what evidence they used and how they learned, I was told with much more modest confidence that staff typically read “one or two” research articles per month (and that was it — in a highly volatile, transnational, multidisciplinary field of practice). They also said that they engaged in reflective practice by writing up case reports, which (if completed at all) usually took up to four months to prepare after a visit to a particular site or event because of the other activities they had to do as part of the day-to-day work of the organization.

This organization did their best, but that best wasn’t anywhere near enough if they truly wished to be a learning organization or evidence-based. Yet, because they insisted they were these things, they also insisted on an evaluation design that fit that narrative. They had not designed their organization to be evidence-based or a real learning organization. A design-driven approach would have developed things suited to that context and perhaps pushed them a little further toward being the organization they saw themselves to be.

Another Way: Fit for Purpose

Why bring up evidence-based decision-making? The reason has much to do with defining what makes design-driven evaluation different from other forms of evaluation (even research). Design-driven evaluation is about generating evidence for use within specific contexts. It involves using design principles and strategies to uncover and understand those contexts ahead of the evaluation being designed. It means designing not only the evaluation itself, but also the manner in which it produces products and the means of decision-making based on those products.

It is about being fit-for-purpose.

Most published evidence is developed independently of the context in which it is to be used. That’s the traditional model for science. We learn things in one setting (maybe a lab), move them out into other settings (e.g., a clinic), do more trials, and eventually develop a body of evidence that is used to generalize to other settings. This works reasonably well for problems and issues that are simple or complicated in their structure.

As a situation involves ever greater complexity, that ability to translate from one setting or context to another breaks down. This complexity might also influence what the purpose and expected outcomes of a program are within that context. For example, a community-based health promotion program may have a theory, even a program logic model, and goals, but it will need to consider neighbourhood design, differences in resident needs, local history, and the availability of other programs and resources. The purpose in one neighbourhood might be to provide a backstop to a local organization that is having financial problems, whereas in another it might be to provide a vehicle for local leaders to take action where there are no other alternatives.

Not Leaving Things to Chance

Developing a fit-for-purpose program is not something that should be left to chance, because chances are it won’t happen on its own. Good design improves the use, usability, and overall translation of knowledge. A look at how real evidence-based practice emerges shows that it comes down to the ways in which the design — intended or not — of the knowledge, the exchange opportunities, the relationships, and the systems come together.

Design-driven evaluation seeks to remedy one of the fundamental problems within the evidence translation process: the poor fit of the evaluation (data, process, focus) for the implementation of its findings. It’s about not leaving it to chance with the hope that maybe someone will figure out how to use things, overcome poor usability, persist through confusion, and still make good use of an evaluation.

Is this the system we want? Or could we do better? My answer is ‘no’ to the first and ‘yes’ to the second. Design-driven evaluations can be the means to get us to that ‘yes’ because, as things get more complicated and complex and the need for better data, improved decisions, and decisive action rises, we need to make sure we don’t leave doing better to chance.

Photo by Jeff Sheldon on Unsplash

If you’re interested in doing better, design-driven evaluation, contact Cense via this link.

design thinking

Leadership & Design Thinking: Missed Opportunities

A recent article titled ‘The Right Way to Lead Design Thinking’ gets a lot of things wrong not because of what it says, but because of the way it says it. If we are to see better outcomes from what we create we need to begin with talking about design and design thinking differently.

I cringed when I first saw it in my LinkedIn feed. There it was: The Right Way to Lead Design Thinking. I tend to bristle when I see broad-based claims about the ‘right’ or ‘wrong’ way to do something, particularly with something as scientifically bereft as design thinking. Like others, I’ve called out much of what is discussed as design thinking for what I see as simple bullshit.

To my (pleasant) surprise, this article was based on data, not just opinion, which already puts it in a different class than most other articles on design thinking, but that doesn’t earn it a free pass. In fairness to the authors, the title may not be theirs (it could be an editor’s choice), but what comes afterward still bears some discussion, less about what they say than about how they say it and what they don’t say. This post reflects some thoughts on this work.

How we talk about what we do shapes what we know and the questions we ask, and design thinking is at a state where we need to be asking bigger and better questions of it.

Right and Wrong

The most glaring critique I have is of the aforementioned title, for many reasons. Firstly, the term ‘right’ assumes that we know above all else how to do something. We could claim this if we had a body of work that systematically evaluated the outcomes associated with leadership and design thinking, or research examining the process of doing design thinking. The issue is: we don’t.

There isn’t a definition of design thinking that can be held up for scrutiny to test or evaluate, so how can we claim the ‘right’ way to do it? The authors link to a 2008 HBR article by Tim Brown that outlines design thinking as their reference source; however, that article provides scant concrete direction for measurement or evaluation. Rather, it emphasizes thinking and personality approaches to addressing design problems and a three-factor process model of how it is done in practice. These might be useful as tools, but they are not something from which you can derive indicators (quantitative or qualitative) to inform a comparison.

The other citation is a 2015 HBR article from Jon Kolko. Kolko is one of design’s most prolific scholars and one of the few who actively and critically writes about the thinking, doing, craft, teaching, and impact of design on the people, places, and systems around us. While his HBR article is useful in painting the complexity that besets the challenge of designers doing ‘design thinking’, it provides little to go on in developing the kind of comparative metrics that can inform a statement that something is ‘right’ or ‘wrong’. It’s not fit for that purpose (and I suspect was never designed for it in the first place).

Both of these reference sources are useful for those looking to understand a little about what design thinking might be and how it could be used, and few are more qualified to speak on such things than Tim Brown and Jon Kolko. But if we are to start taking design thinking seriously, we need to go beyond describing what it is and show what it does (and doesn’t do) and under what conditions. This is what serves as the foundation for a real science of practice.

The authors do provide a description of design thinking later in the article and anchor that description in the language of empathy, something that has its own problems.

Designers seek a deep understanding of users’ conditions, situations, and needs by endeavoring to see the world through their eyes and capture the essence of their experiences. The focus is on achieving connection, even intimacy, with users.

False Empathy?

Connecting to ideas and people

It’s fair to say that Apple and the Ford Motor Company have created a lot of products that people love (and hate) and rely on every day. They also weren’t always what people asked for. Many of those products were not designed for where people were, but they did shape where people went afterward. Empathizing with their market might not have produced breakthroughs like the iPod or the automobile.

Empathy is a poor end in itself, and the language used in this article treats it as such. Seeing the world through others’ eyes helps you gain perspective, maybe intimacy, but that’s all it does. Unless you are willing to take a systems perspective and recognize that many of our experiences are shared, collective, connected, and also disconnected, you only get one small part of the story. There is a risk that we over-emphasize the role that empathy plays in design. We can still achieve remarkable outcomes that create enormous benefit without being empathic, although I think most people would agree that’s not the way we would prefer it. We risk confusing the means and the ends.

One of the examples of how empathy is used in design thinking leadership takes place at a Danish hospital heart clinic, where the leaders asked: “What if the patient’s time were viewed as more important than the doctor’s?” Asking this question upended the way many health professionals saw the patient journey and led to improvements, including a reduction in overnight stays. My question is: what did this produce?

What did this mean for the healthcare system as a whole? How about the professionals themselves? Are patients healthier because of the more efficient service they received? Who is deriving the benefits of this decision and who is bearing the risk and cost? What do we get from being empathic?

Failure Failings

Failure is among the most problematic of the words used in this article. Like empathy, failure is a commonly used term within popular writing on innovation and design thinking. The critique here is less about how the authors use the term explicitly than that it is used at all. This may be as much a matter of the data itself (i.e., if participants speak of it, it is included in the dataset); however, its profile in the article is what is worth noting.

The issue is a framing problem. As the authors report from their research: “Design-thinking approaches call on employees to repeatedly experience failure”. Failure is a binary concept, which is not useful when dealing with complexity — something that Jon Kolko writes about in his article. If much of what we deal with in designing for human systems is about complexity, why are we anchoring our discussion to binary concepts such as ‘success’ and ‘failure’?

Failure exists only when we know what success looks like. If we are really being innovative, reframing the situation, getting to know our users (and discarding our preconceptions about them), how is it that we can fail? I have argued that the only thing we can steadfastly fail at in these conditions is learning. We can fail to build in mechanisms for data gathering, sensemaking, sharing, and reflecting that are associated with learning, but otherwise what we learn is valuable.

Reframing Our Models

The very fact that this article is in the Harvard Business Review suggests much about the intended audiences for the piece. I am sympathetic to the authors, and my critique has focused on the details of how the work is expressed, not necessarily the intent or capacity of those who created it. However, choices have consequences, and the outcome of this article is that design thinking is framed as a means of generating business improvements. Those are worthy goals, but not the only ones possible.

One of the reasons concepts like ‘failure’ apply to so much of the business literature is that the outcomes are framed in binary or simple terms. It is about improvement, efficiency, profit, and productivity. Business outcomes might also include customer satisfaction, purchase actions, or brand recognition. All of these benefit the company, not necessarily the customer, client, patient, person, or citizen.

If we were truly tackling human-centred problems, we might approach them differently and ask different questions. Terms like failure actually do apply within the business context, not because they support innovation per se, but because the outcomes are pre-set.

Leadership Roles

Bason and Austin’s research is not without merit, for many reasons. Firstly, it is evidence-based. They have done the work of interviewing, synthesizing, commenting on, and publishing the research. That in itself makes it a worthy contribution to the field.

It also provides commentary and insight on some practical areas of design leadership that readers can apply right away by highlighting roles for leaders.

One of these roles is managing the tension between divergent and convergent thought and development processes in design work. This includes managing the insecurities that many design teams may express in dealing with the design process and the volume of disorganized content it can generate.

The exemplary leaders we observed ensured that their design-thinking project teams made the space and time for diverse new ideas to emerge and also maintained an overall sense of direction and purpose. 

Bason & Austin, HBR 2019

Another key role of the design leader is to support future thinking. By encouraging design teams to explore and test their work in the context of what could be, not just what is, leaders reframe the goals of the work and the outcomes in ways that support creativity.

Lastly, a key strength of the piece was the encouragement of multi-media forms of engagement and feedback. The authors chose to illustrate how leaders supported their teams in thinking differently about not only the design process but the products for communicating that process (and resulting products) to each other and the outside world. Too often the work of design is lost in translation because the means of communication have not been designed for the outcomes that are needed — something akin to design-driven evaluation.

Language, Learning, Outcomes

By improving how we talk about what we do, we are better at framing how to ask questions about what we do and what impact it has. Doing the right thing means knowing what the wrong thing is. Without evaluation, we run the risk in Design of doing what Russell Ackoff cautioned against: doing the wrong things righter.

Reading between the lines of the data — the stories and examples — presented in the article by Bason and Austin reveals the role of managing fear: fear of ‘failure’, fear from confusion, fear of not doing good work. Design, if it is anything, is optimistic in that it is about making an effort to solve problems, taking action, and generating something that makes a difference. Design leadership is about supporting that work, bringing it into our organizations, and making it accessible.

That is an outcome worth striving for. While there are missed opportunities here, there is also much to build on and lead from.

Lead Photo by Quino Al on Unsplash

Inset Photo by R Mo on Unsplash

design thinking, evaluation

Design-driven Evaluation

Fun Translates to Impact

A greater push to include evaluation data in decision-making and innovation support will not generate value if the evaluations have little usefulness in the first place. A design-driven approach to evaluation is the means to transform utilization into both present and future utility.

I admit to being puzzled the first time I heard the term utilization-focused evaluation. What good is an evaluation if it isn’t utilized, I thought? Why do an evaluation in the first place if not to have it inform some decisions, even if just to assess how past decisions turned out? Experience has taught me that this happens more often than I ever imagined, and evaluation can be simply an exercise in ‘faux’ accountability: a checking off of a box to say that something was done.

This is why utilization-focused evaluation (U-FE) is another invaluable contribution to the field of practice by Michael Quinn Patton.

U-FE is an approach to evaluation, not a method. Its central focus is engaging the intended users in the development of the evaluation and ensuring that users are involved in decision-making about the evaluation as it moves forward. It is based on the idea (and research) that an evaluation is far more likely to be used if grounded in the expressed desires of the users and if those users are involved in the evaluation process throughout.

This approach generates a participatory activity chain that can be adapted for different purposes as we’ve seen in different forms of evaluation approaches and methods such as developmental evaluation, contribution analysis, and principles-focused approaches to evaluation.

Beyond Utilization

Design is the craft, production, and thinking associated with creating products, services, systems, or policies that have a purpose. In service of this purpose, designers will explore multiple issues associated with the ‘user’ and the ‘use’ of something — what are the needs, wants, and uses of similar products? Good designers go beyond simply asking about these things to measuring, observing, and conducting design research ahead of the actual creation of something, rather than taking things at face value. They also attempt to see beyond what is right in front of them to possible uses, strategies, and futures.

Design work is both an approach to a problem (a thinking & perceptual difference) and a set of techniques, tools, and strategies.

Utilization can run into problems when we take the present as an example of the future. Steve Jobs didn’t ask users for ‘1000 songs in their pockets’, nor was Henry Ford told he needed to invent the automobile instead of giving people faster horses (even if the oft-quoted line about this was a lie). The impact of their work was being able to see possibilities and orchestrate what was needed to make those possibilities real.

Utilization of evaluation is about making what is fit better for use by taking into consideration the user’s perspective. A design-driven evaluation looks beyond this to what could be. It also considers how what we create today shapes what decisions and norms come tomorrow.

Designing for Humans

Among the false statements attributed to Henry Ford about people wanting faster horses is a more universal false statement said by innovators and students alike: “I love learning.” Many humans love the idea of learning or the promise of learning, but I would argue that very few love learning with the sense of absoluteness that the phrase above conveys. Much of our learning comes from painful, frustrating, prolonged experiences and is sometimes boring, covert, and confusing. It might be delayed in how it manifests itself, with its true effects not felt until long after the ‘lesson’ is taught. Learning is, however, useful.

A design-driven approach seeks to work with human qualities to design for them. For example, a utilization-focused evaluation approach might yield a process that involves regular gatherings to discuss an evaluation, or reports that use a particular language, style, and layout to convey the findings. These are what the users, in this case, are asking for and what they see as making evaluation findings appealing, and thus they have been built into the process.

Except, what if the regular gatherings don’t involve the right people, are difficult to set up and thus ignored, or when those people show up they are distracted with other things to do (because this process adds another layer of activity into a schedule that is already full)? What if the reports that are generated are beautiful, but then sit on a shelf because the organization doesn’t have a track record of actually drawing on reports to inform decisions despite wanting such a beautiful report? (We see this with so many organizations that claim to be ‘evidence-based’ yet use evidence haphazardly, arbitrarily, or don’t actually have the time to review the evidence).

What we get are things that have been created with the best intentions for use, but that are not based on the actual behaviour of those involved. Asking about this and designing for it is not just an approach, it’s a way of doing an evaluation.

Building Design into Evaluation

There are a couple of approaches to introducing design into evaluation. The first is to develop certain design skills — such as design thinking and applied creativity. This work is being done as part of the Design Loft Experience workshop held at the annual American Evaluation Association conference. The second is more substantive: incorporating design methods into the evaluation process from the start.

Design thinking has become popular as a means of expressing aspects of design in ways that have been taken up by evaluators. Design thinking is often characterized by a playful approach to generating new ideas and then prototyping those ideas to find the best fit. Lego, play dough, markers, and sticky notes (as shown above) are some of the tools of the trade. Design thinking can be a powerful way to expand perspectives and generate something new.

Specific techniques, such as those taught at the AEA Design Loft, can provide valuable ways to re-imagine what an evaluation could look like and support design thinking. However, as I’ve written here, there is a lot of hype, over-selling, and general bullshit being spouted in this realm, so proceed with some caution. Evaluation can help design thinking just as much as design thinking can help evaluation.

What Design-Driven Evaluation Looks Like

A design-driven evaluation takes as its premise a few key things:

  • Holistic. Design-driven evaluation is a holistic approach that extends thinking about utility to everything from the consultation process and engagement strategy to instrumentation, dissemination, and discussions on use. Good design isn’t applied only to one part of the evaluation, but to the entire thing, from process to products to presentations.
  • Systems thinking. It also utilizes systems thinking in that it expands the conversation about evaluation use beyond the immediate stakeholders to consider other potential users and their positions within the program’s system of influence. Thus, a design-driven evaluation might ask: who else might use or benefit from this evaluation? How do they see the world? What would use mean to them?
  • Outcome and process oriented. Design-driven evaluations are directed toward an outcome (although that may be altered along the way if used in a developmental manner), but designers are agnostic about the route to the outcome. An evaluation must maintain integrity in its methods, but it must also be open to adaptation as needed to ensure that the design is optimal for use. Attending to the process of designing and implementing the evaluation is an important part of this kind of evaluation.
  • Aesthetics matter. This is not about making things pretty, but it is about making things attractive. This means creating evaluations that are not ignored. This isn’t about gimmicks, tricks, or misrepresenting data; it’s about considering what will draw and hold attention from the outset, in form and function. One of the best ways is to create a meaningful engagement strategy for participants from the outset and to involve people in the process in ways that fit their preferences, availability, skill set, and desires rather than as tokens or simply as ‘role players.’ It’s about being creative in generating products that fit with what people actually use, not just what they want or think a good evaluation is. This might mean doing a short video or producing a series of blog posts rather than writing a report. Kylie Hutchinson has a great book on innovative reporting for evaluation that can expand your thinking about how to do this.
  • Inform Evaluation with Research. Research is not just meant to support the evaluation, but to guide the evaluation itself. Design research is about looking at what environments, markets, and contexts a product or service is entering. Design-driven evaluation means doing research on the evaluation itself, not just for the evaluation.
  • Future-focused. Design-driven evaluation draws data from social trends and drivers associated with the problem, situation, and organization involved in the evaluation to not only design an evaluation that can work today but one that anticipates use needs and situations to come. Most of what constitutes use for evaluation will happen in the future, not today. By designing the entire process with that in mind, the evaluation can be set up to be used in a future context. Methods of strategic foresight can support this aspect of design research and help strategically plan for how to manage possible challenges and opportunities ahead.

Principles

Design-driven evaluation also works well with principles-focused evaluation. Good design is often grounded in key principles that drive its work. One of the most salient of these is accessibility — making what we do accessible to those who can benefit from it. This leads us to consider what it means to create things that are accessible to those with visual, hearing, or cognitive impairments (or, when doing things in physical spaces, making them available to those who have mobility issues).

Accessibility is also about making information understandable: avoiding unnecessary jargon, using the appropriate language for each audience, using plain language when possible, and accounting for literacy levels. It’s also about designing systems of use for inclusiveness. This means going beyond things like creating an executive summary for a busy CEO (when that over-simplifies certain findings) to designing space within that leader’s schedule and work environment to engage with the material in the manner that makes sense for them. This might be a different format of document, a podcast, a short interactive video, or even a walking-meeting presentation.

There are also many principles of graphic design and presentation that can be drawn on (that will be expanded on in future posts). Principles for service design, presentations, and interactive use are all available and widely discussed. What a design-driven evaluation does is consider what these might be and build them into the process. While design-driven evaluation is not necessarily a principles-focused one, they can be and are very close.

This is the first in a series of posts that will be forthcoming on design-driven evaluation. It’s a starting point and far from the end. By taking into account how we create not only our programs but their evaluation from the perspective of a designer we can change the way we think about what utilization means for evaluation and think even more about its overall experience.

evaluation

Meaning and metrics for innovation


Metrics are at the heart of evaluating impact and value in products and services, although they are rarely straightforward. Deciding what makes a good metric requires some thinking, first, about what a metric means.

I recently read a story on what makes a good metric from Chris Moran, Editor of Strategic Projects at The Guardian. Chris’s work is about building, engaging, and retaining audiences online so he spends a lot of time thinking about metrics and what they mean.

Chris — with support from many others — outlines the five characteristics of a good metric as being:

  1. Relevant
  2. Measurable
  3. Actionable
  4. Reliable
  5. Readable (less likely to be misunderstood)

(What I liked was that he also pointed to additional criteria that didn’t quite make the cut but, as he suggests, could).

This list was developed in the context of communications initiatives, which is exactly the point we need to consider: context matters when it comes to metrics. Context is also holistic, so we need to consider these five (plus the others?) criteria as a whole if we’re to develop, deploy, and interpret data from these metrics.
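To make the idea of weighing these criteria as a whole a bit more concrete, here is a minimal sketch in Python. The metric names, the 1-5 ratings, and the scoring rule are hypothetical illustrations made up for this example; they are not part of Moran’s framework, and a real assessment would be a conversation in context, not a calculation.

```python
# A minimal, illustrative sketch: hypothetical metrics, ratings, and scoring rule.
# The point is to look at all five criteria together, in context, rather than
# celebrating a metric that scores well on only one of them.

CRITERIA = ["relevant", "measurable", "actionable", "reliable", "readable"]

# Hypothetical 1-5 ratings for two candidate audience metrics in a newsroom context.
candidates = {
    "page_views":        {"relevant": 2, "measurable": 5, "actionable": 2, "reliable": 4, "readable": 5},
    "returning_readers": {"relevant": 5, "measurable": 4, "actionable": 4, "reliable": 3, "readable": 4},
}

def assess(ratings):
    """Return the weakest criterion and the average, so a metric that is strong
    on average but weak on one criterion still gets flagged."""
    weakest = min(CRITERIA, key=lambda c: ratings[c])
    average = sum(ratings[c] for c in CRITERIA) / len(CRITERIA)
    return weakest, ratings[weakest], average

for name, ratings in candidates.items():
    weakest, score, average = assess(ratings)
    print(f"{name}: weakest criterion = {weakest} ({score}), average = {average:.1f}")
```

The design choice here mirrors the point above: an average alone can hide a metric that is easy to measure but irrelevant or unactionable, so the weakest criterion is surfaced alongside it.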

As John Hagel puts it: we are moving from the industrial age where standardized metrics and scale dominated to the contextual age.

Sensemaking and metrics

Innovation is entirely context-dependent. A new iPhone might not mean much to someone who has had one, but could be transformative to someone who has never had that computing power in their hand. Home visits by a doctor or healer were once the only way people were treated for sickness (and this is still the case in some parts of the world); now home visits are novel and represent an innovation in many areas of Western healthcare.

Demographic characteristics are one area where sensemaking is critical when it comes to metrics and measures. Sensemaking is a process of literally making sense of something within a specific context. It’s used when there are no standard or obvious means to understand the meaning of something at the outset; rather, meaning is made through investigation, reflection, and other data. It is a process that involves asking questions about value — and value is at the core of innovation.

For example, identity questions on race, sexual orientation, gender, and place of origin all require intense sensemaking before, during, and after use. Asking these questions gets us to consider: what value is it to know any of this?

How is a metric useful without an understanding of the value it is meant to reflect?

What we’ve seen from population research is that failure to ask these questions has left many at the margins without a voice — their experience isn’t captured in the data used to make policy decisions. We’ve seen the opposite when we do ask these questions unwisely: strange claims about associations, over-generalizations, and stereotypes formed from data that somehow ‘links’ certain characteristics to behaviours without critical thought. We create policies that exclude because we have data.

The lesson we learn from behavioural science is that, if you have enough data, you can pretty much connect anything to anything. Therefore, we need to be very careful about what we collect data on and what metrics we use.

The role of theory of change and theory of stage

One reason for these strange associations (or their absence) is the lack of a theory of change to explain why any of these variables ought to play a role in explaining what happens. A good, proper theory of change provides a rationale for why something should lead to something else and what might come from it all. It is anchored in data, evidence, theory, and design (which ties it all together).

Metrics are the means by which we can assess the fit of a theory of change. What often gets missed is that fit is also a matter of timing. Some metrics fit better at different times in an innovation’s development.

For example, a particular metric might be more useful in later-stage research where there is an established base of knowledge (e.g., when an innovation is mature) than when we are looking at the early formation of an idea. The proof-of-concept stage (i.e., ‘can this idea work?’) is very different from the ‘can this scale?’ stage. To that end, metrics need to be fit to something akin to a theory of stage. This would help explain how an innovation might develop at the early stage versus later ones.

Metrics are useful. Blindly using metrics — or using the wrong ones — can be harmful in ways that might be unmeasurable without the proper thinking about what they do, what they represent, and which ones to use.

Choose wisely.

Photo by Miguel A. Amutio on Unsplash

evaluation, innovation

Understanding Value in Evaluation & Innovation


Value is literally at the root of the word evaluation yet is scarcely mentioned in the conversation about innovation and evaluation. It’s time to consider what value really means for innovation and how evaluation provides answers.

Design can be thought of as the discipline — the theory, science, and practice — of innovation. Thus, understanding the value of design is partly about understanding the valuation of innovation. At the root of evaluation is the concept of value. One of the most widely used definitions of evaluation (PDF) is that it is about merit, worth, and significance — with worth being a stand-in for value.

The connection between worth and value in design was discussed in a recent article by Jon Kolko from Modernist Studio. He starts from the premise that many designers conceive of value as the price people will pay for something and points to the dominant orthodoxy in SaaS applications “where customers can choose between a Good, Better, and Best pricing model. The archetypical columns with checkboxes shows that as you increase spending, you ‘get more stuff.’”

Kolko goes on to take a systems perspective on the issue, noting that much of the value created through design is not piecemeal but aggregated into the experience of whole products and services, and not easily divisible into component parts. Value as a factor of cost or price breaks down when we apply a lens that treats our communities, customers, and clients as mere commodities that can be bought and sold.

Kolko ends his article with this comment on design value:

Design value is a new idea, and we’re still learning what it means. It’s all of these things described here: it’s cost, features, functions, problem solving, and self-expression. Without a framework for creating value in the context of these parameters, we’re shooting in the dark. It’s time for a multi-faceted strategy of strategy: a way to understand value from a multitude of perspectives, and to offer products and services that support emotions, not just utility, across the value chain.

Talking value

It’s strange that the matter of value is so under-discussed in design given that creating value is one of its central tenets. What’s equally perplexing is how little value is discussed as part of the process of creating things or in their final designed form. And since design is really the discipline of innovation, which is the intentional creation of value through something new, evaluation is an important concept in understanding design value.

One of the big questions professional designers wrestle with at the start of any engagement with a client is: “What are you hiring [your product, service, or experience] to do?”

What evaluators ask is: “Did your [product, service, or experience (PSE)] do what you hired it to do?”

“To what extent did your PSE do what you hired it to do?”

“Did your PSE operate as it was expected to?”

“What else did your PSE do that was unexpected?”

“What lessons can we learn from your PSE development that can inform other initiatives and build your capacity for innovation as an organization?”

In short, evaluation is about asking: “What value does your PSE provide and for whom and under what context?”

Value creation, redefined

Without asking the questions above, how do we know value was created at all? Without evaluation, there is no means of claiming that value was generated by a PSE, that expectations were met, or that what was designed was implemented at all.

By asking questions about value and how we come to know it, innovators are better positioned to design PSEs that generate value for their users, customers, clients, and communities as well as their organizations, shareholders, funders, and leaders. This redefinition of value as an active concept gives us the opportunity to see value in new places and not waste it.

Image Credit: Value Unused = Waste by Kevin Krejci adapted under Creative Commons 2.0 License via Flickr

Note: If you’re looking to hire evaluation expertise to better your innovation capacity, contact us at Cense. That’s what we do.