Category: design thinking

design thinking, evaluation

Utility in Design-Driven Evaluation

Design-driven evaluation focuses attention on what organizations use in their decision-making and learning to support the creation of their products, the delivery of their services, and the overall quality of their work. While ultimately practical, this focus on utility also introduces some difficult conversations about what organizations truly value, not just what they say they value.

To say something is design-driven is to imply that its emphasis is on the process of creating something of value for someone. In this case, the value comes through evaluation and what it produces. Value here is framed as utility, asking: what is it for? (And for whom, and in what context?)

These are the questions that designers ask of any service, product, or structure they turn their attention to. Designers might ask: "what are you hiring [the thing being designed] to do?" These are simple questions that may provoke much discussion and can transform the way we approach the creation and maintenance of something moving forward.

A different take is to ask people to describe what they already do (and what they want) to frame the discussion of how to approach the design. This can lead us into a trap of the present moment. It keeps people framing their work in the context of language that supports their present identity and the conceptions (and misconceptions) associated with it, not necessarily where they want to go.

Evidence-based?

You show me an evidence-based human service organization and I'll show you one that is lying to you (and maybe to itself), is in deep denial, or is focused on a narrow, established scope of practice. Very few fit the last category (but they do exist), which leaves us with the unsettling reality that we are likely dealing in some level of bullshit: claims made to impress others (and themselves) without much regard for the facts (see Harry Frankfurt's work on the subject).

This is not to say that these organizations don’t use evidence at all or care about its application, but that there are so many areas within that scope of work that are not based on solid or even superficial evidence that to describe something as ‘evidence-based’ is an over-reach at best, a lie at worst. The reasons for this deception are many, but among them is simply that there is not enough evidence available to inform many aspects of the work. It’s impossible to truly be evidence-based when dealing with areas of complexity, social innovation, or complex innovation.

Consider this: an organization seeks to develop an evidence-based program and spends weeks or months gathering and reviewing research. They may even collect some data on their own and synthesize the findings together to inform a recommendation based on evidence. At play is the evidence for the program, the evidence to support the design of the program (converting evidence developed in one context into actionable structures, procedures, and plans in another), the evidence to support the implementation of the designed program in a new context, and the evidence generated through evaluation of the design, delivery, and outcomes associated with the program.

That is a lot of evidence to consider. I've never seen a program come close to having evidence that reasonably fits all of these contexts, let alone strong evidence. Why? Because there are so many variables at play in the program context (e.g., design, delivery, fidelity, etc.) and in the process of evidence generation itself (e.g., design, data availability, analysis, etc.).

Utility means looking at what people actually use, not just what they say they use. To illustrate, I worked with an organization that proudly claimed to be both evidence-based and a learning organization. When I asked what evidence they used and how they learned, I was told with much more modest confidence that staff typically read "one or two" research articles per month (and that was it, in a highly volatile, transnational, multidisciplinary field of practice). They also said that they engaged in reflective practice by writing up case reports, which (if completed at all) usually took up to four months to prepare after a site visit because of the other day-to-day work of the organization.

This organization did their best, but that best wasn't anywhere near enough if they truly wished to be a learning organization or evidence-based. Yet, because they insisted they were these things, they also insisted on an evaluation design that fit that narrative. They had not designed their organization to be evidence-based or a real learning organization. A design-driven approach would have developed things that suited that context and perhaps pushed them a little further toward being the organization they saw themselves to be.

Another Way: Fit for Purpose

Why bring up evidence-based decision-making? The reason has much to do with defining what makes design-driven evaluation different from other forms of evaluation (and even research). Design-driven evaluation is about generating evidence for use within specific contexts. It involves using design principles and strategies to uncover and understand those contexts before the evaluation is designed. It means designing not only the evaluation itself, but also the products it generates and the decision-making processes that will draw on those products.

It is about being fit-for-purpose.

Most published evidence is developed independent of the context in which it is to be used. That's the traditional model for science. We learn things in one setting (maybe a lab), then move those findings out into other settings (e.g., a clinic), do more trials, and eventually develop a body of evidence that is used to generalize to other settings. This works reasonably well for problems and issues that are simple or complicated in their structure.

As a situation involves ever greater complexity, that ability to translate from one setting or context to another breaks down. This complexity might also influence the purpose and expected outcomes of a program within that context. For example, a community-based health promotion program may have a theory, even a program logic model, and goals, but it will need to consider neighbourhood design, differences in resident needs, local history, and the availability of other programs and resources. The purpose in one neighbourhood might be to provide a backstop to a local organization that is having financial problems, whereas in another neighbourhood it might be to provide a vehicle for local leaders to take action where there are no other alternatives.

Not Leaving Things to Chance

Developing a fit-for-purpose program is not something that should be left to chance, because chances are it won't happen on its own. Good design improves the use, usability, and overall translation of knowledge. A look at how real evidence-based practice emerges shows that it comes down to the ways in which the design — intended or not — of the knowledge, the exchange opportunities, the relationships, and the systems come together.

Design-driven evaluation seeks to remedy one of the fundamental problems within the evidence translation process: the poor fit of the evaluation (data, process, focus) for the implementation of its findings. It’s about not leaving it to chance with the hope that maybe someone will figure out how to use things, overcome poor usability, persist through confusion, and still make good use of an evaluation.

Is this the system we want? Or could we do better? My answer is 'no' to the first and 'yes' to the second. Design-driven evaluations can be the means to get us to that 'yes' because, as things get more complicated and complex and the need for better data, improved decisions, and decisive action rises, we need to make sure we don't leave doing better to chance.

Photo by Jeff Sheldon on Unsplash

If you're interested in doing better, design-driven evaluation, contact Cense via this link.

design thinking

Leadership & Design Thinking: Missed Opportunities

A recent article titled 'The Right Way to Lead Design Thinking' gets a lot of things wrong, not because of what it says, but because of the way it says it. If we are to see better outcomes from what we create, we need to begin by talking about design and design thinking differently.

I cringed when I first saw it in my LinkedIn feed. There it was: The Right Way to Lead Design Thinking. I tend to bristle when I see broad-based claims about the 'right' or 'wrong' way to do something, particularly with something as scientifically bereft as design thinking. Like others, I've called out much of what is discussed as design thinking for what I see as simple bullshit.

To my (pleasant) surprise, this article was based on data, not just opinion, which already puts it in a different class than most other articles on design thinking, but that doesn't earn it a free pass. In some fairness to the authors, the title may not be theirs (it could be an editor's choice), but what comes afterward still bears some discussion, less about what they say than how they say it and what they don't say. This post reflects some thoughts on this work.

How we talk about what we do shapes what we know and the questions we ask, and design thinking is at a stage where we need to be asking bigger and better questions of it.

Right and Wrong

The most glaring critique I have of the article is the aforementioned title, for many reasons. Firstly, the term 'right' assumes that we know, above all else, how to do something. We could claim this if we had a body of work that systematically evaluated the outcomes associated with leadership and design thinking, or research examining the process of doing design thinking. The issue is: we don't.

There isn't a definition of design thinking that can be held up for scrutiny to test or evaluate, so how can we claim the 'right' way to do it? The authors link to a 2008 HBR article by Tim Brown that outlines design thinking as their reference source; however, that article provides scant concrete direction for measurement or evaluation. Rather, it emphasizes thinking and personality approaches to addressing design problems and a three-factor process model of how design thinking is done in practice. These might be useful as tools, but they are not something from which you can derive indicators (quantitative or qualitative) to inform a comparison.

The other citation is a 2015 HBR article from Jon Kolko. Kolko is one of design's most prolific scholars and one of the few who actively and critically writes about the thinking, doing, craft, teaching, and impact of design on the people, places, and systems around us. While his HBR article is useful in painting the complexity that besets the challenge of designers doing 'design thinking', it provides little to go on in developing the kind of comparative metrics that could inform a statement that something is 'right' or 'wrong'. It's not fit for that purpose (and I suspect it was never designed for that in the first place).

Both of these reference sources are useful for those looking to understand a little about what design thinking might be and how it could be used, and few are more qualified to speak on such things than Tim Brown and Jon Kolko. But if we are to start taking design thinking seriously, we need to go beyond describing what it is and show what it does (and doesn't do) and under what conditions. This is what serves as the foundation for a real science of practice.

The authors do provide a description of design thinking later in the article and anchor that description in the language of empathy, something that has its own problems.

Designers seek a deep understanding of users’ conditions, situations, and needs by endeavoring to see the world through their eyes and capture the essence of their experiences. The focus is on achieving connection, even intimacy, with users.

False Empathy?

Connecting to ideas and people

It's fair to say that Apple and the Ford Motor Company have created a lot of products that people love (and hate) and rely on every day. They also weren't always what people asked for. Many of those products were not designed for where people were, but they did shape where people went afterward. Empathizing with their market might not have produced breakthroughs like the iPod or the automobile.

Empathy is a poor end in itself, and the language used in this article treats it as such. Seeing the world through others' eyes helps you gain perspective, maybe intimacy, but that's all it does. Unless you are willing to take this into a systems perspective and recognize that many of our experiences are shared, collective, connected, and also disconnected, you only get one small part of the story. There is a risk that we over-emphasize the role that empathy plays in design. We can still achieve remarkable outcomes that create enormous benefit without being empathic, although I think most people would agree that's not the way we would prefer it. We risk confusing the means and the ends.

One of the examples of how empathy is used in design thinking leadership takes place at a Danish hospital heart clinic where the leaders asked: "What if the patient's time were viewed as more important than the doctor's?" Asking this question upended the way that many health professionals saw the patient journey and led to improvements such as a reduction in overnight stays. My question is: what did this produce?

What did this mean for the healthcare system as a whole? How about the professionals themselves? Are patients healthier because of the more efficient service they received? Who is deriving the benefits of this decision and who is bearing the risk and cost? What do we get from being empathic?

Failure Failings

Failure is among the most problematic of the words used in this article. Like empathy, failure is a commonly used term within popular writing on innovation and design thinking. The critique of this term in the article is less about how the authors use it explicitly than that it is used at all. This may be as much a matter of the data itself (i.e., if your participants speak of it, it is included in the dataset); however, its profile in the article is what is worth noting.

The issue is a framing problem. As the authors report from their research: “Design-thinking approaches call on employees to repeatedly experience failure”. Failure is a binary concept, which is not useful when dealing with complexity — something that Jon Kolko writes about in his article. If much of what we deal with in designing for human systems is about complexity, why are we anchoring our discussion to binary concepts such as ‘success’ and ‘failure’?

Failure exists only when we know what success looks like. If we are really being innovative, reframing the situation, getting to know our users (and discarding our preconceptions about them), how is it that we can fail? I have argued that the only thing we can steadfastly fail at in these conditions is learning. We can fail to build in mechanisms for data gathering, sensemaking, sharing, and reflecting that are associated with learning, but otherwise what we learn is valuable.

Reframing Our Models

The very fact that this article is in the Harvard Business Review suggests much about the intended audiences for this piece. I am sympathetic to the authors, and my critique has focused on the details within the expression of the work, not necessarily the intent or capacity of those who created it. However, choices have consequences, and the outcome of this article is that design thinking is framed in terms of generating business improvements. Those are worthy goals, but not the only ones possible.

One of the reasons concepts like ‘failure’ apply to so much of the business literature is that the outcomes are framed in binary or simple terms. It is about improvement, efficiency, profit, and productivity. Business outcomes might also include customer satisfaction, purchase actions, or brand recognition. All of these benefit the company, not necessarily the customer, client, patient, person, or citizen.

If we were truly tackling human-centred problems, we might approach them differently and ask different questions. Terms like failure actually do apply within the business context, not because they support innovation per se, but because the outcomes are pre-set.

Leadership Roles

Bason and Austin’s research is not without merit for many reasons. Firstly, it is evidence-based. They have done the work by interviewing, synthesizing, commenting on, and publishing the research. That in itself makes it a worthy contribution to the field.

It also provides commentary and insight on some practical areas of design leadership that readers can apply right away by highlighting roles for leaders.

One of these roles is managing the tension between divergent and convergent thought and development processes in design work. This includes managing the insecurities that many design teams may express in dealing with the design process and the volume of disorganized content it can generate.

The exemplary leaders we observed ensured that their design-thinking project teams made the space and time for diverse new ideas to emerge and also maintained an overall sense of direction and purpose. 

Bason & Austin, HBR 2019

Another key role of the design leader is to support future thinking. By encouraging design teams to explore and test their work in the context of what could be, not just what is, leaders reframe the goals of the work and the outcomes in ways that support creativity.

Lastly, a key strength of the piece was the encouragement of multi-media forms of engagement and feedback. The authors chose to illustrate how leaders supported their teams in thinking differently about not only the design process but the products for communicating that process (and resulting products) to each other and the outside world. Too often the work of design is lost in translation because the means of communication have not been designed for the outcomes that are needed — something akin to design-driven evaluation.

Language, Learning, Outcomes

By improving how we talk about what we do, we get better at framing the questions we ask about what we do and what impact it has. Doing the right thing means knowing what the wrong thing is. Without evaluation, we run the risk in Design of doing what Russell Ackoff cautioned against: doing the wrong things righter.

Reading between the lines of the data — the stories and examples — presented in the article by Bason and Austin, a recurring theme is the role of managing fear: fear of 'failure', fear from confusion, fear of not doing good work. Design, if it is anything, is optimistic in that it is about making an effort to solve problems, taking action, and generating something that makes a difference. Design leadership is about supporting that work, bringing it into our organizations, and making it accessible.

That is an outcome worth striving for. While there are missed opportunities here, there is also much to build on and lead from.

Lead Photo by Quino Al on Unsplash

Inset Photo by R Mo on Unsplash

design thinking, evaluation

Design-driven Evaluation

Fun Translates to Impact

A greater push to include evaluation data in decision-making and innovation generates little value if the evaluations themselves have little usefulness in the first place. A design-driven approach to evaluation is the means to transform utilization into both present and future utility.

I admit to being puzzled the first time I heard the term utilization-focused evaluation. What good is an evaluation if it isn't utilized, I thought? Why do an evaluation in the first place if not to have it inform some decisions, even if just to assess how past decisions turned out? Experience has taught me that this happens more often than I ever imagined, and evaluation can be simply an exercise in 'faux' accountability: a checking off of a box to say that something was done.

This is why utilization-focused evaluation (U-FE) is another invaluable contribution to the field of practice by Michael Quinn Patton.

U-FE is an approach to evaluation, not a method. Its central focus is engaging the intended users in the development of the evaluation and ensuring that users are involved in decision-making about the evaluation as it moves forward. It is based on the idea (and research) that an evaluation is far more likely to be used if grounded in the expressed desires of the users and if those users are involved in the evaluation process throughout.

This approach generates a participatory activity chain that can be adapted for different purposes, as we've seen in evaluation approaches and methods such as developmental evaluation, contribution analysis, and principles-focused evaluation.

Beyond Utilization

Design is the craft, production, and thinking associated with creating products, services, systems, or policies that have a purpose. In service of this purpose, designers will explore multiple issues associated with the 'user' and the 'use' of something: what are the needs, wants, and uses of similar products? Good designers go beyond simply asking about these things to measuring, observing, and conducting design research ahead of actually creating something, rather than taking things at face value. They also attempt to see beyond what is right in front of them to possible uses, strategies, and futures.

Design work is both an approach to a problem (a thinking & perceptual difference) and a set of techniques, tools, and strategies.

Utilization can run into problems when we take the present as a template for the future. Steve Jobs didn't ask users for '1000 songs in their pockets', nor was Henry Ford told he needed to invent the automobile rather than give people faster horses (even if the oft-quoted line about this was a fabrication). The impact of their work came from being able to see possibilities and orchestrate what was needed to make those possibilities real.

Utilization of evaluation is about making what is fit better for use by taking into consideration the user’s perspective. A design-driven evaluation looks beyond this to what could be. It also considers how what we create today shapes what decisions and norms come tomorrow.

Designing for Humans

Among the false statements attributed to Henry Ford about people wanting faster horses is a more universal false statement said by innovators and students alike: "I love learning." Many humans love the idea of learning or the promise of learning, but I would argue that very few love learning with the sense of absoluteness that the phrase above conveys. Much of our learning comes from painful, frustrating, prolonged experiences and is sometimes boring, covert, and confusing. It might be delayed in how it manifests itself, with its true effects not felt until long after the 'lesson' is taught. Learning is, however, useful.

A design-driven approach seeks to work with human qualities and design for them. For example, a utilization-focused evaluation approach might yield a process that involves regular gatherings to discuss an evaluation, or reports that use a particular language, style, and layout to convey the findings. These are what the users, in this case, are asking for and what they see as making evaluation findings appealing, and thus they are built into the process.

Except, what if the regular gatherings don’t involve the right people, are difficult to set up and thus ignored, or when those people show up they are distracted with other things to do (because this process adds another layer of activity into a schedule that is already full)? What if the reports that are generated are beautiful, but then sit on a shelf because the organization doesn’t have a track record of actually drawing on reports to inform decisions despite wanting such a beautiful report? (We see this with so many organizations that claim to be ‘evidence-based’ yet use evidence haphazardly, arbitrarily, or don’t actually have the time to review the evidence).

What we get are things created with the best intentions for use, but not based on the actual behaviour of those involved. Asking about this and designing for it is not just an approach; it's a way of doing an evaluation.

Building Design into Evaluation

There are a couple of approaches to introducing design for evaluation. The first is to develop certain design skills — such as design thinking and applied creativity. This work is being done as part of the Design Loft Experience workshop held at the annual American Evaluation Association conference. The second is more substantive and that is about incorporating design methods into the evaluation process from the start.

Design thinking has become popular as a means of expressing aspects of design in ways that have been taken up by evaluators. Design thinking is often characterized by a playful approach to generating new ideas and then prototyping those ideas to find the best fit. Lego, play dough, markers, and sticky notes (as shown above) are some of the tools of the trade. Design thinking can be a powerful way to expand perspectives and generate something new.

Specific techniques, such as those taught at the AEA Design Loft, can provide valuable ways to re-imagine what an evaluation could look like and support design thinking. However, as I've written here, there is a lot of hype, over-selling, and general bullshit being spouted in this realm, so proceed with some caution. Evaluation can help design thinking just as much as design thinking can help evaluation.

What Design-Driven Evaluation Looks Like

A design-driven evaluation takes as its premise a few key things:

  • Holistic. Design-driven evaluation is a holistic approach and extends thinking about utility to everything from the consultation process and engagement strategy to instrumentation, dissemination, and discussions on use. Good design isn't applied to only one part of the evaluation, but to the entire thing, from process to products to presentations.
  • Systems thinking. It also utilizes systems thinking in that it expands the conversation about evaluation use beyond the immediate stakeholders to consider other potential users and their positions within the program's system of influence. Thus, a design-driven evaluation might ask: who else might use or benefit from this evaluation? How do they see the world? What would use mean to them?
  • Outcome and process oriented. Design-driven evaluations are directed toward an outcome (although that may be altered along the way if used in a developmental manner), but designers are agnostic about the route to the outcome. An evaluation must maintain integrity in its methods, but it must also be open to adaptation as needed to ensure that the design is optimal for use. Attending to the process of designing and implementing the evaluation is an important part of this kind of evaluation.
  • Aesthetics matter. This is not about making things pretty; it is about making things attractive. This means creating evaluations that are not ignored. This isn't about gimmicks, tricks, or misrepresenting data; it's considering what will draw and hold attention from the outset in form and function. One of the best ways is to create a meaningful engagement strategy for participants from the outset and to involve people in the process in ways that fit with their preferences, availability, skill set, and desires rather than as tokens or simply as 'role players.' It's about being creative in generating products that fit with what people actually use, not just what they want or think a good evaluation is. This might mean doing a short video or producing a series of blog posts rather than writing a report. Kylie Hutchinson has a great book on innovative reporting for evaluation that can expand your thinking about how to do this.
  • Inform Evaluation with Research. Research is not just meant to support the evaluation, but to guide the evaluation itself. Design research is about looking at what environments, markets, and contexts a product or service is entering. Design-driven evaluation means doing research on the evaluation itself, not just for the evaluation.
  • Future-focused. Design-driven evaluation draws data from social trends and drivers associated with the problem, situation, and organization involved in the evaluation to not only design an evaluation that can work today but one that anticipates use needs and situations to come. Most of what constitutes use for evaluation will happen in the future, not today. By designing the entire process with that in mind, the evaluation can be set up to be used in a future context. Methods of strategic foresight can support this aspect of design research and help strategically plan for how to manage possible challenges and opportunities ahead.

Principles

Design-driven evaluation also works well with principles-focused evaluation. Good design is often grounded in key principles that drive the work. One of the most salient of these is accessibility: making what we do accessible to those who can benefit from it. This extends to considering what it means to create things that are accessible to those with visual, hearing, or cognitive impairments (or, when doing things in physical spaces, making them available to those with mobility issues).

Accessibility is also about making information understandable: avoiding unnecessary jargon, using the appropriate language for each audience, using plain language when possible, and accounting for literacy levels. It's also about designing systems of use for inclusiveness. This means going beyond things like creating an executive summary for a busy CEO (when that over-simplifies certain findings) to designing space within that leader's schedule and work environment to engage with the material in the manner that makes sense for them. This might be a different format of document, a podcast, a short interactive video, or even a walking-meeting presentation.

There are also many principles of graphic design and presentation that can be drawn on (that will be expanded on in future posts). Principles for service design, presentations, and interactive use are all available and widely discussed. What a design-driven evaluation does is consider what these might be and build them into the process. While design-driven evaluation is not necessarily a principles-focused one, they can be and are very close.

This is the first in a series of posts that will be forthcoming on design-driven evaluation. It’s a starting point and far from the end. By taking into account how we create not only our programs but their evaluation from the perspective of a designer we can change the way we think about what utilization means for evaluation and think even more about its overall experience.

public health, strategic foresight

Futuring the Past

Flat Earth to Measles: Did We See That Coming?
In the first month of 2019 the United States saw more measles cases than it did in all of 2010. This disease of the past, once on its way to extinction (or at least deep hibernation), is now a current public health threat, which prompts us to ask: how can our futuring better consider where we came from, not just where we might be headed?

Measles was something that my parents worried about for me and my brothers more than forty years ago. Measles is one of those diseases that causes enormous problems that are both obvious and difficult to see until they manifest themselves down the road. Encephalitis and diarrhea are two possible short-term effects, while a compromised immune system is one of the longer-term effects. It's a horrible condition, one of the most infectious diseases we know of, and also one that was once considered 'eliminated' from the United States, Canada, and most of the Americas (meaning it existed in such small numbers as to not require large-scale monitoring).

In the first month of 2019, more measles cases were tracked than in all of 2010. The causes of this are many, but largely attributable to a change in vaccination rates among the public. The fewer people who get vaccinated, the more likely the disease will find a way to take hold in the population — first among those who aren't protected, but over time this will include some of those who are, because of the 'herd protection' nature of how vaccination works.
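To make the 'herd protection' point concrete, here is a minimal back-of-envelope sketch using the standard herd-immunity threshold from epidemiology (roughly 1 - 1/R0, where R0 is the basic reproduction number). The R0 values below are commonly cited estimates rather than figures from this post, but they show why even a small drop in measles vaccination coverage matters.

```python
# Back-of-envelope sketch (values are commonly cited estimates, not data from this post):
# the classic herd-immunity threshold is H = 1 - 1/R0, where R0 is the average number
# of people one infectious person infects in a fully susceptible population.

def herd_immunity_threshold(r0: float) -> float:
    """Approximate fraction of the population needing immunity to block sustained spread."""
    return 1.0 - 1.0 / r0

estimates = {
    "measles (low-end R0)": 12.0,
    "measles (high-end R0)": 18.0,
    "seasonal influenza": 1.3,
}

for disease, r0 in estimates.items():
    print(f"{disease}: R0 ~ {r0:g} -> roughly {herd_immunity_threshold(r0):.0%} coverage needed")
```

With measles sitting at roughly 92 to 94 percent required coverage, it takes only a modest decline in vaccination for the disease to find a foothold again.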

Did We See That Coming?

Measles hasn’t featured prominently in any of the foresight models of the health system that I’ve seen over the course of my career. Then again, twenty-five years ago, it would have been unlikely that any foresight model of urban planning would have emphasized scooters or bicycles — old technologies — over the automobile as modes of transportation likely to shape our cities. Yet, here we are.

Today, those interested in the future of transportation are focused on autonomous cars, yet there is some speculation that the car — or at least the one we know now — will disappear altogether. Manufacturers like Ford — the company that invented the mass-market automobile — have already decided to abandon most of their passenger-car production in the next few years.

The hottest TV show (or rather, streamed media production) among those under the age of 20? Friends (circa 1994).

Are you seeing a trend here?

What we are seeing is a resurgence of the past in pockets all throughout our society. The implications of this are many for those who develop or rely on futurist-oriented models to shape their work.

One might argue that a good model of the future always assumes this, and therefore it isn't a flaw of the model but rather, as William Gibson suggested, that the future is here, it's just not evenly distributed yet. The Three Horizons framework popularized by McKinsey has this assumption built into it from the beginning. But it's not just the model that might be problematic; it's also the thinking behind it.

Self-Fulfilling Futures

Foresight is useful for a number of things, but I would argue very little of that benefit is what many futurists claim. The argument for investing in foresight is that, by thinking about what the future could bring, we can better prepare ourselves for that reality in our organizations. This might mean identifying different product lines, keeping an eye out for trends that match our predictions, improving our innovation systems, and improving "the impact of decision-making".

Why is this the case? The answer — as I've been told by foresight and futurist colleagues — is that by seeing what is coming we can prepare for it, much like a weather forecast allows us to dress appropriately for the day to account for the possibility of rain or snow.

The critique I have of this line of thinking is this: do we ever go back and see where our models fit and didn't fit? Are foresight models open to evaluation? I would argue: no. There is no systematic evaluation of foresight initiatives. This is not to suggest that evaluation needs to concern itself with whether a model gets everything right — that the future turns out just as we anticipated — but whether it was actually useful.

Did we make a better decision because we saw a possible future? Did we restructure our organization to achieve something that would have been impossible had we not had strategic foresight to guide us? These are the claims, and yet we do not have evidence to support them. Such little evaluation of these models has left us clinging to myths and without critical reflection on what use these models have (and has wasted an opportunity to consider what use they could have).

Yes, Royal Dutch Shell's ability to envision problems with the global oil supply chain in the late 1960s and early 70s through adopting a foresight approach gave it a step up on its competitors. But how many other cases of this nature are there? Where is the evidence that this approach does what its proponents claim it does? With foresight being adopted across industries, we should have many examples of its impact, but we do not.

Layering Influence and Impact

Let's bring it back to public health. There is enormous evidence pointing to the role of tobacco use in the lifetime prevalence of a litany of health problems like cancer and cardiovascular disease, yet there are still millions who use tobacco daily. Lack of retirement savings is a clear pathway to significant problems for health, wellbeing, and lifestyle down the road. The effects of human behaviour on the environment and our health have been known for decades (or millennia, depending on your perspective), to the point where we now refer to this stage of planetary evolution as the Anthropocene (the age in which humans shape the planet).

We can see things coming in various degrees of focus, and yet the influence on our behaviour is not certain. Indeed, the anticipation of future consequences is only one element of a large array of factors that influence our behaviour. Psychologists, the group that studies and supports the evidence for behaviour change, have shown that we are actually pretty bad at predicting what will happen, how we will react to something, and what will influence change.

Many of these factors are systemic — that is, tied to the systems we are a part of: our team, family, organization, community, and society, and time — the various spheres outlined in Bronfenbrenner's Social-Ecological Model. This model outlines the various 'rings' or spheres that influence us, including time (which encompasses them all). It's this last ring that we often forget. The model is useful because it showcases layers of impact and influence, including from our past.

Decision Making in the Past

By anchoring ourselves to the future and not considering our past, our models for prediction, forecasting, and foresight are limited. We are equally limited when we use the same form of thinking (about the future) to make our models about the past. In this case, I think of Andrew Yang, who recently spoke on the Freakonomics podcast and pointed out how our economic thinking is rooted in past models that we would never accept today. He's wrong — sort of. We do accept this, and it is alive and well in many of our ways of thinking about the future.

In speaking about how we’ve been through economic patterns of disruption, he points out that we are using an old fact pattern to inform what we do now as if the economy — which we invented just a few hundred years ago — has these immutable laws.


The fantasists — and they are so lazy and it makes me so angry, because people who are otherwise educated literally wave their hands and are like, “Industrial Revolution, 120 years ago. Been through it before,” and, man, if someone came into your office and pitched you an investment in a company based on a fact pattern from 120 years ago, you’d freakin’ throw them out of your office so fast.

Andrew Yang, speaking on Freakonomics

Foresight would benefit from the same kind of critical examination of itself as Yang applies to the economy and our ways of thinking about it. That critical examination includes using real evidence to make decisions where we have it and, where we don't have it, establishing it.

Maybe then we might anticipate that measles is not gone. Let's keep an eye out for polio, too. And as for a flat earth? Don't sail too far into the sunset; you might fall off if we don't factor that into our models of the future.

Image Credit: “Flat Earth | Conspiracy Theory VOL.1” by Daniel Beintner is licensed under CC BY-NC-ND 4.0. To view a copy of this license, visit: https://creativecommons.org/licenses/by-nc-nd/4.0

behaviour change, business, design thinking

How do we sit with time?


Organizational transformation efforts from culture change to developmental evaluation all depend on one ingredient that is rarely discussed: time. How do we sit with this and avoid the trap of aspiring for greatness while failing to give it the time necessary to make change a reality? 

Toolkits are a big hit with those looking to create change. In my years of work with organizations large and small supporting behaviour change, innovation, and community development, few terms light up people's faces more than "toolkit". Usually, that term is mentioned by someone other than me, but that doesn't stop the palpable excitement at the prospect of having a set of tools that will solve a complex problem.

Toolkits work with simple problems. A hammer works well with nails. Drills are good at making holes. With enough tools and some expertise, you can build a house. Organizational development or social change is a complex challenge where tools don't have the same linear effect. A tool — a facilitation technique, an assessment instrument, a visualization method — can support change-making, but the application and potential outcome of these tools will always be contextual.

Tools and time

My experience has been that people will go to great lengths to acquire tools yet put comparatively little effort into using them. A body of psychological research has shown there are differences between goals, the implementation intentions behind them, and the actual achievement of those goals. In other words: desiring change, planning and intending to make a change, and actually doing something are different things.

Tools are proxies for this issue in many ways: having tools doesn't mean they get used or that they actually produce change. Anyone in the fitness industry knows that the numbers of those who try a workout, those who buy a membership to a club, and those who regularly show up to work out are quite different.

Or consider the Japanese term Tsundoku, which loosely translates into the act of acquiring reading materials and letting them pile up in one’s home without reading them.

But tools are stand-ins for something far more important and powerful: time.

The pursuit of tools and their use is often hampered because organizations do not invest in the time to learn, appropriately apply, refine, and sense-make the products that come through these tools.

A (false) artifact of progress


Consider the book buying or borrowing example above: we calculate the cost of the book when really we ought to price out the time required to read it. Or, in the case of practical non-fiction, the cost to read it and apply the lessons from it.

Yet, consider a shelf filled with books before you, providing the appearance of possessing the knowledge contained within, without any evidence that its contents have been read. This is the same issue with tools: once acquired, it's easy to assume the work is largely done. I've seen this firsthand with people doing what the Buddhist phrase decries:

“Do not confuse the finger pointing to the moon for the moon itself”

It’s the same confusion we see between having data or models and the reality they represent.

These things all represent artifacts of progress and a false equation. More books or data or better models do not equal more knowledge. But showing that you have more of something tangible is a seductive proxy. Time has no proxy; that’s the biggest problem.

Time just disappears, is spent, is used, or whatever metaphor you choose to express it. Time is about Kairos or Chronos, the moments themselves or the sequence of moments, but in either case they bear no clear markers.

Creating time markers

There are some simple tricks to create the same accumulation effect in time-focused work — tools often used to support developmental evaluation and design. Innovation is as much about the process as it is the outcome when it comes to marking effort. The temptation is to focus on the products — the innovations themselves — and lose what was generated to get there. Here are some ways to change that.

  1. Timelines. Creating live (regular) records of the key activities being undertaken and connecting them together in a timeline is one way to show the journey from idea to innovation. It also provides a sober reminder of the effort and time required to go through the various design cycles toward generating a viable prototype.
  2. Evolutionary Staging. Document the prototypes created through photographs, video, or even showcasing versions (in the case of a service or policy where the visual element isn’t as prominent). This is akin to the March of Progress image used to show human evolution. By capturing these things and noting the time and timing of what is generated, you create an artifact that shows the time that was invested and what was produced from that investment. It’s a way to honour the effort put toward innovation.
  3. Quotas & Time Targets. I’m usually reluctant to prescribe a specific amount of time one should spend on reflection and innovation-related sensemaking, but it’s evident from the literature that goals, targets, and quotas work as effective motivators for some people. If you generate a realistic set of targets for thoughtful work, this can be something to aspire to and use to drive activity. By tracking the time invested in sensemaking, reflection, and design you better can account for what was done, but also create the marker that you can point to that makes time seem more tangible.

These are three ways to make time visible, although it's important to remember that the purpose isn't just to accumulate time but to actually sit with it.
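As a purely illustrative aside (not from the original post), here is a minimal sketch of what that kind of time tracking could look like in practice; the activity names, dates, and hours are hypothetical placeholders, and the point is simply that logging and totalling the time creates the visible marker described above.

```python
from collections import defaultdict
from datetime import date

# Purely illustrative sketch: log reflective and design activities as they
# happen, then total the hours so the time invested becomes a visible marker.
# Activity names, dates, and hours below are hypothetical placeholders.

log = []  # each entry: (date, activity type, hours, note)

def record(when: date, activity: str, hours: float, note: str = "") -> None:
    log.append((when, activity, hours, note))

record(date(2019, 3, 4), "sensemaking", 1.5, "debrief on prototype feedback")
record(date(2019, 3, 11), "reflection", 1.0, "team retrospective")
record(date(2019, 3, 18), "design", 2.0, "sketching service blueprint v3")

totals = defaultdict(float)
for when, activity, hours, note in log:
    totals[activity] += hours

for activity, hours in sorted(totals.items()):
    print(f"{activity}: {hours:.1f} h invested this period")
```

Even something this simple, whether in a spreadsheet or a calendar, turns time spent on learning into an artifact that can be pointed to, rather than something that quietly disappears.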

All the tricks and tools won't bring the benefit of what time can offer unless an organization is willing to invest in it, mindfully. Except, perhaps, a clock.

Try these out with some simple tasks. Another approach is to treat time like any other resource: budget it. Set aside the time in a calendar by booking key reflective activities in just as you would anything else. Doing this, and keeping to it, requires leadership and the organizational supports necessary to ensure that learning can take place. Consider what is keeping you from taking or making the time to learn, share those thoughts with your peers, and then consider how you might re-design what you do and how you do it to support that learning.

Take time for that, and you’re on your way to something better.

 

If you're interested in learning more about how to do this practically, using data, and designing the conditions to support innovation, contact me. This is the kind of stuff that I do.

design thinking, psychology, research

Elevating Design & Design Thinking

 

Design thinking has brought the language of design into popular discourse across different fields, but its failings threaten to undermine the benefits it brings if they aren't addressed. In this third post in a series, we look at how Design (and Design Thinking) can elevate themselves above those failings and match the hype with real impact.

In two previous posts, I called on 'design thinkers' to get the practice out of its 'bullshit' phase, characterized by high levels of enthusiastic banter, hype, and promotion and little evidence, evaluation, or systematic practice.

Despite the criticism, it's time for Design Thinking (and the field of Design more broadly) to be elevated beyond its current station. I've been critical of Design Thinking for years: its popularity has been helpful in some ways, problematic in others. Others have been critical, too. Bill Storage, writing in 2012 (in a post now unavailable), said:

Design Thinking is hopelessly contaminated. There’s too much sleaze in the field. Let’s bury it and get back to basics like good design.

Bruce Nussbaum, who helped popularize Design Thinking in the early 2000s, called it a 'failed experiment' and sought to promote the concept of Creative Intelligence instead. While many have called for Design Thinking to die, that's not going to happen anytime soon. Since I first published a piece on Design Thinking's problems five years ago, the practice has only grown. Design Thinking is going to continue to grow despite its failings, and that's why it matters that we pay attention to it — and seek to make it better.

Lack of quality control, standardization or documentation of methods, and evidence of impact are among the biggest problems facing Design Thinking if it is to achieve anything substantive beyond generating money for those promoting it.

Giving design away, better

It’s hard to imagine that the concepts of personality, psychosis, motivation, and performance measurement from psychology were once unknown to most people. Yet, before the 1980’s, much of the public’s understanding of psychology was confined to largely distorted beliefs about Freudian psychoanalysis, mental illness, and rat mazes. Psychology is now firmly ensconced in business, education, marketing, public policy, and many other professions and fields. Daniel Kahneman, a psychologist, won the Nobel Prize in Economics in 2002 for his work applying psychological and cognitive science to economic decision making.

The reason for this has much to do with George Miller who, as President of the American Psychological Association, used his position to advocate that professional psychology ‘give away’ its knowledge to ensure its benefits were more widespread. This included creating better means of communicating psychological concepts to non-psychologists and generating the kind of evidence that could show its benefits.

Design Thinking is at a stage where we are seeing similar broad adoption beyond professional design into these same fields of business, education, the military, and beyond. While there has been much debate about whether design thinking as practiced by non-designers (like MBAs) is good for the field as a whole, there is little debate that it has become popular, just as psychology did.

What psychology did poorly is that it gave so much away that it failed to engage other disciplines enough to support quality adoption and promotion and, simultaneously, managed to weaken itself as newfound enthusiasts pursued training in these other disciplines. Now, some of the best psychological practice is done by social workers and the most relevant research comes from areas like organizational science and new ‘sub-disciplines’ like behavioural economics, for example.

Design Thinking is already being taught, promoted, and practiced by non-designers. What these non-designers often lack is the ‘crit’ and craft of design to elevate their designs. And what Design lacks is the evaluation, evidence, and transparency to elevate its work beyond itself.

So what next?


Elevating Design

As Design moves beyond its traditional realms of products and structures to services and systems (enabled partly by Design Thinking’s popularity) the implications are enormous — as are the dangers. Poorly thought-through designs have the potential to exacerbate problems rather than solve them.

Charles Eames knew this. He argued that innovation (which is what design is all about) should be a last resort and that it is the quality of the connections (ideas, people, disciplines and more) we create that determine what we produce and their impact on the world. Eames and his wife Ray deserve credit for contributing to the elevation of the practice of design through their myriad creations and their steadfast documentation of their work. The Eames’ did not allow themselves to be confined by labels such as product designer, interior designer, or artist. They stretched their profession by applying craft, learning with others, and practicing what they preached in terms of interdisciplinarity.

It's now time for another elevation moment. Designers can no longer be satisfied with client approval as the key criterion for success. Sustainability, social impact, and learning and adaptation through behaviour change are now criteria that many designers will need to embrace if they are to operate beyond the field's traditional domains (as we are now seeing more often). This requires that designers know how to evaluate and study their work. They need to communicate with their clients better on these issues, and they must make what they do more transparent. In short: designers need to give away design (and not just through a weekend design thinking seminar).

Not every designer must get a Ph.D. in behavioural science, but they will need to know something about that domain if they are to work on matters of social and service design, for example. Designers don’t have to become professional evaluators, but they will need to know how to document and measure what they do and what impact it has on those touched by their designs. Understanding research — that includes a basic understanding of statistics, quantitative and qualitative methods — is another area that requires shoring up.

Designers don’t need to become researchers, but they must have research or evaluation literacy. Just as it is becoming increasingly unacceptable that program designers from fields like public policy and administration, public health, social services, and medicine lack understanding of design principles, so is it no longer feasible for designers to be ignorant of proper research methods.

It’s not impossible. Clinical psychologists went from being mostly practitioners to scientist-practitioners. Professional social workers are now well-versed in research even if they typically focus on policy and practice. Elevating the field of Design means accepting that being an effective professional requires certain skills and research and evaluation are now part of that skill set.


Designing for elevated design

This doesn’t have to fall on designers to take up research — it can come from the very people who are attracted to Design Thinking. Psychologists, physicians, and organizational scientists (among others) all can provide the means to support designers in building their literacy in this area.

Adding research courses that go beyond ethnography and observation, giving design students exposure to survey methods, secondary data analysis, 'big data', Grounded Theory approaches, and blended models for data collection, is one option. Bring behavioural and data scientists into the curriculum (and get designers into the curricula training those professionals).

Create opportunities for designers to do research, publish, and present their research using the same ‘crit’ that they bring to their designs. Just as behavioural scientists expose themselves to peer review of their research, designers can do the same with their research. This is a golden opportunity for an exchange of ideas and skills between the design community and those in the program evaluation and research domains.

This last point is what the Design Loft initiative has sought to do. Now in its second year, the Design Loft is a training program aimed at exposing professional evaluators to design methods and tools. It’s not to train them as designers, but to increase their literacy and confidence in engaging with Design. The Design Loft can do the same thing with designers, training them in the methods and tools of evaluation. It’s but one example.

In an age when interdisciplinarity is spoken of frequently, this provides a practical means to do it, and in a way that offers a chance to elevate design much as the Eames did, as Milton Glaser did, and as George Miller did for psychology. The time is now.

If you are interested in learning more about the Design Loft initiative, connect with Cense. If you're a professional evaluator attending the 2017 American Evaluation Association conference in Washington, the Design Loft will be held on Friday, November 10th.

Image Credit: Author

design thinking, evaluation, innovation

Beyond Bullshit for Design Thinking


Design thinking is in its 'bullshit' phase, a time characterized by wild hype and popularity and little evidence of what it does, how it does it, or whether it can possibly deliver what it promises on a consistent basis. If design thinking is to be more than a fad, it needs to get serious about answering some important questions and go from bullshit to bullish in tackling important innovation problems. The time is now.

In a previous article, I described design thinking as being in its BS phase and argued that it was time for it to move on. Here, I articulate some things that can help get us there.

The title of that original piece was inspired by a recent talk by Pentagram partner Natasha Jen, in which she called out design thinking as "bullshit." Design thinking offers much to those who haven't been given, or taken, creative license in their work before. It has offered organizations that never saw themselves as 'innovative' a means to generate products and services that extend beyond the bounds of what they thought was possible. While design thinking has inspired people worldwide (as evidenced by the thousands of resources, websites, meetups, courses, and discussions devoted to the topic), the extent of its impact is largely unknown, overstated, and most certainly oversold as it has become a marketable commodity.

The comments and reaction to my related post on LinkedIn from designers around the world suggest that many agree with me.

So now what? Design thinking, like many fads and technologies that fit the hype cycle, is beset with a problem of inflated expectations driven by optimism and the market forces that bring a lot of poorly-conceived, untested products supported by ill-prepared and sometimes unscrupulous actors into the marketplace. To invoke Natasha Jen: there’s a lot of bullshit out there.

But there is also promising stuff. How do we nurture the positive benefits of this overall approach to problem finding, framing, and solving, while fixing the deficiencies, misconceptions, and mistakes to make it better?

Let’s look at a few things that have the potential to transform design thinking from an over-hyped trend to something that brings demonstrable value to enterprises.

Show the work


The journey from science to design is a lesson in culture shock. Science typically begins its approach to problem-solving by looking at what has been done before, whereas a designer typically starts with what they know about materials and craft. Thus, an industrial designer may never have made a coffee mug before, but they know how to build things that meet clients’ desires within a set of constraints and so feel comfortable taking on the job. This wouldn’t happen in science.

Design typically uses a simple criterion above all others to judge the outcomes of its work: is the client satisfied? So long as the time, budget, and other requirements are met, the key is ensuring that the client likes the product. Because this criterion is so heavily weighted toward the outcome, designers often have little need to capture or share how they arrived at it, only that they did. Designers may also be reluctant to share their process because it is their competitive advantage, so an industry-specific culture has formed that keeps people from opening their process to scrutiny.

Science requires that researchers open up their methods, tools, observations, and analytical strategy for others to view. The entire system of peer review, which has its own set of flaws, is predicated on the notion that other qualified professionals can see how a solution was derived and comment on it. Scientific peer review is typically geared toward encouraging replication; however, it also allows others to assess the reasonableness of the claims. This is the critical part of peer review that requires scientists to adhere to a certain set of standards and show their work.

As design moves into a more social realm, designing systems, services, and policies for populations with no single ‘client’ and many diverse users, the need to show the work becomes imperative. Showing the work also allows others to build on the method. For example, design thinking speaks of ‘prototyping’, yet without a clear sense of what is prototyped, how it is prototyped, how the value of the prototype is assessed, and what options were considered (or discarded) in developing it, it is impossible to tell whether this was really the best idea of many or simply the one deemed most feasible to try.

This might not matter for a coffee cup, but it matters a lot if you are designing a social housing plan, a transportation system, or a health service. Designers can borrow from scientists and become better at documenting what they do along the way, what ideas are generated (and dismissed), how decisions are made, and what creative avenues are explored along the route to a particular design choice. This not only improves accountability but increases the likelihood of better input and ‘crit’ from peers. This absence of ‘crit’ in design thinking is among the biggest ‘bullshit’ issues that Natasha Jen spoke of.

Articulate the skillset and toolset


What does it take to do ‘design thinking’? The caricature is that of Post-it Notes, Lego, and whiteboards. These are valuable tools, but so are markers, paper, computer modeling software, communication tools like Slack or Trello, cameras, stickers… just about anything that allows data, ideas, and insights to be captured, organized, visualized, and transformed.

Using these tools also takes skill (despite how simple they are).

Facilitation is a key design skill when working with people and human-focused programs and services. So is conflict resolution. The ability to negotiate, discuss, sense-make, and reflect within the context of a group, a deadline, and other constraints is critical for bringing a design to life. These skills are not just for designers, but they have to reside within a design team.

There are other skills related to shaping aesthetics, manufacturing, service design, communication, and visual representation that can all contribute to a great design team, and these need to be articulated as part of a design thinking process. Many ‘design thinkers’ will point to the ABC Nightline segment titled “The Deep Dive”, which aired in 1999, as their first exposure to ‘design thinking’. It also thrust the design firm IDEO, which more than any single organization is credited with popularizing design thinking through its work, into the spotlight.

What gets forgotten when people look at this program, in which designers created a shopping cart in just a few days, is that IDEO brought together a highly skilled interdisciplinary team that included engineers, business analysts, and a psychologist. Much of the design thinking advocacy work out there talks about ‘diversity’, but diversity matters only when you have not just a variety of perspectives but also the technical and scholarly expertise to make use of them. How often are design teams taking on human service programs aimed at changing behaviour without any behavioural scientists involved? How often are products created with no care for aesthetics because there wasn’t a graphic designer or artist on the team?

Does this matter if you’re using design thinking to shape the company holiday party? Probably not. Does it if you are shaping how to deliver healthcare to an underserved community? Yes.

Design thinking can require both general and specific skillsets and toolsets, and these are not generic.

Develop theory


A theory is not just the province of eggheaded nerds or something you had to endure in your college courses on social science. It matters when it’s done well. Why? As Kurt Lewin, one of the most influential applied social psychologists of the 20th century, said: “There is nothing so practical as a good theory.”

A theory allows you to explain why something happens, how causal connections may form, and what the implications of specific actions are in the world. Theories are ideas, often grounded in evidence and other theories, about how things work. Good theories can guide what we do and help us focus on what we need to pay attention to. They can be wrong or incomplete, but when done well a theory provides us the means to explain what happens and what can happen. Without one, we are left trying to explain the outcomes of our actions with little recourse for repeating, correcting, or redesigning what we do, because we have no idea why something happened. Rarely, in human systems, is evidence for cause-and-effect so clear cut without some theorizing.

Design thinking is not entirely without theory. Some scholars have pulled together evidence and theory to articulate ways to generate ideas and decision rules for focusing attention, and there are some well-documented examples for guiding prototype development. However, design thinking itself, like much of design, is not strong on theory. There isn’t a strong theoretical basis for ascertaining why a particular social process, tool, or approach produces an effect. As such, it’s hard to replicate results, determine where something succeeded, or identify where improvements need to be made.

It’s also hard to explain why design thinking should be any better than anything else that aims to enkindle innovation. By developing theory, designers and design thinkers will be better equipped to advance its practice and guide the focus of evaluation. Further, it will help explain what design thinking does, can do, and why it might be suited (or ill-suited) to a particular problem set.

It also helps guide the development of research and evaluation scholarship that will build the evidence for design thinking.

Create and use evidence


Jeanne Liedtka and her colleagues at the Darden School of Business have been among the few to conduct systematic research into the use of design thinking and its impact. The early research suggests it offers benefits to companies and non-profits seeking to innovate. This is a start, but far more research by more groups is needed if we are to build a real corpus of knowledge to shape practice more fully. Liedtka’s work is setting the pace for where we can go, and design thinkers owe her much thanks for getting things moving. It’s time for designers, researchers, and their clients to join her.

Research typically begins with ‘ideal’ cases so that sufficient control, influence, and explanatory power become possible. If programs are ill-defined, poorly resourced, focused on complex or dynamic problems, have no clear timeline for delivery or expected outcomes, and lack the resources or leadership to document the work that is done, it is difficult, if not impossible, to tell what role design thinking plays amid myriad factors.

An increasing amount of design thinking, in education, international development, social innovation, and public policy, to name a few domains of practice, is applied in exactly this kind of context. This is the messy area of life where research aimed at finding linear cause-and-effect relationships and ‘proof’ falters, yet it’s also where the need for evidence is greatest. Researchers tend to avoid these contexts because the results are rarely clear, the study designs demand much energy, money, talent, and sophistication, and the ability to publish findings in top-tier journals is all the more compromised as a result.

Despite this, there is enormous potential for qualitative, quantitative, mixed-method, and even simulation research into design thinking that isn’t being conducted. This is partly because designers aren’t trained in these methods, but also because (I suspect) there is a reluctance among many to open design thinking up to scrutiny. Like anything on the hype cycle, design thinking is a victim of over-inflated claims about what it does, but that doesn’t necessarily mean it isn’t offering a lot.

Design schools need to start training students in research methods beyond (in my opinion) the weak, simplistic approaches to ethnographic methods, surveys, and interviews that are currently on offer. If design thinking is to be taken seriously, it requires serious methodological training. Further, designers don’t need to be the most skilled researchers on the team: that’s what behavioural scientists bring. Bringing in the kind of expertise required to do the necessary work is important if design thinking is to grow beyond its ‘bullshit’ phase.

Evaluate impact


From Just Design by Christopher Simmons

Lastly, if we are going to claim that design is going to change the world, we need to back that up with evaluation data. Chances are decent that design thinking is changing the world, but maybe not in the ways we always think or hope, or in the quantity or quality we expect. Without evaluation, we simply don’t know.

Evaluation is about understanding how something operates in the world and what its impact is. Evaluators help articulate the value that something brings and can support innovators (design thinkers?) in making strategic decisions about what to do, when to do it, and how to allocate resources.

The only time evaluation came up in my professional design training was when I mentioned it in class. That’s it. Few design programs of any discipline offer exposure to the methods and approaches of evaluation, which is unfortunate. Until last year, professional evaluators weren’t much better, with most having limited exposure to design and design thinking.

That changed with the development of the Design Loft initiative, which is now in its second year. The Design Loft was a pop-up conference designed and delivered by me (Cameron Norman) and co-developed with John Gargani, then President of the American Evaluation Association. The event provided a series of short-burst workshops on select design methods and tools as a means of orienting evaluators to design and how they might apply it to their work.

This is part of a larger effort to bring design and evaluation closer together. Design and design thinking offer enormous potential for creating innovations, and evaluation brings the tools to assess what kind of impact those innovations have.

Getting bullish on design

I’ve witnessed firsthand how design (and the design thinking approach) has inspired people who didn’t think of themselves as creative, innovative, or change-makers to do things that brought joy to their work. Design thinking can be transformative for those who are exposed to new ways of seeing problems, conceptualizing solutions, and building something. I’d hate to see that passion disappear.

That will happen once design thinking starts losing out to the next fad. Remember the lean methodology? How about Agile? Maybe the design sprint? These are distinct approaches, but they share much in common with design thinking; depending on who you talk to, they might even be the same thing. Blackbelts, unconferences, design jams, innovation labs, and beyond are all part of the hodgepodge of offerings competing for the attention of companies, governments, healthcare organizations, and non-profits seeking to innovate.

What matters most is adding value. Whether that comes through ‘design thinking’ or something else, what matters is that design, the creation of products, services, policies, and experiences that people value, is part of the innovation equation. It’s why I prefer the term ‘design thinking’ to the others operating in the innovation development space: it acknowledges the practice of design in its name.

Designers can rightfully claim ‘design thinking’, broadly defined, as a concept that is central to their work, though far from the whole of it. By working with the very groups that have taken the idea of design and applied it to business, education, and so many other sectors, those with a stake in better design, and in better thinking about what we design, can take design thinking beyond its bullshit phase and make it bullish about innovation.

For those interested in evaluation and design, check out the 2017 Design Loft micro-conference taking place on Friday, November 10th within the American Evaluation Association’s annual convention in Washington, DC. Look for additional events, training, and support for design thinking, evaluation, and strategy by following @CenseLtd on Twitter for updates about the Design Loft and by visiting Cense online.

Image credits: Author. The ‘Design Will Save The World’ images were taken from the pages of Christopher Simmons’ book Just Design.