
design thinking, evaluation, innovation

Beyond Bullshit for Design Thinking


Design thinking is in its ‘bullshit’ phase: a time characterized by wild hype and popularity, with little evidence of what it does, how it does it, or whether it can deliver what it promises on a consistent basis. If design thinking is to be more than a fad, it needs to get serious about answering some important questions, going from bullshit to bullish in tackling important innovation problems. The time is now.

In a previous article, I described design thinking as being in its BS phase and argued that it was time for it to move on. Here, I articulate some of the things that can help get us there.

The title of that original piece was inspired by a recent talk by Pentagram partner Natasha Jen, in which she called out design thinking as “bullshit.” Design thinking offers much to those who haven’t been given, or taken, creative license in their work before. It has offered organizations that never saw themselves as ‘innovative’ a means to generate products and services that extend beyond the bounds of what they thought possible. But while design thinking has inspired people worldwide (as evidenced by the thousands of resources, websites, meetups, courses, and discussions devoted to the topic), the extent of its impact is largely unknown, overstated, and most certainly oversold as it has become a marketable commodity.

The comments and reaction to my related post on LinkedIn from designers around the world suggest that many agree with me.

So now what? Design thinking, like many fads and technologies that fit the hype cycle, is beset with a problem of inflated expectations, driven by optimism and the market forces that bring a lot of poorly conceived, untested products, supported by ill-prepared and sometimes unscrupulous actors, into the marketplace. To invoke Natasha Jen: there’s a lot of bullshit out there.

But there is also promising stuff. How do we nurture the positive benefits of this overall approach to problem finding, framing, and solving, and fix the deficiencies, misconceptions, and mistakes to make it better?

Let’s look at a few things that have the potential to transform design thinking from an over-hyped trend to something that brings demonstrable value to enterprises.

Show the work


The journey from science to design is a lesson in culture shock. Science typically begins its journey toward problem-solving by looking at what has been done before, whereas a designer typically starts with what they know about materials and craft. An industrial designer may never have made a coffee mug before, but they know how to build things that meet clients’ desires within a set of constraints, and so feel comfortable undertaking the job. This wouldn’t happen in science.

Design typically uses a simple criterion above all others to judge the outcomes of its work: is the client satisfied? So long as the time, budget, and other requirements are met, the key is ensuring that the client likes the product. Because this criterion is so heavily weighted on the outcome, designers often have little need to capture or share how they arrived at that outcome, just that they did. Designers may also be reluctant to share their process because it is their competitive advantage, so an industry-specific culture has formed that discourages opening the process to scrutiny.

Science requires that researchers open up their methods, tools, observations, and analytical strategies for others to view. The entire notion of peer review — which has its own set of flaws — is predicated on the idea that other qualified professionals can see how a solution was derived and comment on it. Scientific peer review is typically geared toward encouraging replication; however, it also allows others to assess the reasonableness of the claims. This is the critical part of peer review that requires scientists to adhere to a certain set of standards and show their work.

As design moves into a more social realm, designing systems, services, and policies for populations where there is no single ‘client’ and there are many diverse users, the need to show the work becomes imperative. Showing the work also allows others to build on the method. For example, design thinking speaks of ‘prototyping’, yet without a clear sense of what is prototyped, how it is prototyped, what the means of assessing the prototype’s value are, and what options were considered (or discarded) in developing it, it is impossible to tell whether a prototype was really the best idea of many or simply the one deemed most feasible to try.

This might not matter for a coffee cup, but it matters a lot if you are designing a social housing plan, a transportation system, or a health service. Designers can borrow from scientists and become better at documenting what they do along the way: what ideas are generated (and dismissed), how decisions are made, and what creative avenues are explored en route to a particular design choice. This not only improves accountability but increases the likelihood of better input and ‘crit’ from peers. The absence of ‘crit’ in design thinking is among the biggest ‘bullshit’ issues that Natasha Jen spoke of.

Articulate the skillset and toolset


What does it take to do ‘design thinking’? The caricature is that of Post-it Notes, Lego, and whiteboards. These are valuable tools, but so are markers, paper, computer modeling software, communication tools like Slack or Trello, cameras, stickers… just about anything that allows data, ideas, and insights to be captured, organized, visualized, and transformed.

Using these tools also takes skill (despite how simple they are).

Facilitation is a key design skill when working with people and human-focused programs and services. So is conflict resolution. The ability to negotiate, discuss, sense-make, and reflect within the context of a group, a deadline, and other constraints is critical for bringing a design to life. These skills are not just for designers, but they have to reside within a design team.

There are other skills, related to shaping aesthetics, manufacturing, service design, communication, and visual representation, that can all contribute to a great design team, and these need to be articulated as part of a design thinking process. Many ‘design thinkers’ will point to the ABC Nightline segment that aired in 1999, “The Deep Dive,” as their first exposure to ‘design thinking’. It is also what thrust the design firm IDEO into the spotlight, the firm that, more than any other single organization, is credited with popularizing design thinking through its work.

What gets forgotten when people look at this program, in which designers created a shopping cart in just a few days, is that IDEO brought together a highly skilled interdisciplinary team that included engineers, business analysts, and a psychologist. Much of the design thinking advocacy work out there talks about ‘diversity’, but diversity matters only when you have not just a diversity of perspectives but also the technical and scholarly expertise to make use of those perspectives. How often are design teams taking on human service programs aimed at changing behaviour without any behavioural scientists involved? How often are products created without any care for their aesthetics because there wasn’t a graphic designer or artist on the team?

Does this matter if you’re using design thinking to shape the company holiday party? Probably not. Does it if you are shaping how to deliver healthcare to an underserved community? Yes.

Design thinking can require both general and specific skillsets and toolsets, and these cannot be treated as generic.

Develop theory


A theory is not just the province of eggheaded nerds, or something you had to endure in your college courses on social science. It matters when it’s done well. Why? As Kurt Lewin, one of the most influential applied social psychologists of the 20th century, said: “There is nothing so practical as a good theory.”

A theory allows you to explain why something happens, how causal connections may form, and what the implications of specific actions are in the world. Theories are ideas, often grounded in evidence and other theories, about how things work. Good theories can guide what we do and help us focus on what we need to pay attention to. They can be wrong or incomplete, but when done well a theory provides the means to explain what happens and what can happen. Without one, we are left trying to explain the outcomes of actions with little recourse for repeating, correcting, or redesigning what we do, because we have no idea why something happened. Rarely — in human systems — is evidence for cause-and-effect so clear cut without some theorizing.

Design thinking is not entirely without theory. Some scholars have pulled together evidence and theory to articulate ways to generate ideas and decision rules for focusing attention, and there are some well-documented examples of guiding prototype development. However, design thinking itself — like much of design — is not strong on theory. There isn’t a strong theoretical basis for ascertaining why something produces an effect through a particular social process, tool, or approach. As such, it’s hard to replicate such things, or to determine where something succeeded and where improvements need to be made.

It’s also hard to explain why design thinking should be any better than anything else that aims to enkindle innovation. By developing theory, designers and design thinkers will be better equipped to advance the practice and guide the focus of evaluation. Further, theory will help explain what design thinking does, what it can do, and why it might be suited (or ill-suited) to a particular problem set.

It also helps guide the development of research and evaluation scholarship that will build the evidence for design thinking.

Create and use evidence


Jeanne Liedtka and her colleagues at the Darden School of Business have been among the few to conduct systematic research into the use of design thinking and its impact. The early research suggests it offers benefits to companies and non-profits seeking to innovate. This is a start, but far more research by more groups is needed if we are to build a real corpus of knowledge to shape practice more fully. Liedtka’s work is setting the pace for where we can go, and design thinkers owe her much thanks for getting things moving. It’s time for designers, researchers, and their clients to join her.

Research typically begins with ‘ideal’ cases, where sufficient control, influence, and explanatory power are possible. If programs are ill-defined, poorly resourced, focused on complex or dynamic problems, have no clear timeline for delivery or expected outcomes, and lack the resources or leadership to document the work that is done, it is difficult to impossible to tell what kind of role design thinking plays amid myriad factors.

An increasing amount of design thinking — in education, international development, social innovation, and public policy, to name a few domains of practice — is applied in just this kind of environment. This is the messy area of life where research aimed at finding linear cause-and-effect relationships and ‘proof’ falters, yet it’s also where the need for evidence is greatest. Researchers tend to avoid these contexts because the results are rarely clear, the study designs require much energy, money, talent, and sophistication, and the ability to publish findings in top-tier journals is compromised as a result.

Despite this, there is enormous potential for qualitative, quantitative, mixed-method, and even simulation research into design thinking that isn’t being conducted. This is partly because designers aren’t trained in these methods, but also because (I suspect) there is a reluctance among many to open design thinking up to scrutiny. Like anything on the hype cycle, design thinking is a victim of over-inflated claims about what it does, but that doesn’t necessarily mean it isn’t offering a lot.

Design schools need to start training students in research methods beyond (in my opinion) the weak, simplistic approaches to ethnographic methods, surveys, and interviews that are currently on offer. If design thinking is to be taken seriously, it requires serious methodological training. Designers don’t need to be the most skilled researchers on the team; that’s what behavioural scientists bring. Bringing in the kind of expertise required to do the necessary work is important if design thinking is to grow beyond its ‘bullshit’ phase.

Evaluate impact

[Image: ‘Design Will Save the World’, from Just Design by Christopher Simmons]

Lastly, if we are going to claim that design is going to change the world, we need to back that up with evaluation data. Chances are, design thinking is changing the world, but maybe not in the ways we think or hope, or in the quantity or quality we expect. Without evaluation, we simply don’t know.

Evaluation is about understanding how something operates in the world and what its impact is. Evaluators help articulate the value that something brings and can support innovators (design thinkers?) in making strategic decisions about what to do, when to do it, and how to allocate resources.

The only time evaluation came up in my professional design training was when I mentioned it in class. That’s it. Few design programs in any discipline offer exposure to the methods and approaches of evaluation, which is unfortunate. Until last year, professional evaluators weren’t much better, with most having had limited exposure to design and design thinking.

That changed with the development of the Design Loft initiative, now in its second year. The Design Loft was a pop-up conference designed and delivered by me (Cameron Norman) and co-developed with John Gargani, then President of the American Evaluation Association. The event provided a series of short-burst workshops on select design methods and tools as a means of orienting evaluators to design and to how they might apply it to their work.

This is part of a larger effort to bring design and evaluation closer together. Design and design thinking offer enormous potential for creating innovations, and evaluation brings the tools to assess what kind of impact those innovations have.

Getting bullish on design

I’ve witnessed firsthand how design (and the design thinking approach) has inspired people who didn’t think of themselves as creative, innovative, or change-makers to do things that brought joy to their work. Design thinking can be transformative for those who are exposed to new ways of seeing problems, conceptualizing solutions, and building something. I’d hate to see that passion disappear.

That will happen once design thinking starts losing out to the next fad. Remember the lean methodology? How about Agile? Maybe the design sprint? These are distinct approaches, but they share much in common with design thinking. Depending on whom you talk to, they might be the same thing. Blackbelts, unconferences, design jams, innovation labs, and beyond are all part of the hodgepodge of offerings competing for the attention of companies, governments, healthcare organizations, and non-profits seeking to innovate.

What matters most is adding value. Whether this comes through ‘design thinking’ or something else, what matters is that design — the creation of products, services, policies, and experiences that people value — is part of the innovation equation. It’s why I prefer the term ‘design thinking’ to others operating in the innovation development space: it acknowledges the practice of design in its name.

Designers can rightfully claim ‘design thinking’ as a concept that, broadly defined, is central to, but far from the whole of, their work. Working with the very groups that have taken the idea of design and applied it to business, education, and so many other sectors, it’s time for those with a stake in seeing better design, and better thinking about what we design, flourish to take design thinking beyond its bullshit phase and make it bullish about innovation.

For those interested in evaluation and design, check out the 2017 Design Loft micro-conference taking place on Friday, November 10th within the American Evaluation Association’s annual convention in Washington, DC. Look for additional events, training, and support for design thinking, evaluation, and strategy by following @CenseLtd on Twitter for updates about the Design Loft, and by visiting Cense online.

Image credits: Author. The ‘Design Will Save The World’ images were taken from the pages of Christopher Simmons’ book Just Design.

complexity, design thinking, evaluation, innovation, systems thinking

Developmental Evaluation and Complexity

[Image: Stitch of Complexity]

Developmental evaluation is an approach (much like design thinking) to program assessment and valuation in domains of high complexity, change, and innovation. These three terms are used often, but are too poorly understood in real terms for evaluators to make much use of them. This first post in a series looks at the term complexity and what it means in the context of developmental evaluation.

Science writer and professor Neil Johnson is quoted as saying: “even among scientists, there is no unique definition of complexity – and the scientific notion has traditionally been conveyed using particular examples…” His own definition of a science of complexity (PDF) is “the study of the phenomena which emerge from a collection of interacting objects.” The title of his book Two’s Company, Three’s Complexity hints at what complexity can mean to anyone who’s tried to make plans with more than one other person.

The Oxford English Dictionary defines complexity as:

complexity |kəmˈpleksitē|

noun (pl. complexities)

the state or quality of being intricate or complicated: an issue of great complexity.

• (usu. complexities) a factor involved in a complicated process or situation: the complexities of family life.

For social programs, complexity involves multiple overlapping inputs and outputs that interact with systems dynamically, at multiple time scales and organizational levels, in ways that are highly context-dependent. That’s a mouthful.

Developmental evaluation is intended to be an approach that takes complexity into account; however, that also means that evaluators and the program designers they work with need to understand some basics about complexity. To that end, here are some key concepts to start that journey.

Key complexity concepts

Complexity science is a big and complicated domain within systems thinking that brings together elements of system dynamics, organizational behaviour, network science, information theory, and computational modeling (among others).  Although complexity has many facets, there are some key concepts that are of particular relevance to program designers and evaluators, which will be introduced with discussion on what they mean for evaluation.

Non-linearity: The most central starting point for complexity is that it is about non-linearity. That means prediction and control are often not possible, perhaps even harmful, or at least not useful, as ideas for understanding programs operating in complex environments. Further complicating things, within the overall non-linear environment there exist linear components. This doesn’t mean that evaluators can’t use any traditional means of understanding programs; rather, they need to consider which parts of the program are amenable to linear means of intervention and understanding within the complex milieu. It also means surrendering the notion of ongoing improvement and embracing development as an idea. Michael Quinn Patton has written about this distinction very well in his terrific book on developmental evaluation. Development is about adapting to produce advantageous effects for the existing conditions; improvement is about tweaking the same model to produce the same effects across conditions that are assumed to be stable.
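To make this concrete, here is a minimal sketch in Python using the logistic map, a standard toy model from complexity science rather than anything drawn from evaluation practice. It shows why long-range prediction fails in non-linear systems: two nearly identical starting points produce entirely different trajectories.

```python
# A minimal sketch of non-linearity using the logistic map, a standard
# toy model from complexity science (not a model of any real program).
# Two trajectories that start almost identically end up diverging
# completely, which is why point-prediction fails in non-linear systems.

def logistic_map(x0, r=4.0, steps=30):
    """Iterate x -> r * x * (1 - x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.200000)  # baseline trajectory
b = logistic_map(0.200001)  # nearly identical starting point

for t, (x, y) in enumerate(zip(a, b)):
    print(f"step {t:2d}: {x:.6f} vs {y:.6f}  (gap {abs(x - y):.6f})")
# Early steps track closely; after roughly twenty iterations the gap is
# as large as the values themselves, so small measurement differences
# swamp any attempt at long-range prediction.
```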

Feedback: Complex systems are dynamic, and that dynamism is created in part by feedback. Feedback is essentially information that comes from the system’s history and present actions and that shapes its immediate and longer-term future actions. An action leads to an effect, which is sensed and made sense of, leading to possible adjustments that shape future actions. For evaluators, we need to know what feedback mechanisms are in place, how they might operate, and what (if any) sensemaking rubrics, methods, and processes are used with this feedback, to understand what role it has in shaping decisions and actions about a program. This is important because it helps track the non-linear connections between causes and effects, allowing the evaluator to understand what might emerge from particular activities.
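As an illustration of that sense-and-adjust cycle, here is a minimal sketch of a balancing feedback loop; the target, effort, and gain values are entirely hypothetical, chosen only to make the loop visible.

```python
# An illustrative balancing feedback loop (all numbers hypothetical,
# not drawn from any real program): an action produces an effect, the
# effect is sensed against a target, and the gap adjusts the next action.

target = 100.0   # desired level of some outcome
effort = 10.0    # current level of program activity
gain = 0.5       # how strongly the sensed gap adjusts future action

outcome = 0.0
for step in range(25):
    outcome = 0.8 * outcome + effort   # the action produces an effect
    gap = target - outcome             # the effect is sensed
    effort += gain * gap / 10          # sensemaking adjusts the action
    if step % 4 == 0:
        print(f"step {step:2d}: outcome={outcome:6.1f}  effort={effort:5.1f}")
# The outcome climbs toward the target, overshoots, and gradually
# settles: a damped balancing loop. Tracing this chain of action,
# effect, sensing, and adjustment is what mapping a program's feedback
# mechanisms amounts to.
```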

Emergence: What comes from feedback in a complex system are new patterns of behaviour and activity. Due to the ongoing changes in the intensity, quantity, and quality of information generated by the system’s variables, the feedback may look different each time an evaluator examines it. What comes from this differential feedback can be new patterns of behaviour that depend on the variability in the information; this is called emergence. Evaluation designs need to enable the evaluator to see emergent patterns form, which means setting up data systems with the appropriate sensitivity. This means knowing the programs and the environments they operate in, and doing advance ‘ground-work’ to prepare for the evaluation by consulting program stakeholders and the literature, and doing preliminary observational research. It requires evaluators to know — or at least have some idea of — what the differences are that make a difference. That means knowing first what patterns exist, detecting what changes in those patterns, and understanding whether those changes are meaningful.
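As a toy illustration of the idea, the sketch below uses a one-dimensional grid and a simple majority rule (assumptions chosen for brevity, with no program-specific meaning): a purely local rule produces ordered global patterns that no individual cell contains.

```python
import random

# A toy illustration of emergence: each cell updates to the majority of
# its three-cell neighbourhood. The rule is purely local, yet ordered
# blocks emerge across the whole row within a few generations.

random.seed(1)
cells = [random.choice("01") for _ in range(60)]

def step(row):
    n = len(row)
    out = []
    for i in range(n):
        trio = row[i - 1] + row[i] + row[(i + 1) % n]  # wraps at the edges
        out.append("1" if trio.count("1") >= 2 else "0")
    return out

for generation in range(8):
    print("".join(cells).replace("0", ".").replace("1", "#"))
    cells = step(cells)
# No single cell 'knows' the global pattern, yet stable stripes of
# agreement form: a pattern visible only to an observer tracking the
# whole system over time, which is what emergence asks of an
# evaluation design.
```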

Adaptation: With these new patterns and sensemaking processes in place, programs will consciously or unconsciously adapt to the changes created through the system. If a program operates in a context where complexity is part of the social, demographic, or economic environment, even a stable, consistently run program will require adaptation simply to stay in the same place, because the environment is moving. This means sufficiently detailed record-keeping is needed — whether through program documents, reflective practice notes, meeting minutes, observations, etc. — to monitor what current practice is, link it with the decisions made using the feedback, emergent conditions, and sensemaking from the previous stages, and then track what happens next.

Attractors: Not all of the things that emerge are useful, and not all feedback supports advancing a program’s goals. Attractors are patterns of activity that generate emergent behaviours and ‘attract’ resources — attention, time, funding — within a program. Developmental evaluators and their program clients seek to find attractors that are beneficial to the organization and amplify them to ensure sustained or possibly greater benefit. Negative (unhelpful) attractors do the opposite; knowing when those form enables program staff to dampen their effect by adjusting and shifting activities.
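For a purely mathematical picture of what an attractor is, the sketch below iterates a simple function from several different starting points; every trajectory settles into the same fixed pattern.

```python
import math

# A mathematical toy (no claim about programs): repeatedly applying
# x -> cos(x) pulls every starting value toward the same fixed point,
# its attractor, regardless of where the trajectory begins.

for x0 in (0.1, 1.0, 2.5, -3.0):
    x = x0
    for _ in range(60):
        x = math.cos(x)
    print(f"start {x0:+.1f} -> settles near {x:.6f}")
# Every run lands near 0.739085. Once dynamics enter an attractor's
# basin they converge on the same pattern, much as a beneficial program
# attractor keeps drawing in attention, time, and funding.
```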

Self-organization and Co-evolution: Tied to all of this are the concepts of self-organization and co-evolution. The previous concepts come together to create systems that self-organize around attractors. Complex systems do not allow us to control and predict behaviour, but we can direct actions, shape the system to some degree, and anticipate possible outcomes. Co-evolution is a bit of a misnomer in that it refers to the principle that organisms (and organizations) operating in complex environments are mutually affected by each other. This mutual influence might be different for each interaction, affecting each organization or organism differently as well, but it points to the notion that we do not exist in a vacuum. For evaluators, this means paying attention to the system(s) the organization is operating in. Whereas in normative, positivist science we aim to reduce ‘noise’ and control for variation, in complex systems we can’t do this. Network research, system mapping tools like causal loop diagrams and system dynamics models, gigamapping, or simple environmental scans can all contribute to the evaluation, enabling the developmental evaluator to know what forces might be influencing the program.
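As a final sketch in the spirit of the system dynamics tools mentioned above, here is a minimal coupled model; the coefficients are invented for illustration and make no claim about real organizations.

```python
# A minimal system-dynamics-style sketch of co-evolution: two
# organizations' capacities each appear in the other's rate of change,
# integrated with simple Euler steps. All coefficients are made up.

dt = 0.1
a, b = 10.0, 2.0   # capacity of organization A and organization B

for t in range(51):
    if t % 10 == 0:
        print(f"t={t * dt:4.1f}  A={a:6.2f}  B={b:6.2f}")
    da = 0.10 * a - 0.02 * a * b   # A grows on its own, dampened by B
    db = 0.03 * a * b - 0.10 * b   # B grows through interaction with A
    a, b = a + dt * da, b + dt * db
# Neither trajectory makes sense in isolation: each growth term contains
# the other organization's state. That mutual dependence is what
# co-evolution means, and why evaluators must attend to the surrounding
# system, not just the program.
```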

Ways of thinking about complexity

One of the most notable challenges for developmental evaluators, and for those seeking to employ developmental evaluation, is thinking systemically about complexity. It means accepting non-linearity as a key principle in viewing a program and its context. It also means that context must be accounted for in the evaluation design. Simplistic assertions about methodological approaches (“I’m a qualitative evaluator” / “I’m a quantitative evaluator”) will not work. Complex programs require attention to macro-level contexts and moment-by-moment activities simultaneously, and at the very least demand mixed-method approaches to their understanding.

Although much of the science of complexity is based on highly mathematical, quantitative science, its practice as a means of understanding programs is quantitative, qualitative, and synthetic. It requires attention to the context and nuances that qualitative methods can reveal, and the macro-level understanding that quantitative data can produce from many interactions.

It also means getting away from the language of program improvement toward one of development, and that might be the hardest part of the entire process. Development requires adapting the program, thinking and rethinking its resources and processes, and integrating feedback into an ongoing set of adjustments that continue through the program’s life cycle. This requires a different kind of attention, different methods, and a different commitment from both a program and its evaluators.

In the coming posts I’ll look at how this attention gets realized in designing and redesigning the program as we move into developmental design.

 

design thinking, education & learning

Hacking the Classroom: Beyond Design Thinking

A nice summation of what design thinking is and how it’s been applied elsewhere, with an eye toward education. This is shared from the User Generated Education blog.

User Generated Education

Design Thinking is trending in some educational circles. Edutopia recently ran a design thinking for educators workshop, and I attended two great workshops on Design Thinking at SXSWedu 2013:

Design Thinking is a great skill for students to acquire as part of their education. But it is one process, like the problem-solving model or the scientific method. As a step-by-step process, it becomes a type of box, and sometimes we need to go beyond that box: to step outside of it. This post provides an overview of design thinking, the problems with design thinking, and suggestions for going beyond it.

Design Thinking

Design thinking is an approach to learning that includes considering real-world problems, research, analysis, conceiving original ideas, lots of experimentation, and sometimes building things by hand (http://blogs.kqed.org/mindshift/2013/03/what-does-design-thinking-look-like-in-school). The following graphic…

View original post 1,182 more words

design thinking, education & learning, innovation

Design: A Stance for Competitive Advantage

 

Earlier this week I attended a presentation by Rotman School of Management Dean and design-thinking advocate Roger Martin. The talk, given as part of Torch Partnership’s Unfinished Business lecture series put on with S-Lab, was titled The Design of Business: Why Design Thinking is the Next Competitive Advantage.

The presentation provided some clear-headed thinking about design and managed to reduce the concept of design thinking to something very simple, without being simplistic. This was, not surprisingly, done by design. As Martin himself stated:

Our knowledge moves forward when we leave things out

In research we are often seduced by our data and the volume of potential information it can provide: if we have enough of it, twist it, mine it, or manipulate it the right way, we can find answers. Certainly there are areas where this kind of thinking is useful. Genomics appears to be one of them – at least as far as discovering potential relationships and systems of organizing goes; gene expression may never be fully understood through quantitative means alone. But complexity in human systems seems more fraught with information overload, and rarely, if ever, do volumes of information lead to better understanding. Indeed, as Martin suggests, sometimes we need to apply design thinking not to generate more information, but to reduce it.

Qualitative researchers know this all too well. So do great artists. The latter point is brought home all too much this week as Toronto hosts Hot Docs, the Canadian International Documentary Film Festival. I’ve seen about a dozen documentaries so far, and most of them were, in the opinion of me and my fellow theatregoers, too long (that is, they could have left things out).

But like art and qualitative inquiry (and the theories that underpin both), design thinking can be viewed not so much as something that you do, but rather as a way of positioning oneself relative to the topic of interest. As one audience member proposed:

Design thinking isn’t a theory of activity, or a method, but a stance

To my mind this may be the best description of design thinking I’ve heard. While there are certainly methods of using design, and strategies that firms such as IDEO, BMW DesignWorks, and Porsche Design employ, it is the particular stance that designers take that enables those methods to translate across settings, issues, and time horizons.

Interestingly, the discussion about design then shifted to the kind of training one needs to foster the ability to take such a stance, not just to use tools and theories. When polled about whether they had any training in thinking approaches, less than 5 per cent (my estimate) of the audience said that they had, and it was speculated that this was because those people had gone to private school or some other specialized training program as children (e.g., schools for the gifted) where such high-level cognitive skills are taught (which is also the foundation for the Rotman School of Management’s approach to teaching).

So here we have a skill, or stance, in perspective-taking that is viewed as a competitive advantage and a means of advancing more humane products and systems, yet is taught to a very small number of people. That should be turned on its head: we need to consider teaching thinking as a core feature of our educational programs.

Imagine: teaching people to think in order to do, instead of to do and not to think.