Tag: innovation

Tags: complexity, evaluation, social innovation

Developmental Evaluation’s Traps


Developmental evaluation holds promise for product and service designers looking to understand the process, outcomes, and strategies of innovation and link them to effects. Yet the great promise of DE is also the reason to be most wary of it, and to watch for the traps set for those who approach it unaware.

Developmental evaluation (DE), when used to support innovation, is about weaving design with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making process that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

This evidence might include the kind of information that helps an organization know when to scale up effort, change direction (“pivot”), or abandon a strategy altogether.

Powerful stuff.

Except, it can also be a trap.

It’s a Trap!

Star Wars fans will recognize the phrase “It’s a Trap!” as one of special — and much parodied — significance. Much like the Rebel fleet’s jeopardized quest to destroy the Death Star in Return of the Jedi, embarking on a DE is no easy or simple task.

DE was developed by Michael Quinn Patton and others working in the social innovation sector in response to the needs of programs operating in areas of high volatility, uncertainty, complexity, and ambiguity, to help them function better in that environment through evaluation. This meant providing useful data that recognized the context and supported strategic decision-making with rigorous evaluation, rather than using tools ill-suited to complexity and simply doing the 'wrong thing righter'.

The following are some of the 'traps' I've seen organizations fall into when approaching DE. A parallel set of posts exploring the practicalities of these traps is going up on the Cense site, along with tips and tools for avoiding and navigating them.

A trap is usually camouflaged and employs some type of lure to draw people in. It is, by nature, deceptive, intended to ensnare those who stumble into it. By knowing what the traps are and what to look for, you might just avoid falling into them.

A different approach, same resourcing

A major trap in going into a DE is thinking that it is just another type of evaluation and thus requires the same resources as a standard evaluation. Wrong.

DE most often requires more resources to design and manage than a standard program evaluation, for many reasons. One of the most important is that DE is about evaluation + strategy + design (the emphasis is on the '+'s). A DE budget needs to account for the fact that three activities normally treated separately are now coming together. The costs may not necessarily be higher (they often are), but the work required will span multiple budget lines.

This also means that operationally one cannot simply have an evaluator, a strategist, and a program designer work separately. There must be some collaboration and time spent interacting for DE to be useful. That requires coordination costs.

Another big issue is that DE data can be ‘fuzzy’ or ambiguous — even if collected with a strong design and method — because the innovation activity usually has to be contextualized. Further complicating things is that the DE datastream is bidirectional. DE data comes from the program products and process as well as the strategic decision-making and design choices. This mutually influencing process generates more data, but also requires sensemaking to sort through and understand what the data means in the context of its use.

The biggest resource that gets missed? Time. Organizations rarely give enough time to the conversations needed to make sense of what the data means. Setting aside regular time, at intervals appropriate to the problem context, is a must, and too often organizations don't budget for it.

The second? Focus. While a DE approach can capture an enormous wealth of data about the process, outcomes, strategic choices, and design innovations, there is a need to temper the amount collected. More is not always better. More can be a sign of a lack of focus and can lead organizations to collect data for data's sake, not for a strategic purpose. If you don't have a strategic intent, more data isn't going to help.

The pivot problem

The term pivot comes from the Lean Startup approach and is found in Agile and other product development systems that rely on short-burst, iterative cycles with accompanying feedback. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. Sounds good, right?

It is, except when the results aren't well-grounded in data. DE has given organizations cover for making arbitrary decisions in the name of pivoting when they really haven't executed well or given things enough time to determine whether a change of direction is warranted. I once heard an educator explain how good his team was at pivoting their strategy for training their clients and students. They were taking a developmental approach to the course (because it was on complexity and social innovation). Yet I knew that the team — a group of highly skilled educators — hadn't spent nearly enough time coordinating and planning the course.

There is a difference between a presenter who slips something into a presentation at the last minute to capitalize on what has emerged from the situation, adding to its quality, and one who hasn't put time and thought into the work and is rushing at the last minute. The first is a pivot in service of excellence; the second is poor execution. The trap is confusing the two.

Fearing success

“If you can’t get over your fear of the stuff that’s working, then I think you need to give up and do something else” – Seth Godin

A truly successful innovation changes things — mindsets, workflows, systems, and outcomes. Innovation affects the things it touches in ways that might not be foreseen. It also means recognizing that things will have to change in order to accommodate the success of whatever innovation you develop. But change can be hard to adjust to even when it is what you wanted.

It's a strange truth that many non-profits are designed to put themselves out of business. If there were no more political injustices or human rights violations around the world, there would be no Amnesty International. The World Wildlife Fund or Greenpeace wouldn't exist if the natural world were deemed safe and protected. And there are no prominent NGOs devoted to eradicating polio anymore because we've pretty much done it… or have we?

Self-sabotage exists for many reasons, including discomfort with change (staying the same is easier than changing), preservation of status, and a variety of interpersonal, relational reasons, as psychologist Ellen Hendriksen explains.

Seth Godin suggests you need to find something else if you’re afraid of success and that might work. I’d prefer that organizations do the kind of innovation therapy with themselves, engage in organizational mindfulness, and do the emotional, strategic, and reflective work to ensure they are prepared for success — as well as failure, which is a big part of the innovation journey.

DE is a strong tool for capturing success (in whatever form that takes) within the complexity of a situation; the trap is focusing on too many parts, or on ones that aren't providing useful information. It's not always possible to know this at the start, but there are things that can be done to hone the focus over time. As the saying goes: when everything is in focus, nothing is in focus.

Keeping the parking brake on

And you may win this war that’s coming
But would you tolerate the peace? – “This War” by Sting

You can't drive far or well with your parking brake on. And if innovation is meant to change systems, you can't keep the same thinking and structures in place and still expect to move forward. Developmental evaluation is not just for understanding your product or service; it's also meant to inform the ways that entire process influences your organization. They are symbiotic: one affects the other.

Just as we might fear success, we may also fail to prepare for it (or tolerate it) when it comes. Success with one goal means having to set new goals; it moves the goal posts. It also means reframing what success means going forward. Sports teams face this problem in reframing their mission after winning a championship. The same is true for organizations.

This is why building a culture of innovation, with DE embedded within it, is so important. Innovation can't be considered a 'one-off'; it needs to be part of the fabric of the organization. If you set yourself up for change, real change, as a developmental organization, you're more likely to be ready for the peace after the war is over, as the lyric above asks.

Sealing the trap door

Learning — which is at the heart of DE — fails in bad systems. Avoiding the traps discussed above requires building a developmental mindset within an organization alongside doing a DE. Without the mindset, it's unlikely anyone will avoid falling into those traps. Change your mind, and you can change the world.

It's a reminder of the need to put in the work to make change real, and that DE is not just plug-and-play. To quote Martin Luther King Jr.:

“Change does not roll in on the wheels of inevitability, but comes through continuous struggle. And so we must straighten our backs and work for our freedom. A man can’t ride you unless your back is bent.”

 

For more on how Developmental Evaluation can help you to innovate, contact Cense Ltd and let them show you what’s possible.  

Image credit: Author

Tags: design thinking, evaluation, innovation

Beyond Bullshit for Design Thinking


Design thinking is in its 'bullshit' phase, a time characterized by wild hype and popularity with little evidence of what it does, how it does it, or whether it can deliver what it promises on a consistent basis. If design thinking is to be more than a fad, it needs to get serious about answering some important questions, going from bullshit to bullish in tackling important innovation problems. The time is now.

In a previous article, I described design thinking as being in its BS phase and argued that it was time for it to move on. Here, I articulate things that can help us get there.

The title of that original piece was inspired by a recent talk by Pentagram partner Natasha Jen, in which she called out design thinking as "bullshit." Design thinking offers much to those who haven't been given, or taken, creative license in their work before. It's offered organizations that never saw themselves as 'innovative' a means to generate products and services that extend beyond the bounds of what they thought was possible. While design thinking has inspired people worldwide (as evidenced by the thousands of resources, websites, meetups, courses, and discussions devoted to the topic), the extent of its impact is largely unknown, overstated, and most certainly oversold as it has become a marketable commodity.

The comments and reaction to my related post on LinkedIn from designers around the world suggest that many agree with me.

So now what? Design thinking, like many fads and technologies that fit the hype cycle, is beset with a problem of inflated expectations driven by optimism and the market forces that bring a lot of poorly-conceived, untested products supported by ill-prepared and sometimes unscrupulous actors into the marketplace. To invoke Natasha Jen: there’s a lot of bullshit out there.

But there is also promising stuff. How do we nurture the positive benefits of this overall approach to problem finding, framing and solving and fix the deficiencies, misconceptions, and mistakes to make it better?

Let’s look at a few things that have the potential to transform design thinking from an over-hyped trend to something that brings demonstrable value to enterprises.

Show the work


The journey from science to design is a lesson in culture shock. Science typically begins its journey toward problem-solving by looking at what has been done before, whereas a designer typically starts with what they know about materials and craft. An industrial designer may never have made a coffee mug before, but they know how to build things that meet clients' desires within a set of constraints, and so feel comfortable undertaking the job. This wouldn't happen in science.

Design typically uses a simple criterion above all others to judge the outcomes of its work: is the client satisfied? So long as the time, budget, and other requirements are met, the key is ensuring that the client likes the product. Because this criterion is so heavily weighted on the outcome, designers often have little need to capture or share how they arrived at it, just that they did. Designers may also be reluctant to share their process because it is their competitive advantage, so there is an industry-specific culture that keeps the process closed to scrutiny.

Science requires that researchers open up their methods, tools, observations, and analytical strategy for others to view. The entire notion of peer review — which has its own set of flaws — is predicated on other qualified professionals being able to see how a solution was derived and comment on it. Scientific peer review is typically geared toward encouraging replication, but it also allows others to assess the reasonableness of the claims. This is the critical part of peer review that requires scientists to adhere to a certain set of standards and show their work.

As design moves into a more social realm, designing systems, services, and policies for populations for whom there is no single 'client' and many diverse users, the need to show the work becomes imperative. Showing the work also allows others to build on the method. For example, design thinking speaks of 'prototyping', yet without a clear sense of what is prototyped, how it is prototyped, how the prototype's value is assessed, and what options were considered (or discarded) along the way, it is impossible to tell whether this was really the best idea of many or simply the one deemed most feasible to try.

This might not matter for a coffee cup, but it matters a lot if you are designing a social housing plan, a transportation system, or a health service. Designers can borrow from scientists and become better at documenting what they do along the way, what ideas are generated (and dismissed), how decisions are made, and what creative avenues are explored along the route to a particular design choice. This not only improves accountability but increases the likelihood of better input and 'crit' from peers. The absence of 'crit' in design thinking is among the biggest 'bullshit' issues that Natasha Jen spoke of.

Articulate the skillset and toolset


What does it take to do ‘design thinking’? The caricature is that of the Post-it Notes, Lego, and whiteboards. These are valuable tools, but so are markers, paper, computer modeling software, communication tools like Slack or Trello, cameras, stickers…just about anything that allows data, ideas, and insights to be captured, organized, visualized, and transformed.

Using these tools also takes skill (despite how simple they are).

Facilitation is a key design skill when working with people and human-focused programs and services. So is conflict resolution. The ability to negotiate, discuss, sense-make, and reflect within the context of a group, a deadline, and other constraints is critical for bringing a design to life. These skills are not just for designers, but they have to reside within a design team.

There are other skills related to shaping aesthetics, manufacturing, service design, communication, and visual representation that can all contribute to a great design team, and these need to be articulated as part of a design thinking process. Many 'design thinkers' will point to the ABC Nightline segment that aired in 1999, titled "The Deep Dive", as their first exposure to 'design thinking'. It is also what thrust the design firm IDEO into the spotlight; IDEO, more than any other single organization, is credited with popularizing design thinking through its work.

What gets forgotten when people look at this program, in which designers created a shopping cart in just a few days, is that IDEO brought together a highly skilled interdisciplinary team that included engineers, business analysts, and a psychologist. Much of the design thinking advocacy out there talks about 'diversity', but diversity matters only when you have not just a range of perspectives but also the technical and scholarly expertise to make use of them. How often are design teams taking on human service programs aimed at changing behaviour without any behavioural scientists involved? How often are products created without any care for aesthetics because there wasn't a graphic designer or artist on the team?

Does this matter if you’re using design thinking to shape the company holiday party? Probably not. Does it if you are shaping how to deliver healthcare to an underserved community? Yes.

Design thinking can require general and specific skillsets and toolsets and these are not generic.

Develop theory


A theory is not just the province of eggheaded nerds and something you had to endure in your college courses on social science. It matters when it's done well. Why? As Kurt Lewin, one of the most influential applied social psychologists of the 20th century, said: "There is nothing so practical as a good theory."

A theory allows you to explain why something happens, how causal connections may form, and what the implications of specific actions are in the world. They are ideas, often grounded in evidence and other theories, about how things work. Good theories can guide what we do and help us focus what we need to pay attention to. They can be wrong or incomplete, but when done well a theory provides us the means to explain what happens and can happen. Without it, we are left trying to explain the outcomes of actions and have little recourse for repeating, correcting, or redesigning what we do because we have no idea why something happened. Rarely — in human systems — is evidence for cause-and-effect so clear cut without some theorizing.

Design thinking is not entirely without theory. Some scholars have pulled together evidence and theory to articulate ways to generate ideas and decision rules for focusing attention, and there are some well-documented examples for guiding prototype development. However, design thinking itself — like much of design — is not strong on theory. There isn't a strong theoretical basis for ascertaining why something produces an effect through a particular social process, tool, or approach. As such, it's hard to replicate such things, or to determine where something succeeded and where improvements need to be made.

It’s also hard to explain why design thinking should be any better than anything else that aims to enkindle innovation. By developing theory, designers and design thinkers will be better equipped to advance its practice and guide the focus of evaluation. Further, it will help explain what design thinking does, can do, and why it might be suited (or ill-suited) to a particular problem set.

It also helps guide the development of research and evaluation scholarship that will build the evidence for design thinking.

Create and use evidence


Jeanne Liedtka and her colleagues at the Darden School of Business have been among the few to conduct systematic research into the use of design thinking and its impact. The early research suggests it offers benefit to companies and non-profits seeking to innovate. This is a start, but far more research by more groups is needed if we are to build a real corpus of knowledge to shape practice more fully. Liedtka's work is setting the pace for where we can go, and design thinkers owe her much thanks for getting things moving. It's time for designers, researchers, and their clients to join her.

Research typically begins with 'ideal' cases, where sufficient control, influence, and explanatory power are possible. If programs are ill-defined, poorly resourced, focused on complex or dynamic problems, have no clear timeline for delivery or expected outcomes, and lack the resources or leadership to document the work being done, it is difficult, if not impossible, to tell what role design thinking plays amid myriad factors.

An increasing amount of design thinking — in education, international development, social innovation, and public policy, to name a few domains of practice — is applied in just this kind of context. This is the messy area of life where research aimed at linear cause-and-effect relationships and 'proof' falters, yet it's also where the need for evidence is great. Researchers tend to avoid these contexts because the results are rarely clear, the study designs require much energy, money, talent, and sophistication, and the ability to publish findings in top-tier journals is compromised as a result.

Despite this, there is enormous potential for qualitative, quantitative, mixed-method, and even simulation research into design thinking that isn't being conducted. This is partly because designers aren't trained in these methods, but also because (I suspect) there is a reluctance among many to open design thinking up to scrutiny. Like anything on the hype cycle, design thinking is a victim of over-inflated claims about what it does, but that doesn't necessarily mean it isn't offering a lot.

Design schools need to start training students in research methods beyond (in my opinion) the weak, simplistic approaches to ethnographic methods, surveys, and interviews currently on offer. If design thinking is to be taken seriously, it requires serious methodological training. That said, designers don't need to be the most skilled researchers on the team: that's what behavioural scientists bring. Bringing in the expertise required to do the work is important if design thinking is to grow beyond its 'bullshit' phase.

Evaluate impact


From Just Design by Christopher Simmons

Lastly, if we are going to claim that design is going to change the world, we need to back that up with evaluation data. Chances are, design thinking is changing the world, but maybe not in the ways we think or hope, or in the quantity or quality we expect. Without evaluation, we simply don't know.

Evaluation is about understanding how something operates in the world and what its impact is. Evaluators help articulate the value that something brings and can support innovators (design thinkers?) in making strategic decisions about what to do, when to do it, and how to allocate resources.

The only time evaluation came up in my professional design training was when I mentioned it in class. That's it. Few design programs of any discipline offer exposure to the methods and approaches of evaluation, which is unfortunate. Until last year, professional evaluators weren't much better, with most having limited exposure to design and design thinking.

That changed with the development of the Design Loft initiative that is now in its second year. The Design Loft was a pop-up conference designed and delivered by me (Cameron Norman) and co-developed with John Gargani, then President of the American Evaluation Association. The event provided a series of short-burst workshops on select design methods and tools as a means of orienting evaluators to design and how they might apply it to their work.

This is part of a larger effort to bring design and evaluation closer together. Design and design thinking offers an enormous amount of potential for innovation creation and evaluation brings the tools to assess what kind of impact those innovations have.

Getting bullish on design

I've witnessed firsthand how design (and the design thinking approach) has inspired people who didn't think of themselves as creative, innovative, or change-makers to do things that brought joy to their work. Design thinking can be transformative for those who are exposed to new ways of seeing problems, conceptualizing solutions, and building something. I'd hate to see that passion disappear.

That will happen once design thinking starts losing out to the next fad. Remember the lean methodology? How about Agile? Maybe the design sprint? These are distinct approaches, but they share much in common with design thinking; depending on whom you talk to, they might be the same thing. Blackbelts, unconferences, design jams, innovation labs, and beyond are all part of the hodgepodge of offerings competing for the attention of companies, governments, healthcare, and non-profits seeking to innovate.

What matters most is adding value. Whether this is through ‘design thinking’ or something else, what matters is that design — the creation of products, services, policies, and experiences that people value — is part of the innovation equation. It’s why I like the term ‘design thinking’ relative to others operating in the innovation development space simply because it acknowledges the practice of design in its name.

Designers can rightfully claim 'design thinking' as a concept that is, broadly defined, central to their work, though far from the whole of it. Working with the very groups that have taken the idea of design and applied it to business, education, and so many other sectors, it's time for those with a stake in better design, and better thinking about what we design, to take design thinking beyond its bullshit phase and make it bullish about innovation.

For those interested in evaluation and design, check out the 2017 Design Loft micro-conference taking place on Friday, November 10th within the American Evaluation Association's annual convention in Washington, DC. Look for additional events, training, and support for design thinking, evaluation, and strategy by following @CenseLtd on Twitter for updates about the Design Loft, and by visiting Cense online.

Image credits: Author. The ‘Design Will Save The World’ images were taken from the pages of Christopher Simmons’ book Just Design.

Tags: design thinking, innovation

Design thinking is BS (and other harsh truths)


Design thinking continues to gain popularity as a means for creative problem-solving and innovation across business and social sectors. Time to take stock and consider what ‘design thinking’ is and whether it’s a real solution option for addressing complex problems, over-hyped BS, or both. 

Design thinking has pushed its way from the outside to the front and centre of discussions on innovation development and creative problem-solving. Books, seminars, certificate programs, and even films are being produced to showcase design thinking and inspire those who seek to become more creative in their approach to problem framing, finding and solving.

A look through the Censemaking archives will turn up considerable work on design thinking and its application to a variety of issues. While I've always been enthusiastic about design thinking's promise, I've also been wary of the hype, preferring the term design over design thinking when possible.

What’s been most attractive about design thinking has been that it’s introduced the creative benefits of design to non-designers. Design thinking has made ‘making things’ more tangible to people who may have distanced themselves from making or stopped seeing themselves as creative. Design thinking has also introduced a new language that can help people think more concretely about the process of innovation.

Design thinking: success or BS?

We now see designers elevated to the C-suite — including the role of university president in the case of leading designer John Maeda — and as thought leaders in technology, education, non-profit work and business in large part because of design thinking. So it might have surprised many to see Natasha Jen, a partner at the prestigious design firm Pentagram, do the unthinkable in a recent public talk: trash design thinking.

Speaking at the 99u Conference in New York this past summer, Jen called out what she sees as the 'bullshit' of design thinking and how it betrays much of what makes good design.

One of Jen's criticisms of design thinking is its absence of what designers call 'crit': the process of having peers — other skilled designers — critique design work early and often. While design thinking models typically include some form of 'evaluation', this is hardly a rigorous process. There are few guidelines for how to do it or how to deliver feedback, and little recognition of who is best able to deliver crit to peers (there are even guides for those who don't know about the critique process in design). It's not even clear who the best 'peers' are for such a thing.

The design thinking movement has emphasized that 'everyone is a designer.' This has the positive consequence of encouraging creative engagement in innovation from everyone, increasing the pool of diverse perspectives that can be brought to bear on a topic. What it ignores is that the craft of design involves real skill: just as everyone can dance or sing, not everyone can do it well. What has been lost in much of the hype around design thinking is respect for craft and its implications, particularly in terms of evaluation.

Evaluating design thinking’s impact

When I was doing my professional design training I once got into an argument* with a professor who said: “We know design thinking works“. I challenged back: “Do we? How?” To which he responded: “Of course we do, it just does — look around.” (pointing to the room of my fellow students presumably using ‘design thinking’ in our studio course).

End of discussion.

Needless to say, the argument was — in his eyes — about him being right and me being a fool for not seeing the obvious. For me, it was about the fact that, while I believed the approach loosely called 'design thinking' offered something better than the traditional methods of addressing many complex challenges, I couldn't say for sure that it 'works' or does 'better' than the alternatives. It felt like he was saying hockey is better than knitting.

One of the reasons we don't know is that solid evaluation isn't typically done in design. The criterion designers typically use is client satisfaction with the product, given the constraints (e.g., time, budget, style, user expectations). If a client says "I love it!", that's about all that matters.

Another problem is that design thinking is often used to tackle complex challenges for which there may be no adequate examples to compare against. We cannot use a randomized controlled trial, the 'gold-standard' research approach, to test whether design thinking is better than 'non-design thinking.' The result is that we don't really know what design thinking's impact is on the products, services, and processes it is used to create, or at least not enough to compare it to other ways of working.

Showing the work

In grade school math class it wasn’t sufficient to arrive at an answer and simply declare it without showing your work. The broad field of design (and the practice of design thinking) emphasizes developing and testing prototypes, but ultimately it is the final product that is assessed. What is done on the way to the final product is rarely given much, if any, attention. Little evaluation is done on the process used to create a design, whether with design thinking or another approach.

The result is that we have little idea of the fidelity of implementation of a ‘model’ or approach when someone says they used design thinking. There is hardly any understanding of the dosage (amount), the techniques, the situations, or the human factors (e.g., skill level, cooperation, openness to ideas, personality) that contribute to the designed product, and design reports make little mention of such things.

Some might argue that such rigorous attention to these aspects of design takes away from the ‘art’ of design, or that design is not amenable to such scrutiny. While the creative process is not a science, that doesn’t mean it can’t be observed and documented. It may be that comparative studies are impractical, but how do we know if we don’t try? What processes like the ‘crit’ do is open creators — teams or individuals — to feedback, alternative perspectives and new ideas that could prevent poor or weak ideas from moving forward.

Bringing evaluation into the design process is a way to do this.

Going past the hype cycle

Gartner has popularized the concept of the hype cycle, which illustrates how ‘hot’ ideas, technologies and other innovations get over-sold, under-appreciated and eventually adopted in a more realistic manner relative to their impact over time.

 

1200px-Gartner_Hype_Cycle.svg.png

Gartner Hype Cycle (source: Wikimedia Commons)

 

Design thinking is most likely somewhere past the Peak of Inflated Expectations, but still near the top of the curve. For designers like Natasha Jen, design thinking is well into the Trough of Disillusionment (and may never escape). Design thinking is currently stuck in its ‘bullshit’ phase, and until it embraces more openness about the processes used under its banner, attention to the skill required to design well, and evaluation of the outcomes it generates, outspoken designers like Jen will continue to be dissatisfied.

We need people like Jen involved in design thinking. The world could benefit from approaches to critical design that produce better, more humane and impactful products and services that benefit more people with less impact on the world. We could benefit greatly from having more people inspired to create and open to sharing their experience, expertise and diverse perspectives on problems. Design thinking has this promise, if it is open to applying some of its methods to itself.

*‘Argument’ implies that the other person was open to hearing my perspective, engaging in dialogue, and providing counter-points to mine. This was not the case.

If you’re interested in learning more about what an evaluation-supported, critical, and impactful approach to design and design thinking could look like for your organization or problem, contact Cense and see how they can help you out. 

Image Credit: Author

psychology, systems thinking

Smart goals or better systems?

IMG_1091.jpg

If you’re working toward some sort of collective goal — as an organization, a network or even an individual — you’ve most likely been asked to use SMART goal setting to frame your task. While SMART is a popular tool for management consultants and scholars, does it make sense when you’re looking to make inroads on complex, unique or highly volatile problems? Or is the answer in the systems we create to advance goals in the first place?

Goal setting is nearly everywhere.

Globally, we had the UN-backed Millennium Development Goals and now have the Sustainable Development Goals, and a look at the missions and visions of most corporations, non-profits, government departments and universities will reveal language framed in terms of goals, either explicitly or implicitly.

A goal, for these purposes, is:

goal |ɡōl| noun: the object of a person’s ambition or effort; an aim or desired result; the destination of a journey

Goal setting is the process of determining what it is that you seek to achieve, usually combined with mapping some form of strategy to achieve it. Goals can be challenging enough when a single person is determining what they want, need or feel compelled to do — even more so when aggregated to the level of an organization or a network.

How do you keep people focused on the same thing?

A look at the literature finds a visible presence of one approach: setting SMART goals. SMART is an acronym that stands for Specific, Measurable, Attainable, Realistic, and Time-bound (or Timely, in some versions). The origin of SMART has been traced back to a 1981 article in the AMA’s journal Management Review by George Doran (PDF). In that piece, Doran comments on how unpleasant it is to set objectives and how this is one of the reasons organizations resist doing it. Yet in an age where accountability is held in high regard, the goal is not only strategic but operationally critical to attracting and maintaining resources.

SMART goals are part of a larger process called performance management, a means of enhancing collective focus and the alignment of individuals within an organization. Dartmouth College has a clearly articulated explanation of how goals are framed within the context of performance management:

“Performance goals enable employees to plan and organize their work in accordance with achieving predetermined results or outcomes. By setting and completing effective performance goals, employees are better able to:

  • Develop job knowledge and skills that help them thrive in their work, take on additional responsibilities, or pursue their career aspirations;
  • Support or advance the organization’s vision, mission, values, principles, strategies, and goals;
  • Collaborate with their colleagues with greater transparency and mutual understanding;
  • Plan and implement successful projects and initiatives; and
  • Remain resilient when roadblocks arise and learn from these setbacks.”

Heading somewhere, destination unknown

Evaluation professionals and managers alike love SMART goals and performance measurement. What’s not to like about something that specifically outlines what is to be done, in detail, by a required date, and in a manner that is achievable? It’s like checking off all the boxes in your management performance chart at once! Alas, the problems with this approach are many.

Specific is pretty safe, so we won’t touch that. It’s good to know what you’re trying to achieve.

But what about measurable? This is what evaluators love, but what does it mean in practice? Metrics and measures reflect a particular evaluative approach and require the right kinds of questions, data collection tools and data to work effectively. If the problem being addressed doesn’t lend itself to quantification using measures or data that can cleanly define part of an issue, then measurement becomes inconclusive at best, useless at worst.

What if you don’t know what is achievable? This might be because you’ve never tried something before or maybe the problem set has never existed before now.

How do you know what realistic is? This is tricky because, as George Bernard Shaw wrote in Man and Superman:

“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

This issue of reasonableness is an important one because innovation, adaptation and discovery are not about reason, but about aspiration and hope. Had we relied on reasonableness alone, we might never have achieved much of what we’ve set out to accomplish in terms of being innovative, adaptive or creative.

Reasonableness is also most dangerous for those seeking to make real change and do true innovation. Innovation is not often reasonable, nor are its asks ‘reasonable.’ Most social transformations did not come about because of reasonableness. Women’s right to vote, or the right of African Americans to be recognized and treated as human beings in the United States, are but two examples from the 20th century that owe much to a lack of ‘reasonableness.’

Lastly, what if you have no idea what the timeline for success is? If you’ve not tackled this before, are working on a dynamic problem, or have uncertain or unstable resources it might be impossible to say how long something will take to solve.

Rethinking goals and their evaluation

One of the better discussions of goals, goal setting and the hard truths associated with pursuing a goal comes from James Clear, who draws on research on strategy and decision making to build his list of recommendations. Clear’s summary pulls together a variety of findings on how individuals construct goals and seek to achieve them, and the results suggest that the problem lies less with the strategy used to reach a goal than with the goals themselves.

What is most relevant for organizations is the concept of ‘rudders and oars’, which is about creating systems and processes for action rather than fixating on the goal itself. In complex systems, our ability to exercise control is highly variable and constrained, and goals provide an illusory sense that we have control. So either we fail to achieve our goals, or we set goals that we can achieve but that may not be the most important things to aim for. We essentially rig our system to achieve something that might be achievable, but ultimately unimportant.

Drawing on this work, we are left to re-think goals and commit to the following:

  1. Commit to a process, not a goal
  2. Release the need for immediate results
  3. Build feedback loops
  4. Build better systems, not better goals

To realize this requires an adaptive approach to strategy and evaluation where the two go hand-in-hand and are used systemically. It means pushing aside and rejecting more traditional performance measurement models for individuals and organizations and developing more fine-tuned, custom evaluative approaches that link data to decisions and decisions to actions in an ongoing manner.

It means thinking in systems, about systems and designing for ways to do both on an ongoing, not episodic manner.

The irony is, by minimizing or rejecting the use of goals, you might better achieve the goal of a more impactful, innovative, responsive and creative organization with a real mission for positive change.

Image credit: Author

innovation, psychology, systems thinking

Decoupling creators from their creations

IMG_0859

The world is transformed by creators — the artists, the innovators, the leaders — and their creations both propel change and stand in the way of it. Change can be hard, and it’s made all the more so when we fail to decouple the creator from the created, inviting resistance rather than enticing better creations.

If you want to find the hotspot for action or inaction in a human system, look at what is made in that system. Human beings defend, promote and attend to what they made more than anything. Just watch.

Children will rush to show you what they made: a sandcastle, a picture, a sculpture, a… whatever it is they are handing you. Adults don’t grow out of that: we are just big kids. Adults are simply more subtle in how we promote and share what we do, but we still invest an enormous amount of psychological energy in our creations. Sometimes it’s our kids, our spouses (a relationship we created), our ideas, our way of life, our programs, policies or businesses. Even a consumer purchase is, in some ways, a reflection of us, in that we are making an identity or a statement with it.

Social media can feel like one big echo chamber sometimes, but it’s that way because we are often so busy sharing our opinions that we’re not listening — focusing on what we made, not what others made. Social media can be so harsh because when our ideas are attacked — our 140-character creations, sometimes — we feel as if we ourselves are being attacked. This is one of the reasons we are having such traumatized, distorted discourse in the public domain.

Creations vs creators

The problem with any type of change is that we often end up challenging both the creator and the creation at the same time. It’s saying that the creation needs to change and that can hurt the creator, even if done unintentionally.

The interest in failure in recent years is something I’ve been critical of, but a positive feature of it is that people are now talking more openly about what it means not to succeed, and that is healthy. There are lots of flaws in the failure talk out there, most notably that it fails (no pun — even a bad one — intended, but let’s go with it) to decouple the creator from the creation. In many respects, creators and their creations are intimately tied together, as one is the raw material and vehicle for the other.

But when we constantly need to apologize for, make amends because of, or defend our creations, we are using a lot of energy. In organizational environments an inordinate amount of time goes into strategizing, evaluating and re-visiting innovation initiatives, but the simple truth is that — as we will see below — it doesn’t make a lick of difference, because we are confusing the creators with the creations and with the systems in which they are creating.

Sometimes we need to view the creator and the creation separately, and that is all a matter of trust — and reality.

A matter of trust, a matter of reality

If you are part of a team that has been tasked with addressing an issue and given a mandate to find, and possibly create, a solution, why should it matter what you produce? That seems like an odd question.

At first it seems obvious: for accountability, right?

But consider this a little further.

If a team has been given a charge, it is presumably the best group available given the time and resources at hand. Maybe there are others better suited or more talented for the job; perhaps there is a world expert out there who could help; but because of time, money, logistics or some combination of reasons, that isn’t possible. We are now dealing with what is, not what we wish to be. This is sometimes called the real world.

If that team does not have the skills or talent to do it, why is it tasked with the problem? If those talents and skills are unknown, and there is no time, energy, commitment — or means — to assess them in practice, then you are in a truly innovative space: let’s see what happens.

In this case, accountability is simply a means of exploring how something is made, what is learned along the way, and what kinds of products are produced, knowing that there is no real way to determine its comparative merit, significance or worth — the hallmark tenets of evaluation. It’s experimental, and there is no way to fail, except to fail to attend to it and learn from it.

This is a matter of trust. It’s about trusting that you’re going to do something useful with what you have, and that’s all. The right/wrong debate makes no sense because, if you’re dealing with reality as discussed above, there are no other options aside from not doing anything. So why does failure have to come into it?

This is about trusting creators to create. It has little to do with what they create because, if you’ve selected a group to do something that only they are able to do, for the reasons mentioned above, judging them by the creation misses the point.

Failing at innovation

The real failure might be failing to grasp reality, failing to support creative engagement in the workplace, and failing to truly innovate. These are what happen when we task individuals, groups, units or other entities within our organizations with problems that have no known way forward, and provide them with no preparation, tools, or intelligence about the problem to support doing what they are tasked with. The not-knowing part of the problem is not uncommon, particularly in innovative spaces. It’s nothing to sneer at; rather, it is a true learning opportunity. But we need to call it what it is, not what we want or pretend it to be.

My colleague Hallie Preskill from FSG is big on learning, and is not averse to rolling her eyes when people speak of it, because the word is often used flippantly and without thought. True learning means paying attention and giving time and focus to whatever material — experience, data, reflections, goals and contexts — is available. Learning has a cost, and it has many myths attached to it. Its difficulty is why many simply don’t do it. In truth, many are not as serious about really learning as they are about talking about learning. This is what Hallie sees a lot.

The material for learning is what these innovators, these creators, are producing. So if we value creation and innovation, we need to pay attention to this material and entice creators to continue generating more quality ‘content’ for us to learn from, and worry less about what these creators produce when we task them with innovation missions that have no chance to ‘succeed’, just as they have no chance to ‘fail’.

A poor question leads us to poor answers.

Consider what we ask of our innovators and of innovation, and you’ll see that if we want more and better creators, and more and better creations, we need to see both in a new light and create a culture that sees them for what they are, not what we want them to be.

Image credit: Author

complexity, social systems

Time. Care. Attention.

midocowall_snapseed

A read through a typical management or personal-improvement feed will reveal near-infinite recommendations for steps you can take to improve your organization or yourself. Three words tend to be absent from this content stream, and they are what take seemingly simple recommendations and carry them through complexity: time, care, and attention.

Embedded within the torrent of content on productivity, innovation and self-development is a sense of urgency, reflected in ‘top ten lists’, ‘how-tos’ and ‘hacks’ that promise ways to make your life and the world better, faster — hallmark features of our modern, harried age. These lists and posts are filled with well-intentioned strategies that are sure to get ‘liked’, ‘shared’ and ‘faved’, but for which there might be scant evidence of effect.

Have you seen research on highly productive people and organizations — their approaches, tools and strategies — that speaks to your specific circumstances? Probably not. The reason? There is a paucity of research that points to specifics, while there is much on generalized strategies. The problem is that you and I operate in a world specific to us, not generalized to everyone. It’s why population health data is useful for understanding how a particular condition or issue manifests across a society, but relatively poor at predicting individual outcomes.

Whether general or specific, three key qualities appear to be missing from much of the discussion and they might be the reason so little of what is said translates well into what is done: time, care, attention.

Time

When words and concepts like lean startup, rapid-cycle prototyping and quick pivoting dominate discussions of productivity and innovation, it is easy to fix our focus on speed. Yet there are many reasons to consider time and space as much as speed. Making the space to see things in the bigger picture allows us to act on what is important, not just what is urgent. This requires space — literal and figurative — within our everyday practice. Time lowers the emotional drive to rush, letting us attend to more at once and potentially see new patterns and different connections than had we charged headlong into what appeared to be the most obvious issue at first look.

If we are seeking change that lasts, why would we not design something that takes time to prepare, deliver and sustain? Our desire and impetus to gain speed comes at the cost of longevity in many cases. This isn’t to suggest that a rapid-fire initiative can’t produce long-term results. The space race between the United States and the Soviet Union in the 1950s and ’60s proves the long-term viability of short-term bursts of creative energy, but this might be the exception rather than the rule. Consider the timelessness of many classical architectural designs and how they were built with the idea that they would last, designed without the sense that time was passing quickly. They were built to last.

Care

Care is consideration applied to doing something well. It is tied to a number of other c-words, like competence. Those who apply their skills to an issue require, acquire and develop a level of competence in doing something. Tackling complex social and organizational problems requires a competence that comes with time and attention — hence the research suggesting that mastery may take as much as 10,000 hours of sustained, careful, deliberate practice to achieve. In an age of speed, this isn’t easily dealt with. Fast-tracked learning isn’t as possible as we think.

Care might also substitute for another c-word: craft. Craft is about building competence, practicing it, and attending to the materials and products generated through it. It’s not about mass production, but careful production.

Care is the application of focus and another c-word: compassion. Compassion is a response to suffering, which might be your own, that of your organization or community, or something for the world. Compassion is the motivational force aimed at helping alleviate the things that produce suffering; it includes empathy, concern, kindness, tolerance, and sensitivity, and it is the very thing that translates our intentions and desires for change into actions that are likely to be received. We react positively to those who show compassion towards us, and compassion has been shown to be a powerful driver of positive change in flourishing organizations (PDF).

And isn’t flourishing what we’re all about? Why would we want anything less?

Attention

The third factor, related to the others, is attention. Much has been written here and elsewhere about mindfulness as a means of enhancing focus, and of directing that focus to the right things: it’s a cornerstone of a socially innovative organization. Mindfulness has the benefit of clearing away ‘noise’, allowing clearer attention to the data (e.g., experience, emotion, research data) we’re presented with: the raw material for decision making. It’s an essential element of developmental evaluation and sensemaking.

Taken together, time, care and attention not only allow us to see and experience more of our systems; they allow us to better attend to what is important, not just what is urgent. They are a means of ensuring we do the right things, not the wrong things, righter.

In a world with more of almost everything, determining what is most important, most relevant and most impactful has never been more essential. And while there is a push for speed, for ‘more’, there has, paradoxically, never been a greater need to slow down, reduce and focus.

Thank you, reader, for your time, care and attention. You’ve given some of the most valuable things you have.

Image credit: Author

complexity, innovation, psychology

Jaded: Whether You Are a Plant or a Stone Makes All the Difference

2252329895_b07c43f404_o

An attempt to innovate — to do something new to produce value — is always fraught with risk and a high likelihood that things won’t go as planned, which can leave people jaded toward future efforts. Whether that metaphor of jade is one of a stone (static) or a plant (growth) makes all the difference.

Innovation is hot. Innovation is necessary. Innovation is your competitive advantage. Innovate or die.

You’ve probably heard one or all of these phrases, or one of their myriad variants. Innovation is a hot word. To innovate is to transform new thinking into new value, but the word is used euphemistically to represent all kinds of ‘hot’ things without appropriate framing. It’s not just doing something different; it’s producing something new that improves on the situation at hand, even if the solution is actually an old idea re-introduced.

A recent article in the online version of Harvard Business Review suggests that many companies are simply giving up, ceding the ‘innovation’ space to large firms with a reputation for innovation. Why? One reason cited is that the pace of social and technological change has created a situation where “many firms seem to be unable to keep up with the pace at which this development is unfolding.”

The painful experience of failure

Another reason might be the problem of failure. Failure has become another cool word in the language of business and social innovation (even government), to the point of being fetishized as something noble. The issue with failure is not just accepting that it can happen, but learning from it and acting on that learning. It also means understanding what failure is and whether an outcome is even best described in terms like “success” and “failure”. Too often in innovation, particularly social innovation, we don’t actually know what success looks like, so how can we use the term failure so readily?

Failure is a word with enormous negative cultural baggage. Despite all the positive rhetoric around failure, corporations, social enterprises and governments are judged on their ability to deliver what is expected of them. Expectations are really the key here. If you’re a corporation that promises to deliver a certain rate of return on investment over a specific time period, you’re going to be held to task for that. We can speak of failure positively all we’d like, but try explaining the ‘learning’ outcomes to a group of angry shareholders.

Politicians don’t get judged on their ability to manage complexity; they are judged on making and keeping promises — even if those promises are based on (overly) simplistic ways of viewing complex problems. As we entangle ourselves in more complex problems, the promise of a simple solution will be harder to come by. Yet it’s the hope for that solution that ultimately gets us. As I once read in a newsletter advertising an online dating service, in a very cheeky manner:

It’s not the rejection that kills you, it’s the hope.

It’s actually quite true. If you don’t expect to succeed, “failure” isn’t really that bad.

Lowered expectations, risk avoidance & path dependencies

When you’re jaded, you tend to lower your expectations. The online dating analogy above is also an apt descriptor of the ways lowered expectations change the very game of innovation in real ways in people’s lives. As divorce rates approach 50%, it is becoming common for people to start over sometime in their 30s and 40s and try, once again, to find love. What’s interesting in terms of dating, particularly if you’ve been dating a little, is that you face two big issues: 1) you’re a bit cautious about what you do or say, because you know things might not last and you want to conserve your energy, and 2) you’ve become accustomed to doing things on your own.

The result might be less adventurousness, more conservative thinking about the choice of partner, and a greater willingness to settle for what is rather than what could be (risk avoidance). An established pattern of living might also predispose you to look for partners who are a lot like you, which maintains a level of consistency (path dependence). An argument can be made that this is more about knowing yourself and your preferences than being set in your ways, but there is a fine line between that and resistance to change.

This is exactly what we see in organizations around innovation.

They have tried innovation before; it failed to deliver what they expected (probably because they set their expectations poorly, not realizing that the outcomes of innovation could be something other than what they had designed for); and now they don’t want to try. Or rather, they don’t want to try enough. This is why we see so many organizations trumpet themselves as innovative when what they are really doing is the most basic, simplistic form of innovation. Rather than a moonshot, they are looking simply to move the yardsticks a little.

Plant vs stone

Jade is both a plant and a stone. The jade plant is a solid, semi-broad-leafed plant well suited to dry climates and a variety of light conditions, making it a great houseplant. It’s adaptive, easily transplantable and hardy. Jade, as a stone, is relatively soft, and while it too is adaptable, once carved into a shape it’s no longer going to change.

The jade/jaded metaphor is meant to prompt consideration of the ways in which we approach developing our innovation potential. A jade plant is firm but flexible. It grows and changes over time, though it isn’t as free-flowing as other plants. The jade plant offers a useful metaphor for ensuring that lessons learned from past actions inform future strategy, but not to the point where the fear of risk calcifies the organization into a static state, unable to change.

A plant thrives largely because it has a steady stream of nutrients, water and sunlight and reasonably stable growing conditions — conditions that nonetheless can and will change over time. This consistency, along with requisite variety (in systems terms), is what keeps a plant alive and thriving. The same is true for an organization. Ongoing, steady innovation, consistency over time, and the occasional change in conditions to keep things on their toes (and used to adaptation) are all part of what makes an organization or individual innovative. Build in a regular practice, become a mindful organization (or practitioner), and consider changing the way you speak about innovation to yourself and others.

Bruce Lee would advocate that his students become like water. Innovators? They should become more like plants for that water.

Image Credit: Jade Plant by Andrew Rivett used under Creative Commons License. Thanks for sharing Andrew!