Elevating Design & Design Thinking

 

ElevateDesignLoft2.jpg

Design thinking has brought the language of design into popular discourse across different fields, but its failings threaten to undermine the benefits it brings if they aren’t addressed. In this third post in a series, we look at how Design (and Design Thinking) can elevate themselves above their failings and match the hype with real impact.

In two previous posts, I called on ‘design thinkers’ to get the practice out of its ‘bullshit’ phase, characterized by high levels of enthusiastic banter, hype, and promotion and little evidence, evaluation, or systematic practice.

Despite the criticism, it’s time for Design Thinking (and, more specifically, the field of Design) to be elevated beyond its current station. I’ve been critical of Design Thinking for years: its popularity has been helpful in some ways, problematic in others. Others have been critical, too. Bill Storage, writing in 2012 (in a post now unavailable), said:

Design Thinking is hopelessly contaminated. There’s too much sleaze in the field. Let’s bury it and get back to basics like good design.

Bruce Nussbaum, who helped popularize Design Thinking in the early 2000s, called it a ‘failed experiment’, seeking to promote the concept of Creative Intelligence instead. While many have called for Design Thinking to die, it’s not going to happen anytime soon. Since I first published a piece on Design Thinking’s problems five years ago, the practice has only grown. Design Thinking is going to continue to grow despite its failings, and that’s why it matters that we pay attention to it — and seek to make it better.

Lack of quality control, standardization or documentation of methods, and evidence of impact are among the biggest problems facing Design Thinking if it is to achieve anything substantive beyond generating money for those promoting it.

Giving design away, better

It’s hard to imagine that the concepts of personality, psychosis, motivation, and performance measurement from psychology were once unknown to most people. Yet, before the 1980s, much of the public’s understanding of psychology was confined to largely distorted beliefs about Freudian psychoanalysis, mental illness, and rat mazes. Psychology is now firmly ensconced in business, education, marketing, public policy, and many other professions and fields. Daniel Kahneman, a psychologist, won the Nobel Prize in Economics in 2002 for his work applying psychological and cognitive science to economic decision making.

The reason for this has much to do with George Miller, who, as President of the American Psychological Association, used his position to advocate that professional psychology ‘give away’ its knowledge to ensure its benefits were more widespread. This included creating better means of communicating psychological concepts to non-psychologists and generating the kind of evidence that could show its benefits.

Design Thinking is at a stage where we are seeing similar broad adoption beyond professional design into fields like business, education, the military, and beyond. While there has been much debate about whether design thinking as practiced by non-designers (like MBAs) is good for the field as a whole, there is little debate that it has become popular, just as psychology has.

What psychology did poorly was give so much away that it failed to engage other disciplines enough to support quality adoption and promotion and, simultaneously, weakened itself as newfound enthusiasts pursued training in those other disciplines. Now, some of the best psychological practice is done by social workers, and some of the most relevant research comes from areas like organizational science and new ‘sub-disciplines’ like behavioural economics.

Design Thinking is already being taught, promoted, and practiced by non-designers. What these non-designers often lack is the ‘crit’ and craft of design to elevate their designs. And what Design lacks is the evaluation, evidence, and transparency to elevate its work beyond itself.

So what next?

BalloonsEgypt.jpg

Elevating Design

As Design moves beyond its traditional realms of products and structures to services and systems (enabled partly by Design Thinking’s popularity) the implications are enormous — as are the dangers. Poorly thought-through designs have the potential to exacerbate problems rather than solve them.

Charles Eames knew this. He argued that innovation (which is what design is all about) should be a last resort and that it is the quality of the connections (ideas, people, disciplines and more) we create that determines what we produce and its impact on the world. Eames and his wife Ray deserve credit for contributing to the elevation of the practice of design through their myriad creations and their steadfast documentation of their work. The Eameses did not allow themselves to be confined by labels such as product designer, interior designer, or artist. They stretched their profession by applying craft, learning with others, and practicing what they preached in terms of interdisciplinarity.

It’s now time for another elevation moment. Designers can no longer be satisfied with client approval as the key criterion for success. Sustainability, social impact, and learning and adaptation through behaviour change are now criteria that many designers will need to embrace if they are to operate beyond the field’s traditional domains (as we are now seeing more often). This requires that designers know how to evaluate and study their work. They need to communicate with their clients better on these issues and they must make what they do more transparent. In short: designers need to give away design (and not just through a weekend design thinking seminar).

Not every designer must get a Ph.D. in behavioural science, but they will need to know something about that domain if they are to work on matters of social and service design, for example. Designers don’t have to become professional evaluators, but they will need to know how to document and measure what they do and what impact it has on those touched by their designs. Understanding research — which includes a basic understanding of statistics and quantitative and qualitative methods — is another area that requires shoring up.

Designers don’t need to become researchers, but they must have research or evaluation literacy. Just as it is becoming increasingly unacceptable that program designers from fields like public policy and administration, public health, social services, and medicine lack understanding of design principles, so is it no longer feasible for designers to be ignorant of proper research methods.

It’s not impossible. Clinical psychologists went from being mostly practitioners to scientist-practitioners. Professional social workers are now well-versed in research even if they typically focus on policy and practice. Elevating the field of Design means accepting that being an effective professional requires certain skills, and research and evaluation are now part of that skill set.

DesignWindow2.jpeg

Designing for elevated design

The work of taking up research doesn’t have to fall on designers alone — it can come from the very people who are attracted to Design Thinking. Psychologists, physicians, and organizational scientists (among others) can all provide the means to support designers in building their literacy in this area.

Adding research courses that go beyond ethnography and observation to give design students exposure to survey methods, secondary data analysis, ‘big data’, Grounded Theory approaches, and blended models for data collection is one option. Bring behavioural and data scientists into the design curriculum (and get designers into the curricula that train those professionals).

Create opportunities for designers to do research, publish, and present their research using the same ‘crit’ that they bring to their designs. Just as behavioural scientists expose themselves to peer review of their research, designers can do the same with their research. This is a golden opportunity for an exchange of ideas and skills between the design community and those in the program evaluation and research domains.

This last point is what the Design Loft initiative has sought to do. Now in its second year, the Design Loft is a training program aimed at exposing professional evaluators to design methods and tools. The aim is not to train them as designers, but to increase their literacy and confidence in engaging with Design. The Design Loft can do the same thing with designers, training them in the methods and tools of evaluation. It’s but one example.

In an age where interdisciplinarity is spoken of frequently, this provides a practical means to do it, and in a way that offers a chance to elevate design much as the Eameses did, as Milton Glaser did, and as George Miller did for psychology. The time is now.

If you are interested in learning more about the Design Loft initiative, connect with Cense. If you’re a professional evaluator attending the 2017 American Evaluation Association conference in Washington, the Design Loft will be held on Friday, November 10th.

Image Credit: Author

Beyond Bullshit for Design Thinking

QuestionLook.jpg

Design thinking is in its ‘bullshit’ phase, a time characterized by wild hype and popularity and little evidence of what it does, how it does it, or whether it can possibly deliver what it promises on a consistent basis. If design thinking is to be more than a fad, it needs to get serious about answering some important questions, going from bullshit to bullish in tackling important innovation problems. The time is now.

In a previous article, I described design thinking as being in its BS phase and argued that it was time for it to move on from that. Here, I articulate some things that can help get us there.

The title of that original piece was inspired by a recent talk by Pentagram partner Natasha Jen, where she called out design thinking as “bullshit.” Design thinking offers much to those who haven’t been given (or taken) creative license in their work before. It’s offered organizations that never saw themselves as ‘innovative’ a means to generate products and services that extend beyond the bounds of what they thought was possible. While design thinking has inspired people worldwide (as evidenced by the thousands of resources, websites, meetups, courses, and discussions devoted to the topic), the extent of its impact is largely unknown, overstated, and most certainly oversold as it has become a marketable commodity.

The comments and reaction to my related post on LinkedIn from designers around the world suggest that many agree with me.

So now what? Design thinking, like many fads and technologies that fit the hype cycle, is beset with a problem of inflated expectations driven by optimism and the market forces that bring a lot of poorly-conceived, untested products supported by ill-prepared and sometimes unscrupulous actors into the marketplace. To invoke Natasha Jen: there’s a lot of bullshit out there.

But there is also promising stuff. How do we nurture the positive benefits of this overall approach to problem finding, framing and solving and fix the deficiencies, misconceptions, and mistakes to make it better?

Let’s look at a few things that have the potential to transform design thinking from an over-hyped trend to something that brings demonstrable value to enterprises.

Show the work

ShowtheWork.jpg

The journey from science to design is a lesson in culture shock. Science typically begins its journey toward problem-solving by looking at what has been done before, whereas a designer typically starts with what they know about materials and craft. Thus, an industrial designer may have never made a coffee mug before, but they know how to build things that meet clients’ desires within a set of constraints and thus feel comfortable undertaking this job. This wouldn’t happen in science.

Design typically uses a simple criterion above all others to judge the outcomes of its work: is the client satisfied? So long as the time, budget, and other requirements are met, the key is ensuring that the client likes the product. Because this criterion is so heavily weighted on the outcome, designers often have little need to capture or share how they arrived at it, just that they did. Designers may also be reluctant to share their process because it is their competitive advantage, so there is an industry-specific culture that discourages opening that process to scrutiny.

Science requires that researchers open up their methods, tools, observations, and analytical strategy for others to view. The entire notion of peer review — which has its own set of flaws — is predicated on the notion that other qualified professionals can see how a solution was derived and provide comment on it. Scientific peer review is typically geared toward encouraging replication; however, it also allows others to assess the reasonableness of the claims. This is the critical part of peer review that requires scientists to adhere to a certain set of standards and show their work.

As design moves into a more social realm, designing systems, services, and policies for populations where there is no single ‘client’ and many diverse users, the need to show the work becomes imperative. Showing the work also allows others to build on the method. For example, design thinking speaks of ‘prototyping’, yet without a clear sense of what is prototyped, how it is prototyped, what the means of assessing the prototype’s value are, and what options were considered (or discarded) in developing it, it is impossible to tell whether this was really the best of many ideas or simply the one deemed most feasible to try.

This might not matter for a coffee cup, but it matters a lot if you are designing a social housing plan, a transportation system, or a health service. Designers can borrow from scientists and become better at documenting what they do along the way, what ideas are generated (and dismissed), how decisions are made, and what creative avenues are explored along the route to a particular design choice. This not only improves accountability but increases the likelihood of better input and ‘crit’ from peers. The absence of ‘crit’ in design thinking is among the biggest ‘bullshit’ issues that Natasha Jen spoke of.
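To make the idea concrete, here is a minimal sketch of what ‘showing the work’ could look like as a structured record kept alongside each prototype. It is purely illustrative: the language (Python), the `DesignRecord` structure, and every field name are my own assumptions, not an established standard or any firm’s actual practice.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DesignRecord:
    """One entry in a design team's audit trail for a prototype (hypothetical format)."""
    when: date                    # when the decision was made
    what_was_prototyped: str      # the artifact or concept being tested
    how: str                      # method: paper mock-up, role play, etc.
    assessment: str               # how the prototype's value was judged
    options_considered: list = field(default_factory=list)
    options_discarded: list = field(default_factory=list)
    rationale: str = ""           # why this option won over the others

# Example: documenting one step in a (fictional) service-design project
record = DesignRecord(
    when=date(2017, 11, 10),
    what_was_prototyped="Walk-in intake flow for a community health clinic",
    how="Paper storyboard walked through with three front-line staff",
    assessment="Staff rated clarity and time-to-complete against the current flow",
    options_considered=["kiosk self-check-in", "phone pre-registration"],
    options_discarded=["kiosk self-check-in"],
    rationale="Kiosk excluded clients with low literacy; the walkthrough surfaced this early",
)
print(record.what_was_prototyped, "->", record.rationale)
```

Even a log this simple captures what was prototyped, how it was assessed, and what was discarded along the way, which is exactly the material a peer needs in order to offer meaningful ‘crit’.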

Articulate the skillset and toolset

Creating2.jpeg

What does it take to do ‘design thinking’? The caricature is that of Post-it Notes, Lego, and whiteboards. These are valuable tools, but so are markers, paper, computer modeling software, communication tools like Slack or Trello, cameras, stickers… just about anything that allows data, ideas, and insights to be captured, organized, visualized, and transformed.

Using these tools also takes skill (despite how simple they are).

Facilitation is a key design skill when working with people and human-focused programs and services. So is conflict resolution. The ability to negotiate, discuss, sense-make, and reflect within the context of a group, a deadline, and other constraints is critical for bringing a design to life. These skills are not just for designers, but they have to reside within a design team.

There are other skills related to shaping aesthetics, manufacturing, service design, communication, and visual representation that can all contribute to a great design team, and these need to be articulated as part of a design thinking process. Many ‘design thinkers’ will point to the ABC Nightline segment that aired in 1999 titled “The Deep Dive” as their first exposure to ‘design thinking’. It is also what thrust the design firm IDEO into the spotlight; IDEO, more than any single organization, is credited with popularizing design thinking through its work.

What gets forgotten when people look at this program, where designers created a shopping cart in just a few days, is that IDEO brought together a highly skilled interdisciplinary team that included engineers, business analysts, and a psychologist. Much of the design thinking advocacy work out there talks about ‘diversity’, but that matters only when you have not just a diversity of perspectives but also the technical and scholarly expertise to make use of those perspectives. How often are design teams taking on human service programs aimed at changing behaviour without any behavioural scientists involved? How often are products created without any care for their aesthetics because there wasn’t a graphic designer or artist on the team?

Does this matter if you’re using design thinking to shape the company holiday party? Probably not. Does it if you are shaping how to deliver healthcare to an underserved community? Yes.

Design thinking can require both general and specific skillsets and toolsets, and these are not generic: they depend on the problem at hand.

Develop theory

DesignerUserManufacturer.jpg

A theory is not just the province of eggheaded nerds and something you had to endure in your college courses on social science. It matters when it’s done well. Why? As Kurt Lewin, one of the most influential applied social psychologists of the 20th century, said: “There is nothing so practical as a good theory.”

A theory allows you to explain why something happens, how causal connections may form, and what the implications of specific actions are in the world. Theories are ideas, often grounded in evidence and other theories, about how things work. Good theories can guide what we do and help us focus on what we need to pay attention to. They can be wrong or incomplete, but when done well a theory provides us the means to explain what happens and what can happen. Without it, we are left trying to explain the outcomes of actions with little recourse for repeating, correcting, or redesigning what we do, because we have no idea why something happened. Rarely — in human systems — is evidence for cause-and-effect so clear cut without some theorizing.

Design thinking is not entirely without theory. Some scholars have pulled together evidence and theory to articulate ways to generate ideas and decision rules for focusing attention, and there are some well-documented examples for guiding prototype development. However, design thinking itself — like much of design — is not strong on theory. There isn’t a strong theoretical basis for ascertaining why something produces an effect based on a particular social process, tool, or approach. As such, it’s hard to replicate such things or determine where something succeeded and where improvements need to be made.

It’s also hard to explain why design thinking should be any better than anything else that aims to enkindle innovation. By developing theory, designers and design thinkers will be better equipped to advance the practice and guide the focus of evaluation. Further, theory will help explain what design thinking does, what it can do, and why it might be suited (or ill-suited) to a particular problem set.

It also helps guide the development of research and evaluation scholarship that will build the evidence for design thinking.

Create and use evidence

Research2Narrow.jpg

Jeanne Liedtka and her colleagues at the Darden School of Business have been among the few to conduct systematic research into the use of design thinking and its impact. The early research suggests it offers benefit to companies and non-profits seeking to innovate. This is a start, but far more research is needed by more groups if we are to build a real corpus of knowledge to shape practice more fully. Liedtka’s work is setting the pace for where we can go and design thinkers owe her much thanks for getting things moving. It’s time for designers, researchers, and their clients to join her.

Research typically begins with ‘ideal’ cases to ensure sufficient control, influence, and explanatory power. If programs are ill-defined, poorly resourced, focused on complex or dynamic problems, have no clear timeline for delivery or expected outcomes, and lack the resources or leadership to document the work that is done, it is difficult, if not impossible, to tell what kind of role design thinking plays amid myriad factors.

An increasing amount of design thinking — in education, international development, social innovation, and public policy, to name a few domains of practice — is applied in just this kind of context. This is the messy area of life where research aimed at looking for linear cause-and-effect relationships and ‘proof’ falters, yet it’s also where the need for evidence is great. Researchers tend to avoid looking at these contexts because the results are rarely clear, the study designs require much energy, money, talent, and sophistication, and the ability to publish findings in top-tier journals is all the more compromised as a result.

Despite this, there is enormous potential for qualitative, quantitative, mixed-method, and even simulation research that isn’t being conducted into design thinking. This is partly because designers aren’t trained in these methods, but also because (I suspect) there is a reluctance among many to open design thinking up to scrutiny. Like anything on the hype cycle, design thinking is a victim of over-inflated claims about what it does, but that doesn’t necessarily mean it’s not offering a lot.

Design schools need to start training students in research methods beyond (in my opinion) the weak, simplistic approaches to ethnographic methods, surveys, and interviews that are currently on offer. If design thinking is to be taken seriously, it requires serious methodological training. Further, designers don’t need to be the most skilled researchers on the team: that’s what behavioural scientists bring. Bringing in the kind of expertise required to do the work is important if design thinking is to grow beyond its ‘bullshit’ phase.

Evaluate impact

DesignSavingtheWorld

From Just Design by Christopher Simmons

Lastly, if we are going to claim that design is going to change the world, we need to back that up with evaluation data. Chances are, design thinking is changing the world, but maybe not in the ways we always think or hope, or in the quantity or quality we expect. Without evaluation, we simply don’t know.

Evaluation is about understanding how something operates in the world and what its impact is. Evaluators help articulate the value that something brings and can support innovators (design thinkers?) in making strategic decisions about what to do, when to do it, and how to allocate resources.

The only time evaluation came up in my professional design training was when I mentioned it in class. That’s it. Few design programs of any discipline offer exposure to the methods and approaches of evaluation, which is unfortunate. Until last year, professional evaluators weren’t much better, with most having limited exposure to design and design thinking.

That changed with the development of the Design Loft initiative that is now in its second year. The Design Loft was a pop-up conference designed and delivered by me (Cameron Norman) and co-developed with John Gargani, then President of the American Evaluation Association. The event provided a series of short-burst workshops on select design methods and tools as a means of orienting evaluators to design and how they might apply it to their work.

This is part of a larger effort to bring design and evaluation closer together. Design and design thinking offer an enormous amount of potential for creating innovations, and evaluation brings the tools to assess what kind of impact those innovations have.

Getting bullish on design

I’ve witnessed firsthand how design (and the design thinking approach) has inspired people who didn’t think of themselves as creative, innovative, or change-makers to do things that brought joy to their work. Design thinking can be transformative for those who are exposed to new ways of seeing problems, conceptualizing solutions, and building something. I’d hate to see that passion disappear.

That will happen once design thinking starts losing out to the next fad. Remember the lean methodology? How about Agile? Maybe the design sprint? These are distinct approaches, but share much in common with design thinking. Depending on who you talk to they might be the same thing. Blackbelts, unconferences, design jams, innovation labs, and beyond are all part of the hodgepodge of offerings competing for the attention of companies, governments, healthcare, and non-profits seeking to innovate.

What matters most is adding value. Whether this is through ‘design thinking’ or something else, what matters is that design — the creation of products, services, policies, and experiences that people value — is part of the innovation equation. It’s why I like the term ‘design thinking’ relative to others operating in the innovation development space simply because it acknowledges the practice of design in its name.

Designers can rightfully claim ‘design thinking’ as a concept that is, broadly defined, central to their work, though far from the whole of it. Working with the very groups that have taken the ideas of design and applied them to business, education, and so many other sectors, it’s time for those with a stake in better design, and better thinking about what we design, to take design thinking beyond its bullshit phase and make it bullish about innovation.

For those interested in evaluation and design, check out the 2017 Design Loft micro-conference taking place on Friday, November 10th within the American Evaluation Association’s annual convention in Washington, DC. Look for additional events, training, and support for design thinking, evaluation, and strategy by following @CenseLtd on Twitter for updates about the Design Loft and by visiting Cense online.

Image credits: Author. The ‘Design Will Save The World’ images were taken from the pages of Christopher Simmons’ book Just Design.

Design thinking is BS (and other harsh truths)

Ideas&Stairs

Design thinking continues to gain popularity as a means for creative problem-solving and innovation across business and social sectors. Time to take stock and consider what ‘design thinking’ is and whether it’s a real solution option for addressing complex problems, over-hyped BS, or both. 

Design thinking has pushed its way from the outside to the front and centre of discussions on innovation development and creative problem-solving. Books, seminars, certificate programs, and even films are being produced to showcase design thinking and inspire those who seek to become more creative in their approach to problem framing, finding and solving.

Just looking through the Censemaking archives will find considerable work on design thinking and its application to a variety of issues. While I’ve always been enthusiastic about design thinking’s promise, I’ve also been wary of the hype, preferring to use the term design over design thinking when possible.

What’s been most attractive about design thinking has been that it’s introduced the creative benefits of design to non-designers. Design thinking has made ‘making things’ more tangible to people who may have distanced themselves from making or stopped seeing themselves as creative. Design thinking has also introduced a new language that can help people think more concretely about the process of innovation.

Design thinking: success or BS?

We now see designers elevated to the C-suite — including the role of university president in the case of leading designer John Maeda — and as thought leaders in technology, education, non-profit work and business in large part because of design thinking. So it might have surprised many to see Natasha Jen, a partner at the prestigious design firm Pentagram, do the unthinkable in a recent public talk: trash design thinking.

Speaking at the 99u Conference in New York this past summer, Jen called out what she sees as the ‘bullshit’ of design thinking and how it betrays much of the fundamentals of what makes good design.

One of Jen’s criticisms of design thinking is its absence of what designers call ‘crit’: the process of having peers — other skilled designers — critique design work early and often. While design thinking models typically include some form of ‘evaluation’ in them, this is hardly a rigorous process. There are few guidelines for how to do it or how to deliver feedback, and little recognition of who is best placed to deliver the crit to peers (there are even guides for those who don’t know about the critique process in design). It’s not even clear who the best ‘peers’ are for such a thing.

The design thinking movement has emphasized how ‘everyone is a designer.’ This has the positive consequences of encouraging creative engagement in innovation from everyone, increasing the pool of diverse perspectives that can be brought to bear on a topic. What it ignores is that the craft of design involves real skill and just as everyone can dance or sing, not everyone can do it well. What has been lost in much of the hype around design thinking is the respect for craft and its implications, particularly in terms of evaluation.

Evaluating design thinking’s impact

When I was doing my professional design training, I once got into an argument* with a professor who said: “We know design thinking works.” I challenged back: “Do we? How?” To which he responded: “Of course we do, it just does — look around.” (pointing to the room of my fellow students presumably using ‘design thinking’ in our studio course).

End of discussion.

Needless to say, the argument was — in his eyes — about him being right and me being a fool for not seeing the obvious. For me, it was about the fact that, while I believed the approach loosely called ‘design thinking’ offered something better than the traditional methods of addressing many complex challenges, I couldn’t say for sure that it ‘works’ and does ‘better’ than the alternatives. It felt like he was saying hockey is better than knitting.

One of the reasons we don’t know is that solid evaluation isn’t typically done in design. The criterion that designers typically use is client satisfaction with the product given the constraints (e.g., time, budget, style, user expectations). If a client says “I love it!”, that’s about all that matters.

Another problem is that design thinking is often used to tackle more complex challenges for which there may be inadequate examples to compare. We are not able to use a randomized controlled trial, the ‘gold-standard’ research approach, to test whether design thinking is better than ‘non-design thinking.’ The result is that we don’t really know what design thinking’s impact is on the products, services, and processes it is used to create, or at least not enough to compare it to other ways of working.

Showing the work

In grade school math class it wasn’t sufficient to arrive at an answer and simply declare it without showing your work. The broad field of design (and the practice of design thinking) emphasizes developing and testing prototypes, but ultimately it is the final product that is assessed. What is done on the way to the final product is rarely given much, if any, attention. Little evaluation is done on the process used to create a design using design thinking (or another approach).

The result is that we have little idea of the fidelity of implementation of a ‘model’ or approach when someone says they used design thinking. There is hardly any understanding of the dosage (amount), the techniques, the situations, and the human factors (e.g., skill level, cooperation, openness to ideas, personality) that contribute to the designed product, and little discussion of such things is made in design reports.

Some might argue that such rigorous attention to these aspects of design takes away from the ‘art’ of design or that it is not amenable to such scrutiny. While the creative/creation process is not a science, that doesn’t mean it can’t be observed and documented. It may be that comparative studies are impractical, but how do we know if we don’t try? What processes like the ‘crit’ do is open creators — teams or individuals — to feedback, alternative perspectives, and new ideas that could prevent poor or weak ideas from moving forward.

Bringing evaluation into the design process is a way to do this.

Going past the hype cycle

Gartner has popularized the concept of the hype cycle, which illustrates how ‘hot’ ideas, technologies and other innovations get over-sold, under-appreciated and eventually adopted in a more realistic manner relative to their impact over time.

 

1200px-Gartner_Hype_Cycle.svg.png

Gartner Hype Cycle (source: Wikimedia Commons)

 

Design thinking is most likely somewhere past the Peak of Inflated Expectations, but still near the top of the curve. For designers like Natasha Jen, design thinking is well into the Trough of Disillusionment (and may never escape). Design thinking is currently stuck in its ‘bullshit’ phase, and until it embraces more openness about the processes used under its banner, attention to the skill required to design well, and evaluation of the outcomes it generates, outspoken designers like Jen will continue to be dissatisfied.

We need people like Jen involved in design thinking. The world could benefit from approaches to critical design that produce better, more humane, and impactful products and services that benefit more people with less impact on the world. We could benefit greatly from having more people inspired to create and open to sharing their experience, expertise, and diverse perspectives on problems. Design thinking has this promise if it is open to applying some of its methods to itself.

*‘Argument’ implies that the other person was open to hearing my perspective, engaging in dialogue, and providing counter-points to mine. This was not the case.

If you’re interested in learning more about what an evaluation-supported, critical, and impactful approach to design and design thinking could look like for your organization or problem, contact Cense and see how they can help you out. 

Image Credit: Author

Complex problems and social learning

6023780563_12f559a73f_o.jpg

Adaptation, evolution, innovation, and growth all require that we gain new knowledge and apply it to our circumstances, or learn. While much focus in education is on how individuals attend, process and integrate information to create knowledge, it is the social part of learning that may best determine whether we simply add information to our brains or truly learn. 

Organizations are scrambling to convert what they do and what they are exposed to into tangible value. In other words: learn. A 2016 report from the Association for Talent Development (ATD) found that “organizations spent an average of $1,252 per employee on training and development initiatives in 2015”, with an average cost per learning hour of $82 and an average of 33 hours spent in training programs per year. Learning and innovation are expensive.

The massive marketplace for seminars, keynote addresses, TED talks, conferences, and workshops points to a deep pool of opportunities for exposure to content, yet when we look past these events to where and how such learning is converted into changes at the organizational level we see far fewer examples.

Instead of building more educational offerings like seminars, webinars, retreats, and courses, what might happen if organizations devoted resources to creating cultures for learning to take place? Consider how often you may have been sent off to some learning event, perhaps taken in some workshops or seen an engaging keynote speaker, been momentarily inspired, and then returned home to find yourself no better off in the long run. The reason is that you have no system — time, resources, organizational support, social opportunities in which to discuss and process the new information — and thus a potential learning opportunity turns into neural ephemera.

Or consider how you may have read an article on something interesting, relevant and important to what you do, only to find that you have no avenue to apply or explore it further. Where do the ideas go? Do they get logged in your head with all the other content that you’re exposed to every day from various sources, lost?

Technical vs. Social

My colleague and friend John Wenger recently wrote about what we need to learn, stating that our quest for technical knowledge to serve as learning might be missing a bigger point: what we need at this moment. Wenger suggests shifting our focus from mere knowledge to capability and states:

What is the #1 capability we should be learning?  Answer: the one (or ones) that WE most need; right now in our lives, taking account of what we already know and know how to do and our current situations in life.

Wenger argues that, while technical knowledge is necessary to improve our work, it’s our personal capabilities that require attention if learning is to take hold. These capabilities are always contingent: we humans live situated lives, and thus our learning must be applied to what we, in our situation, require. It’s not about what the best practice is in the abstract, but what is best for us, now, at this moment.

The usual ‘stuff’ we are exposed to under the guise of learning is so often decontextualized and presented to us without that sense of how, whether, or why it matters to us in our present situation.

To illustrate: I teach a course on program evaluation for public health students. No matter how many examples, references, anecdotes, or colourful illustrations I provide, most of my students struggle to integrate what they are exposed to into anything substantive from a practical standpoint. At least, not at first. Without the ability to apply what they are learning and expose the methods to the realities of a client, colleague, or context, they are left abstracting from the classroom to a hypothetical situation.

But, as Mike Tyson said so truthfully and brutally: “Every fighter has a plan until they get punched in the mouth.”

In a reflection on that quote years later, Tyson elaborated saying:

“Everybody has a plan until they get hit. Then, like a rat, they stop in fear and freeze.”

Tyson’s quote applies to much more than boxing and complements Wenger’s assertions around learning for capability. If you develop a plan knowing that it will fail the moment you get hit (and you know you’re going to get hit), then you learn for the capability to adapt. You build on another quote attributed to Dwight D. Eisenhower, who said:

“I have always found that plans are useless but planning is indispensable.”

Better social, better learning

Plans don’t exist in a vacuum, which is why they don’t always turn out. While sometimes a failed plan is due to poor planning, it is more likely due to complexity when dealing with human systems. Complexity requires a learning strategy different from those typically employed in so many educational settings: social connection.

When information is technical, it may be simple or complicated, but it has a degree of linearity to it where one can connect different pieces together through logic and persistence to arrive at a particular set of knowledge outcomes. Thus, didactic classroom learning or the many online course modules that require reading, viewing, or listening to a lesson work well to this effect. However, human systems require attention to changing conditions that are created largely in social situations. Thus, learning itself requires some form of ‘social’ to fully integrate information and to know what information is worth attending to in the first place. These are the kinds of capabilities that Wenger was talking about.

My capabilities within my context may look very much like those of my colleagues, but the kind of relationships I have with others, the experiences I bring, and the way I scaffold what I’ve learned in the past with what I require in the present are going to be completely different. The better organizations can create the social contexts for people to explore this, learn together, verify what they learn, and apply it, the more likely they are to reap far greater benefits from the investment of time and money they spend on education.

Design for learning, not just education

We need a means to support learning and to support the intentional integration of what we learn into what we do: learning fails in bad systems.

It also means getting serious about learning, meaning we need to invest in it from a social, leadership, and financial standpoint. Most importantly, we need to emotionally invest in it. Emotional investment is the kind of attractor that motivates people to act. It’s why we often attend to the small, insignificant ‘goals’ of every day, like responding to email or attending meetings, at the expense of larger, more substantial, long-term goals.

As an organization, you need to set yourself up to support learning. This means creating and encouraging social connections, time to dialogue and explore ideas, the organizational space to integrate, share and test out lessons learned from things like conferences or workshops (even if they may not be as useful as first thought), and to structurally build moments of reflection and attention to ongoing data to serve as developmental lessons and feedback.

If learning is meant to take place at retreats, conferences or discrete events, you’re not learning for human systems. By designing systems that foster real learning focused on the needs and capabilities of those in that system, you’re more likely to reap the true benefit of education and grow accordingly. That is an enormous return on investment.

Learning requires a plan and one that recognizes you’re going to get punched in the mouth (and do just fine).

Can this be done for real? Yes, it can. For more information on how to create a true learning culture in your organization and what kind of data and strategy can support that, contact Cense and they’ll show you what’s possible. 

Image credit: Social by JD Hancock used under Creative Commons license.

The logic of a $1000 iPhone

TimePhone.jpg

Today Apple is expected to release a new series of iPhone handsets, with the base price for one set at more than $1000. While many commentators are focusing on the price, the bigger issue is less about what these new handsets cost and more about what value they’ll hold.

The idea that a handset — once called a phone — that is the size of a piece of bread could cost upward of $1000 seems mind-boggling to anyone who grew up with a conventional telephone. The new handsets coming to market have more computing power built into them than was required for the entire Apollo space missions and dwarf even the most powerful personal computers from just a few years ago. And to think that this computing power all fits into your pocket or purse.

The iPhone pictured above was ‘state of the art’ when it was purchased a few years ago and has now been retired to make way for the latest (until today) version, not because the handset broke, but because it could no longer handle the demands placed on it by the software that powered it and the storage space required to house it all. This was never an issue when people used a conventional telephone, because it always worked and it did just one thing really well: allowed people to talk to each other at a distance.

Changing form, transforming functions

The iPhone is as much about technology as it is a vector of change in social life, one that is a product of and contributor to new ways of interacting. The iPhone (and its handset competitors) did not create the habits of text messaging, photo sharing, tagging, social chat, and augmented reality, but it wasn’t just responding to humans’ desire to communicate, either. Adam Alter’s recent book Irresistible outlines how technology has been a contributor to behaviours that we would now call addictive. This includes a persistent ‘need’ to look at one’s phone while doing other things, constant social media checking, and an inability to be fully present in many social situations without touching one’s handset.

Alter presents the evidence from a variety of studies and clinical reports that shows how tools like the iPhone and the many apps that run on it are engineered to encourage the kind of addictive behaviour we see permeating through society. Everything from the design of the interface, to the type of information an app offers a user (and when it provides it), to the architecture of social tools that encourage a type of reliance and engagement that draws people back to their phone, all create the conditions for a device that no longer sits as a mere tool, but has the potential to play a central role in many aspects of life.

These roles may be considered good or bad for social welfare, but in labelling such behaviours or outcomes in this way we risk losing the bigger picture of what is happening in our praise or condemnation. Dismissing something as ‘bad’ can mean we ignore social trends and the deeper meaning behind why people do things. By labelling things as ‘good’ we risk missing the harm that our tools and technology are doing and how they can be mitigated or prevented outright.

PhoneLookingCrowd.jpg

Changing functions, transforming forms

Since the iPhone was first launched, it’s moved from being a phone with a built-in calendar and music player to something that can now power a business, serve as a home theatre system, and function as a tour guide. As apps and software evolve to accommodate mobile technology, the ‘clunkiness’ of doing many things on the go, like accounting, taking high-quality photos, or managing data files, has been removed. Now laptops seem bulky, and even tablets, which have evolved in their power and performance to mimic desktops, are feeling big.

The handset now serves as our tether to each other and creates a connected world. Who wants to lug cables and peripherals to and from the office when you can do much of the work in your hand? It is now possible to run a business without a computer. It’s still awkward, but it’s genuinely possible. Financial tools like Freshbooks or Quickbooks allow entrepreneurs to do their books from anywhere, and tools like Shopify can transform a blog into a full-fledged e-commerce site.

Tools like Apple Pay have turned your phone into a wallet. Paying with your handset is now a viable option in an increasing number of places.

This wasn’t practical before and now it is. With today’s release from Apple, new tools like 3-D imaging, greatly improved augmented reality support, and enhanced image capture will all be added to the users’ toolkit.

Combine all of this with the social functions of text, chat, and media sharing and the handset has now transformed from a device to a social connector, business driver and entertainment device. There is little that can be done digitally that can’t be done on a handset.

Why does this matter?

It’s easy to get wrapped up in all of this as technological hype, but to do so is to miss some important trends. We may have concerns about the addictive behaviours these tools engender, the changes in social decorum the phone instigates, and the fact that it becomes harder to escape the social world when the handset is also serving as your navigation tool, emergency response system, and e-reader. But the demand to have everything in your pocket, not strapped to your back, sitting on your desk (and your kitchen table), and scattered over different tools and devices, comes from a desire for simplicity and convenience.

In the midst of the discussion about whether these tools are good or bad, we often forget to ask what they are useful for and not useful for. Socially, they are useful for maintaining connections, but they have been shown to be not so useful for building lasting, human connections at depth. They are useful for providing us with near-time and real-time data, but not as useful at allowing us to focus on the present moment. These handsets free us from our desks, but also keep us ‘tied’ to our work.

At the same time, losing your handset has enormous social, economic, and (potentially) security consequences. It’s no longer about missing your music or not being able to text someone: when most of one’s communications, business, and social navigation functions are routed through a single device, the implications of losing that device become enormous.

Useful and not useful/good and bad

By asking how a technology is useful and not useful, we can escape the dichotomy of good and bad, which causes us to miss the bigger picture of the trends we see. Our technologies are principally useful for connecting people to each other (even if the connection might be highly superficial), enabling quick action on simple tasks (e.g., shopping, making a reservation), finding simple information (e.g., Google search), and navigating unknown territory with known features (e.g., navigation systems). This is based on a desire for connection, a need for data and information, and a wish to alleviate fear.

Those underlying qualities are what makes the iPhone and other devices worth paying attention to. What other means have we to enhance connection, provide information and help people to be secure? Asking these questions is one way in which we shape the future and provide either an alternative to technologies like the iPhone or better amplify these tools’ offerings. The choice is ours.

There may be other ways we can address these issues, but thus far we haven’t found any that are as compelling. Until we do, $1000 for a piece of technology that does all this might be a bargain.

Seeing trends and developing a strategy to meet them is what foresight is all about. To learn more about how better data and strategy through foresight can help you, contact Cense.

Image credits: Author

 

A mindset for developmental evaluation

BoyWithGlasses.jpg

Developmental evaluation requires different ways of thinking about programs, people, contexts and the data that comes from all three. Without a change in how we think about these things, no method, tool, or approach will make an evaluation developmental or its results helpful to organizations seeking to innovate, adapt, grow, and sustain themselves. 

There is nothing particularly challenging about developmental evaluation (DE) from a technical standpoint: for the most part, a DE can be performed using the same methods for data collection as other evaluations. What sets DE apart from those other evaluations is less the methods and tools than the thinking that goes into how those methods and tools are used. This includes the need to ensure that sensemaking is part of the data analysis plan, because it is almost certain that some, if not the majority, of the data collected will not have an obvious meaning or interpretation.

Without developmental thinking and sensemaking, a DE is just an evaluation with a different name

This is not a moot point: the failure of organizations to adopt a developmental mindset toward their programs and operations is (likely) the single biggest reason why DE often fails to live up to its promise in practice.

No child’s play

If you were to ask a five-year-old what they want to be when they grow up, you might hear answers like firefighter, princess, train engineer, chef, zookeeper, or astronaut. Some kids will grow up and become such things (or marry accordingly, for those few seeking to become princesses, or they’ll work for Disney), but most will not. They will become things like sales account managers, marketing directors, restaurant servers, software programmers, accountants, groundskeepers, and more. While this is partly about having the opportunity to pursue a career in a certain field, it’s also about changing interests.

A five-year-old that wants to be a train engineer might seem pretty normal, but one that wants to be an accountant specializing in risk management in the environmental sector would be considered odd. Yet it’s perfectly reasonable to speak to a 35-year-old and find them excited about being in such a role.

Did the 35-year-old who wanted to be a firefighter when they were five but became an accountant fail? Are they a failed firefighter? Is the degree to which they fight fires in their present-day occupation a reasonable indicator of career success?

It’s perfectly reasonable to plan to be a princess when you’re five, but not if you’re 35 or 45 or 55 years old unless you’re currently dating a prince or in reasonable proximity to one. What is developmentally appropriate for a five-year-old is not for someone seven times that age.

Further, is a 35-year-old a seven-times better five-year-old? When you’re ten, are you twice the person you were when you were five? Why is it OK to praise a toddler for sharing, not biting or slapping their peers, and eating all their vegetables, and weird to do the same with someone in good mental health in their forties or fifties?

It has to do with a developmental mindset.

Charting evolutionary pathways

We know that as people develop through stages, ages, and situations, the knowledge, interests, and capacities they have will change. We might be the same person and also a different person than the one we were ten years ago. The reason is that we evolve and develop based on a set of experiences, genetics, interests, and opportunities that we encounter. While there are forces that constrain these adaptations (e.g., economics, education, social mobility, availability of and access to local resources), we still evolve over time.

DE is about creating the data structures and processes to understand this evolution as it pertains to programs and services and to help guide meaningful designs for evolution. DE is a tool for charting evolutionary pathways and documenting the changes over time. Just as a parent might put marks on the wall to chart a child’s growth, take pictures at school, or write in a journal, a DE does much the same thing (even with similar tools).
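As a purely hypothetical sketch of what such ‘marks on the wall’ could look like for a program (the structure and field names here are my own assumptions, not a prescribed DE format), a developmental evaluator might keep a timestamped log of observations, interpretations, and adaptations so the evolutionary pathway can be reconstructed later:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DevelopmentalLogEntry:
    """A timestamped entry charting a program's evolution (illustrative only)."""
    when: date
    observation: str     # what was noticed: data, an event, a pattern
    interpretation: str  # the sense the team made of it at the time
    adaptation: str      # what, if anything, the program changed in response

# A fictional program's pathway, reconstructed from two entries
log = [
    DevelopmentalLogEntry(
        when=date(2017, 1, 15),
        observation="Drop-in attendance fell after the venue change",
        interpretation="The new location is harder to reach by transit",
        adaptation="Piloted a shuttle; began tracking participants' travel time",
    ),
    DevelopmentalLogEntry(
        when=date(2017, 3, 2),
        observation="Shuttle riders attend twice as often as non-riders",
        interpretation="Access, not interest, was the real constraint",
        adaptation="Made the shuttle permanent; retired the old outreach ads",
    ),
]

# Reading the log in order shows the evolutionary pathway, not just the outcome
for entry in sorted(log, key=lambda e: e.when):
    print(entry.when, "|", entry.observation, "->", entry.adaptation)
```

The point is not the tooling but the habit: recording what changed, when, and why, so that development can be seen rather than inferred after the fact.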

As anyone with kids will tell you, there are only a handful of decisions a parent can make that will have sure-fire, predictable outcomes when implemented. Many of them are arrived at through trial and error, and some that work when a child is four won’t work when the child is four and five months. Some decisions will yield outcomes that approximate an expected outcome and some will generate entirely unexpected outcomes (positive and negative). A good parent is one who pays attention to the rhythms, flows, and contexts that surround their child and themselves, with the effort to be mindful, caring, and compassionate along the way.

This means there is no clear, specific prototype for a good parent that can reliably be matched to any kid, nor any highly specific, predictable means of determining who is going to be a successful, healthy person. Still, many of us manage to have kids we can be proud of, careers we like, friendships we cherish, and intimate relationships that bring joy, despite having no means of predicting how any of those will go with consistency. We do this all the time because we approach our lives and those of our kids with a developmental mindset.

Programs as living systems

DE is at its best a tool for designing for living systems. It is about discerning what is evolving (and at what rate/s) and what is static within a system and recognizing that the two conditions can co-exist. It’s the reason why many conventional evaluation methods still work within a DE context. It’s also the reason why conventional thinking about those methods often fails to support DE.

Living systems, particularly human systems, are often complex in their nature. They have multiple, overlapping streams of information that interact at different levels and time scales and to different effects, inconsistently or at least in a pattern that is only ever partly knowable. This complexity may include simple relationships and more complicated ones, too. Just as a conservation biologist looking at a changing landscape can understand which changes are happening quickly and which are not, and which relationships are forming and which are less discernible, so too can an evaluator looking at a program.

As evaluators and innovators, we need to consider how our programs and services are living systems. Even something as straightforward as the restaurant industry, where food is sought and ordered, prepared, delivered, consumed, and then finished, has elements of complexity to it. The dynamics of real-time ordering and tracking, delivery, shifting consumer demand, the presence of mobile competitors (e.g., food trucks), a changing regulatory environment, novelty concepts (e.g., pop-ups!), and the seasonality of food demand and supply have changed how the food preparation business is run.

A restaurant might no longer be just a bricks-and-mortar operation, but a multi-faceted, dynamic food creation environment. Even a restaurant that is good at what it does can deliver consistently great food and service and still fail if everything around it is changing. It may need to change to stay the same.

This can only happen if we view our programs as living systems and create evaluation mechanisms and strategies that treat them that way. That means adopting a developmental mindset within the organization, because DE can’t exist without it.

If a developmental evaluation is what you need or you want to learn more about how it can serve your needs, contact Cense and inquire about how they can help you. 

Image Credit: Thinkstock used under license.

Learning fails in bad systems

Enormous energy is spent on developing strategies to accomplish things, with comparatively little attention paid to the systems in which they are deployed. A good strategy works by design, and that means designing systems that improve the likelihood of its success rather than fight against it. Nowhere is this truer than in the effort to learn on the job.

A simple search of the literature — gray or academic — will turn up an enormous volume of resources on how to design, implement, and support learning for action in organizations. At the individual level, there are countless* articles on personal change, self-improvement, and performance ‘hacks’ that individuals can use to better themselves and supposedly achieve more in what they do.

Psychology and related behavioural sciences have spent inordinate time learning how individuals and organizations change by emphasizing specific behaviours, decision processes, and data that can support action. A close inspection will find that relatively few strategies produce consistent results and this has to do less with execution, skill or topic and more with the system in which these strategies are introduced.

To illustrate this, consider the role of learning in the organization and how our strategies to promote it ultimately fail when our systems are not designed to support it.

Knowledge integration: A case study

Consider the example of attending a conference as a means of learning and integrating knowledge into practice.

Surajit Bhattacharya published a primer on how to get value from conferences in medicine, pointing to tips and strategies a medical practitioner can use, such as arriving a day early (so you’re not groggy), planning out your day, and being social. These are all practical, logical suggestions, yet they are premised on a number of things we might call system variables. These include:

  • The amount of control you have over your schedule week-to-week.
  • The availability of transportation and accommodation options that suit your schedule, budget, and preferences.
  • The nature and type of work you do, including the number of hours and the intensity of the work you perform in a typical week. This will determine how much energy you have and your readiness to be attentive.
  • The volume of email and other digital communications (e.g., messages and updates via social media, chat, project management platforms) you receive daily and the nature of those messages (e.g., their urgency and importance).
  • The amount and nature of travel required to attend the event, and the amount of travel you had beforehand.
  • The level of rest you’ve had. Sleep amount, timing, and quality all factor into how rested you are. Add in opportunities for activities like walking, exercise, or stretching, and we see a number of factors that could influence learning performance.
  • The setting. The lighting, air quality and air flow, seat arrangement, room acoustics, and access to some natural light are all factors in our ability to attend to and engage with a learning event.
  • The quality and format of the content and its delivery. Speaker quality, preparation, content and overall performance will all contribute to the ability to convey information and engage the audience.
  • Food and drink. Are you eating the kinds of foods and beverages that enable your body’s performance? Do you have access to these foods and drinks? Are they served at times that suit your body?
  • Your level of comfort and skill at engaging strangers. This matters if you’re more introverted, dislike small talk, or are not energized by others.

These are all platform issues: conditions that determine whether motivation and energy can be channeled into focusing on and engaging with learning content. The fewer of these factors in place, the greater the energy expenditure required of the learner, as the rough sketch below illustrates.
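In this sketch, the factor names paraphrase the list above and the scoring rule is invented purely for illustration:

```python
# Paraphrased from the list above; the set membership and scoring are illustrative.
PLATFORM_FACTORS = {
    "schedule control", "workable travel and lodging", "manageable workload",
    "manageable message volume", "limited travel burden", "adequate rest",
    "supportive physical setting", "quality content and delivery",
    "suitable food and drink", "comfort engaging strangers",
}

def learning_readiness(factors_present: set[str]) -> float:
    """Share of platform factors in place; the lower the score, the more
    energy the learner must spend compensating for the platform."""
    return len(factors_present & PLATFORM_FACTORS) / len(PLATFORM_FACTORS)

# A learner with only two factors in place starts at a steep disadvantage.
print(learning_readiness({"adequate rest", "quality content and delivery"}))  # 0.2
```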

Learning within systems

W. Edwards Deming noted that most performance problems in any organization are due to processes and systems (estimated at up to 85% or more) rather than individual employees. While Deming was referring largely to manufacturing contexts, the same might be said for learning.

Consider our conference example from earlier. We’ve already outlined the factors that could contribute to learning at the conference itself, but let’s extend the case to what happens afterward. After all, a surgeon, engineer, computer programmer, law clerk, or carpenter isn’t going to practice their craft at the conference; they’ll do it when they return to regular work.

Now consider what our attendee encounters after they have made the trip home to apply this newfound learning:

  • A backlog of emails, phone messages, and other correspondence that has been left untouched, scantly attended to, or fully managed. In the first case, the backlog may be high and require considerable time and energy to ‘catch up’ on, though at least the learner was fully present to perform the many activities suggested by Bhattacharya in the earlier article. In the second case, there is a higher-than-usual amount to attend to and the learner may have been selectively disengaged from the learning event. In the third, the learner returns to usual life without a backlog, but may have devoted more attention to the usual correspondence than to actually learning.
  • A backlog of meetings. Scheduled meetings, calls, or other events that require co-presence (virtual or physical) and were put off due to travel are now picked up.
  • A backlog of administrative tasks. Submitting receipts and conference expenses, along with regular accounting or administrative tasks, were either left untouched or, in the case of submitting expenses, unlikely or impossible to do until the traveller returned.
  • Fatigue. Sitting in a conference can be exhausting, particularly because of the conditions of the rooms, the volume of content and the break in the routine of every day (which can be energizing, too). Add in any travel issues that might arise and there is a reasonable chance that a person is not in an optimal state to take what they have been exposed to and apply it.
  • The usual organizational processes and structures. Are there opportunities to reflect upon, discuss, and process what has been learned with others, and spaces to apply those lessons directly with appropriate feedback? How often have we been exposed to inspiring or practical content only to find few opportunities to apply it upon our return before the details of the lessons fade?

It’s not reasonable to expect optimal conditions in our work much of the time, if ever. However, as you can see, many factors contribute to our capacity to learn and to the energy needed to take what we’ve been exposed to and integrate it into our work. The fewer of these supports in place, the greater the likelihood that the investment in the learning experience will be lost.

An organization or individual requires a platform for learning: systems that allow learners to be at their best and provide a means for them to take what they learn and apply it — if it’s valuable. Otherwise, why invest in it?

This isn’t to say that no good can come from a conference. But if the main focus is on actual learning and on applying knowledge for the betterment of the organization and the individual, why would we not invest in the platform to make use of it rather than discarding it?

Rethinking our systems

When I was doing evaluation work in continuing medical education, I was amazed to see how often learning events were held at 7 or 8 am. The rationale was that these times were tied to shift changes at hospitals and were the one time of day when most physicians were least likely to have other appointments. This was also the time when physicians were either highly fatigued from a night shift, had just battled traffic on their commute, or were planning the rest of their day ahead — all circumstances in which they might be least focused on actually learning.

This choice of time was made for scheduling purposes, not for learning purposes. Yet the stated purpose of continuing education was to promote learning and its various outcomes. Here, the strategy of exposing medical professionals to necessary, quality content at the time that appeared most convenient for everyone is an example of an idea that had logic to it, but ultimately failed in most regards.

How? If one looked at the evaluation data, the results typically suggested this strategy wasn’t so bad. Post-event surveys most often showed consistently high overall ratings. Yet a closer look at the data raises some questions.

For example, the questions asked to assess impact were things like: did the presenter speak clearly? Did the presenter provide the content they said they would? In most cases, participants were asked whether the speaker arrived on time, presented what they said they would, and were intelligible, and whether there was a chance the learner might find the material useful. None of this had any real bearing on whether the content was appropriate, impactful, or applied in practice. That is because the system of evaluation was based on a model of knowledge transmission: content is delivered to a person and, assuming the content is good, the lesson is learned.

We know this to be among the weakest forms of moving knowledge to action, and certainly not one suited to more complex situations or conditions, particularly in health systems. Yet it is still what prevails.

Design for learning

If you’re seeking to promote learning and create a culture where individuals across an organization can adapt, develop, and grow, you need much more than simply sending people to conferences, hosting seminars, providing books and other materials, or showing instructional videos. Without a means to integrate and promote that new knowledge as part of a praxis, organizations and individuals alike will continue to get frustrated, lag in their efforts to anticipate and respond to changing conditions, and ultimately fail to achieve anything close to their potential.

Designing for learning is as much about the context in which a curriculum is delivered, and how learners are set up to engage with it in their organizations and everyday lives, as it is about the curriculum itself.

*This is literally the case because the volume of new articles being published daily is so high.

If you’re looking to create learning systems in your organization, visit Cense to explore what it can do for you in shaping your strategy and evaluation to support sustainable, impactful learning for complex conditions. 

Image credit: “Platform” by Martin L is licensed under CC BY 2.0

Reframing change


Change is one of the few universal constants: things — people, planet, galaxy — are always in some state of movement, even if it’s imperceptible. Change is also widely discussed and desired, yet often never realized, in part because we’ve treated something nuanced as though it were simple. It’s time for that to change.

For something so omnipresent in our universe, change is remarkably mysterious.

Despite the enormous attention paid to the concepts of change, innovation, creation, creativity, and the like, we have relatively little knowledge of change itself. A look at the academic literature would suggest that most human change is premeditated, planned, and rational. Much of this body of literature focuses on health behaviours and individual-level change, draws on a narrow band of ‘issues’, and over-relies on linear thinking. At the organizational level, evidence on the initiation, success, and management of change is scattered, contradictory, and generally bereft of clear, specific recommendations on how to deal with change. Social and systems change are even more elusive, with much written on concepts like complexity and system dynamics but little evidence to guide how those concepts are to be practically applied.

Arguments can be made that some of the traditional research designs don’t work for understanding complex change and the need to match the appropriate research and intervention design to the type of system in order to be effective.  These are fair, useful points. However, anyone engaged in change work at the level where the work is being done, managed and led might also argue that the fit between change interest, even intention, and delivery is far lower than many would care to admit.

The issue is that without the language to describe what it is we are doing, seeing and seeking to influence (change) it’s easy to do nothing — and that’s not an option when everything around us is changing.

Taking the plunge

“The only way to make sense out of change is to plunge into it, move with it, and join the dance.” – Alan Watts

Dogs, unlike humans, never take swim lessons. Yet a dog can jump into a lake for the first time and start swimming by instinct. Humans don’t fare as well, which is perhaps why we tend to pause when a massive change (like hopping into a pool or a lake) presents itself, and why we rely on both contemplation and action — praxis — to do many things for the first time. Still, spend any time near a cottage or pool in the summer and you’ll see people swimming in droves.

The threat of water, of change, or of the unknown doesn’t prevent humans from swimming, riding a bike, playing a sport, or starting a new relationship, despite the real risks (emotional, physical, and otherwise) that come with all of them.

It’s funny, then, that we have such a hard time drawing praxis, patience, and sensemaking into our everyday work in a manner that supports positive change rather than merely reactive change. The more we learn about what really supports intentional change, and the more we create the conditions that support it, the more likely we’ll be swimming and not just stuck on the shore.

Whatever it takes

“If you don’t like change, you’re going to like irrelevance even less.” — General Eric Shinseki, retired Chief of Staff, U.S. Army

“It’s just not a good time right now”

“We’re really busy”

“I’m just waiting on (one thing)”

“We need more information”

These are some of the excuses that individuals and organizations give for not taking action that supports positive change, whatever that might be. Consultants have a litany of stories about clients who hired them to support change, develop plans, even set out things like SMART goals, only to see little concrete action take place; horses are led to water, but nothing is consumed.

One of the problems with change is that it is lumped into one large category and treated as if it were all the same thing: to make or become different (verb), or an act or instance of making or becoming different (noun). It’s not. Just as waves, moods, and decision-making strategies come in many forms, so too does change. Perhaps it is because we continue to view change as a monolithic ‘thing’, without the nuance we afford other similarly important topics, that we have such trouble with it. It’s why surfers have a language for waves and the conditions around them: they want to be better at riding them, living with them, and knowing when to fear and when to embrace them.

What the various forms of change do share is the threat that comes from not taking them seriously. As the quote above articulates, the threat of not changing is real even if it won’t be realized right away. Irrelevance might come because you are no longer doing what’s needed, no longer offering value, or simply no longer effective. Unfortunately, by the time most realize they are becoming irrelevant, they already are.

Doing whatever it takes requires knowing what it takes, and that involves a better sense of what the ‘it’ (change) is.

Surfing waves of change

To most of us, waves on the beach are classified as ‘big’ or ‘small’ or something similarly simple. To a surfer, the conversation about a wave is far more delicate and nuanced, and far less simplistic. A surfer looks at things like wind speed, water temperature, the location and length of the ‘break’, the vertical and horizontal position of the wave, and the length of time it takes to form. Surfers might have different names for these waves, or no words at all, just feelings, but they can discern differences and make adjustments based on those distinctions.

When change is discussed in our strategic planning or organizational change initiatives, it is often described in terms of what it does rather than what it is. Change is described as ‘catastrophic‘ or ‘disruptive‘ or simply hard, but rarely much more, and that is a problem for something so pervasive, important, and influential in our collective lives. It is time to articulate a taxonomy of change that gives change agents, planners, and everyone else a better vocabulary for articulating what they are doing, what they are experiencing, and what they perceive.

By creating language better suited to the actual problem, we move one step closer to addressing change-related problems, adapting to them, and preventing them, rather than simply avoiding them as we do now.

Time to take the plunge, get into the surf and swim around.

Image credit: June 17, 2017 by Mike Sutherland used under Creative Commons License via Flickr. Thanks for sharing Mike!

Exploding goals and their myths


Goal-directed language guides much of our social policy, investment, and quests for innovation without much thought about what that means in practice. Looking at how ideas start and where they carry us might give us reason to pause when fashioning goals, and to ask whether we need them at all. 

In a previous article, I discussed the problems with goals for many of the issues organizations and networks deal with. (Thanks to the many readers who offered comments and kudos, and who alerted me that subscribers received the wrong version, minus part of the second paragraph!) The target was the use of SMART goal-setting and the many presumptions it makes that rarely hold true.

This follow-up discusses how focusing on the energy directed toward a goal, and integrating that focus more tightly with how we organize our actions at the outset, might offer a better option than addressing the goals themselves.

Change: a matter of energy (and matter)

goal |ɡōl| noun: the object of a person’s ambition or effort; an aim or desired result • the destination of a journey

A goal is a call to direct effort (energy) toward an object (real or imagined). Without energy and action, the goal is merely a wish. Thus, if we are to understand goals in the world we need to have some concept of what happens between the formation of the goal (the idea, the problem to solve, the source of desire), the intention to pursue such a goal, and what happens on the journey toward that goal. That journey may involve a specific plan or it may mean simply following something (a hunch, a ‘sign’ — which could be purposeful, data-driven or happenstance, or some external force) along a pathway.

SMART goals, and most of the goal-setting literature, rest on the assumption that a plan is a critical success factor in accomplishing a goal.

If you follow SMART (Specific, Measurable, Attainable, Realistic, and Time-bound, or Timely), the plan needs to have these qualities attached to it. This approach makes sense when your outcome is clear and the pathway to achieving it is also reasonably clear, as with smoking cessation, drug or alcohol use reduction, weight loss, and exercise. It’s the reason so much of the behaviour change literature includes goals: most of it involves studies of these kinds of problems, which have a clear, measurable outcome (even if that outcome has some variation to it). You smoke cigarettes or you don’t. You weigh X kilograms at this time point and Y kilograms at that one.
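One way to see why SMART fits such problems is that the goal itself can be written down as data. Here is a hypothetical sketch (the field names and figures are mine, not part of the SMART literature):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SmartGoal:
    specific: str    # what, exactly, is to change (Specific)
    measure: str     # how progress is quantified (Measurable)
    baseline: float  # where we start (grounds Attainable and Realistic)
    target: float    # where we aim to be
    deadline: date   # when it is due (Time-bound)

    def progress(self, current: float) -> float:
        """Fraction of the distance from baseline to target covered so far."""
        return (self.baseline - current) / (self.baseline - self.target)

# X kilograms at one time point, Y at another: the whole journey is measurable.
goal = SmartGoal("Reduce body weight", "kilograms",
                 baseline=90.0, target=80.0, deadline=date(2018, 12, 31))
print(goal.progress(current=85.0))  # 0.5, i.e., halfway there
```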

These outcomes (goals) are the areas where the energy is directed, and there is ample evidence to support the means of getting to the goal, the energy (actions) used to reach it, and the moment it is achieved. (Of course, there are things like relapse, temporary setbacks, and non-linear changes, but researchers don’t particularly like to deal with these as they complicate things, something clinicians know all too well.)

Science, particularly social science, has a well-noted publication bias toward studies that show something significant happened — i.e., that show change. Scientists know this and thus consciously and unconsciously pick problems, models, methods, and analytical frameworks that better allow them to show, with confidence, that something happened (or clearly didn’t). Thus we have entire fields of knowledge, like behaviour change, that are heavily biased by models, methods, and approaches designed for the kinds of problems that make for good, publishable research. That’s fine for certain problems, but it doesn’t help us address the many that don’t fit this way of seeing the world.

Another problem lies less with the energy than with the matter. We look at specific, tangible outcomes (weight, presence of cigarettes, etc.) and pay little attention to the energy directed outward. Further, these perspectives assume a largely linear journey. What if we don’t know where we’re going? Or don’t know what, specifically, it will take to get to our destination (see my previous article for some questions on this)?

Beyond carrots & sticks

The other area where there is evidence to support goals comes from management and the study of executives or ‘leaders’ (i.e., those who are labelled leaders, perhaps because of title or role, though whether they actually inspire real, productive followership is another matter). These leaders call out a directive and their employees respond. If employees don’t respond, they might be fired or re-assigned — two outcomes that are not particularly attractive to most workers. On the surface this seems like a remarkably effective way of getting people motivated to do something or reach a goal, and for some problems it works well. However, those problem sets are small and specific.

Yet, as much of the research on organizational behaviour has shown (PDF), the ‘carrot and stick’ approach to motivation is highly limited and ineffective in producing long-term change and certainly organizational commitment. Fostering self-determination, or creating beauty in work settings — something not done by force, but by co-development — are ways to nurture employee happiness, commitment and engagement overall.

A 2009 study, appropriately titled ‘Goals Gone Wild’ (PDF), looked at the systemic side-effects of goal-setting in organizations and found: “specific side effects associated with goal setting, including a narrow focus that neglects non-goal areas, a rise in unethical behavior, distorted risk preferences, corrosion of organizational culture, and reduced intrinsic motivation.” The authors go on to say in the paper — right in the abstract itself!: “Rather than dispensing goal setting as a benign, over-the-counter treatment for motivation, managers and scholars need to conceptualize goal setting as a prescription-strength medication that requires careful dosing, consideration of harmful side effects, and close supervision.”

Remember the last time you were in a meeting when a senior leader (or anyone) ensured that there was sufficient time, care and attention paid to considering the harmful side-effects of goals before unleashing them? Me neither.

How about the ‘careful dosing’ or ‘close supervision’ of activities once goal-directed behaviour was put forth? That doesn’t happen much, because process-focused evaluation and the related ongoing sense-making is something that requires changes in the way we organize ourselves and our work. And as a recent HBR article points out: organizations like to use the excuse that organizational change is hard as a reason not to make the changes necessary.

Praxis: dropping dualisms

The absolute dualism of goal + action is as false as that of theory + practice or thought + activity. There are areas, like those mentioned above, where that conception might be useful, yet they are selective and restrictive and can keep us focused on a narrow band of problems and activity. Climate change, healthy workplaces, building cultures of innovation, and creating livable cities and towns are not problem sets with a single answer, a straightforward path, specific goals, or boundless arrays of evidence guiding how to address them with high confidence. They do require a lot of energy, pivoting, adapting, sense-making, and collaboration. They are also design problems: they are about making the world we want and reacting to the world we have at the same time.

If we’re to better serve our organizations and their greater purpose, leaders, managers, and evaluators would be wise to focus on the energy being used (by whom, when, how, and to what effect) at closer intervals, to understand the dynamics of change and not just its outcomes. This approach is oriented toward praxis: an orientation that sees knowledge, wisdom, learning, strategy, and action as combined processes that ought not to be separated. We learn from what we do, and that informs what we do next and what we learn further. It’s also about focusing on the process of design, the creation of the world we live in.

If we position ourselves as praxis-oriented individuals or organizations, evaluation becomes part of regularly attending, through data and sensemaking, to the systems we design to support goals or outcomes. Strategy is linked to this evaluation, and the outcomes that emerge are what come of our energy. Design is how we put it all together. This means dropping our dualisms and focusing more on integrating ourselves, our aspirations, and our activities toward achieving something that might be far greater than any goal we could devise.

Image credit: Author

Smart goals or better systems?


If you’re working toward some sort of collective goal — as an organization, a network, or even an individual — you’ve most likely been asked to use SMART goal setting to frame your task. While SMART is a popular tool for management consultants and scholars, does it make sense when you’re looking to make inroads on complex, unique, or highly volatile problems? Or does the answer lie in the systems we create to advance goals in the first place?

Goal setting is nearly everywhere.

Globally we had the UN-backed Millennium Development Goals and now have the Sustainable Development Goals, and a look at the missions and visions of most corporations, non-profits, government departments, and universities will reveal language framed in terms of goals, either explicitly or implicitly.

A goal, for these purposes, is:

goal |ɡōl| noun: the object of a person’s ambition or effort; an aim or desired result • the destination of a journey

Goal setting is the process of determining what you seek to achieve, usually combined with mapping some form of strategy for achieving it. Goals can be challenging enough when a single person is determining what they want, need, or feel compelled to do; even more so when aggregated to the level of an organization or a network.

How do you keep people focused on the same thing?

A look at the literature finds one approach with a visible presence: setting SMART goals. SMART is an acronym standing for Specific, Measurable, Attainable, Realistic, and Time-bound (or Timely, in some versions). The origin of SMART has been traced back to an article by George Doran in a 1981 issue of the AMA’s journal Management Review (PDF). In that piece, Doran comments on how unpleasant it is to set objectives and how this is one of the reasons organizations resist doing it. Yet in an age where accountability is held in high regard, the role of the goal is not only strategic but operationally critical to attracting and maintaining resources.

SMART goals are part of a larger process called performance management, a means of enhancing the collective focus and alignment of individuals within an organization. Dartmouth College has a clearly articulated explanation of how goals are framed within the context of performance management:

“Performance goals enable employees to plan and organize their work in accordance with achieving predetermined results or outcomes. By setting and completing effective performance goals, employees are better able to:

  • Develop job knowledge and skills that help them thrive in their work, take on additional responsibilities, or pursue their career aspirations;
  • Support or advance the organization’s vision, mission, values, principles, strategies, and goals;
  • Collaborate with their colleagues with greater transparency and mutual understanding;
  • Plan and implement successful projects and initiatives; and
  • Remain resilient when roadblocks arise and learn from these setbacks.”

Heading somewhere, destination unknown

Evaluation professionals and managers alike love SMART goals and performance measurement. What’s not to like about something that outlines specifically what is to be done, the date it’s required by, and in a manner that is achievable? It’s like checking off all the boxes in your management performance chart at once! Alas, the problems with this approach are many.

Specific is pretty safe, so we won’t touch that. It’s good to know what you’re trying to achieve.

But what about measurable? This is what evaluators love, but what does it mean in practice? Metrics and measures reflect a certain type of evaluative approach and require the kind of questions, data collection tools and data to work effectively. If the problem being addressed isn’t something that lends itself to quantification using measures or data that can easily define a part of an issue, then measurement becomes inconclusive at best, useless at worst.

What if you don’t know what is achievable? This might be because you’ve never tried something before or maybe the problem set has never existed before now.

How do you know what realistic is? This is tricky because, as George Bernard Shaw wrote in Man and Superman:

“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

This issue of reasonableness is an important one because innovation, adaptation, and discovery are not about reason, but about aspiration and hope. Had we relied on reasonableness alone, we might never have achieved much of what we’ve set out to accomplish in terms of being innovative, adaptive, or creative.

Reasonableness is also most dangerous for those seeking to make real change and pursue true innovation. Innovation is not often reasonable, nor are its asks ‘reasonable.’ Most social transformations did not come about because of reasonableness. Women’s right to vote, or the right of African Americans to be recognized and treated as human beings in the United States, are but two examples from the 20th century marked by a lack of ‘reasonableness’.

Lastly, what if you have no idea what the timeline for success is? If you’ve not tackled this before, are working on a dynamic problem, or have uncertain or unstable resources, it might be impossible to say how long something will take to solve.

Rethinking goals and their evaluation

One of the better discussions of goals, goal setting, and the hard truths associated with pursuing a goal comes from James Clear, who draws on some of the research on strategy and decision making to build his list of recommendations. Clear’s summary pulls together a variety of findings on how individuals construct goals and seek to achieve them, and the results suggest that the problem lies less with the strategy used to reach a goal and more with the goals themselves.

What is most relevant for organizations is the concept of ‘rudders and oars‘, which is about creating systems and processes for action and focusing less on the goal itself. In complex systems, our ability to exercise control is highly variable and constrained, and goals provide an illusory sense that we have control. So either we fail to achieve our goals, or we set goals we can achieve that may not be the most important things to aim for. We essentially rig our system to achieve something that might be achievable but utterly unimportant.

Drawing on this work, we are left to re-think goals and commit to the following:

  1. Commit to a process, not a goal
  2. Release the need for immediate results
  3. Build feedback loops
  4. Build better systems, not better goals

Realizing this requires an adaptive approach in which strategy and evaluation go hand-in-hand and are used systemically. It means pushing aside and rejecting more traditional performance measurement models for individuals and organizations, and developing more fine-tuned, custom evaluative approaches that link data to decisions and decisions to actions in an ongoing manner.
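What might that look like in its smallest possible form? Here is a minimal sketch of a loop that links data to decisions and decisions to actions on an ongoing basis; the signals, thresholds, and decision rules are placeholders for illustration, not a prescribed model:

```python
import random

def collect_signals() -> dict[str, float]:
    """Stand-in for ongoing evaluation data (surveys, usage records, observations)."""
    return {"engagement": random.random(), "staff_capacity": random.random()}

def decide(signals: dict[str, float]) -> str:
    """Link data to a decision; the thresholds here are arbitrary placeholders."""
    if signals["engagement"] < 0.4:
        return "adapt: revisit the program format with participants"
    if signals["staff_capacity"] < 0.3:
        return "pause: rebuild capacity before scaling up"
    return "continue: sustain the current approach"

# Ongoing, not episodic: sense, decide, act, then sense again.
for cycle in range(1, 4):
    signals = collect_signals()
    print(f"Cycle {cycle}: {signals} -> {decide(signals)}")
```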

It means thinking in systems, about systems, and designing ways to do both in an ongoing, not episodic, manner.

The irony is, by minimizing or rejecting the use of goals, you might better achieve a goal of a more impactful, innovative, responsive and creative organization with a real mission for positive change.

Image credit: Author
