Tag: decision making

psychology, systems thinking

SMART goals or better systems?


If you’re working toward some sort of collective goal — as an organization, a network, or even as an individual — you’ve most likely been asked to use SMART goal setting to frame your task. While SMART is a popular tool among management consultants and scholars, does it make sense when you’re looking to make inroads on complex, unique or highly volatile problems, or does the answer lie in the systems we create to advance goals in the first place?

Goal setting is nearly everywhere.

Globally, we had the UN-backed Millennium Development Goals and now have the Sustainable Development Goals. Look at the missions and visions of most corporations, non-profits, government departments and universities and you will see language framed in terms of goals, either explicitly or implicitly.

A goal, for these purposes, is:

goal |ɡōl| noun: the object of a person’s ambition or effort; an aim or desired result; the destination of a journey

Goal setting is the process of determining what it is that you seek to achieve, and it is usually combined with mapping some form of strategy to achieve the goal. Goals can be challenging on their own when a single person is determining what it is that they want, need or feel compelled to do, and even more so when aggregated to the level of an organization or a network.

How do you keep people focused on the same thing?

A look at the literature finds a visible presence of one approach: setting SMART goals. SMART is an acronym that stands for Specific, Measurable, Attainable, Realistic, and Time-bound (or Timely, in some versions). The origin of SMART has been traced back to a 1981 article in the AMA’s journal Management Review by George Doran (PDF). In that piece, Doran comments on how unpleasant it is to set objectives and how this is one of the reasons organizations resist doing it. Yet, in an age where accountability is held in high regard, the role of the goal is not only strategic, but operationally critical to attracting and maintaining resources.

SMART goals are part of a larger process called performance management, a means of enhancing collective focus and the alignment of individuals within an organization. Dartmouth College has a clearly articulated explanation of how goals are framed within the context of performance management:

“Performance goals enable employees to plan and organize their work in accordance with achieving predetermined results or outcomes. By setting and completing effective performance goals, employees are better able to:

  • Develop job knowledge and skills that help them thrive in their work, take on additional responsibilities, or pursue their career aspirations;
  • Support or advance the organization’s vision, mission, values, principles, strategies, and goals;
  • Collaborate with their colleagues with greater transparency and mutual understanding;
  • Plan and implement successful projects and initiatives; and
  • Remain resilient when roadblocks arise and learn from these setbacks.”

Heading somewhere, destination unknown

Evaluation professionals and managers alike love SMART goals and performance measurement. What’s not to like about something that specifically outlines what is to be done in detail, the date it’s required by, and in a manner that is achievable? It’s like checking off all the boxes in your management performance chart all at once! Alas, the problems with this approach are many.

Specific is pretty safe, so we won’t touch that. It’s good to know what you’re trying to achieve.

But what about measurable? This is what evaluators love, but what does it mean in practice? Metrics and measures reflect a certain type of evaluative approach and require the right kinds of questions, data collection tools and data to work effectively. If the problem being addressed doesn’t lend itself to quantification using measures, or to data that can easily define part of an issue, then measurement becomes inconclusive at best and useless at worst.

What if you don’t know what is achievable? This might be because you’ve never tried something before or maybe the problem set has never existed before now.

How do you know what is realistic? This is tricky because, as George Bernard Shaw wrote in Man and Superman:

“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

This issue of reasonableness is an important one because innovation, adaptation and discovery are not about reason, but about aspiration and hope. Had we relied on reasonableness alone, we might never have achieved much of what we’ve set out to accomplish in terms of being innovative, adaptive or creative.

Reasonableness is also the most dangerous criterion for those seeking to make real change and do true innovation. Innovation is not often reasonable, nor are its asks ‘reasonable.’ Most social transformations did not come about because they were reasonable. Women’s right to vote and the right of African Americans to be recognized and treated as human beings in the United States are but two examples from the 20th century that owe much to a lack of ‘reasonableness’.

Lastly, what if you have no idea what the timeline for success is? If you’ve not tackled this before, are working on a dynamic problem, or have uncertain or unstable resources it might be impossible to say how long something will take to solve.

Rethinking goals and their evaluation

One of the better discussions of goals, goal setting and the hard truths associated with pursuing a goal comes from James Clear, who draws on some of the research on strategy and decision making to build his list of recommendations. Clear’s summary pulls together a variety of findings that show how individuals construct goals and seek to achieve them, and the results suggest that the problem is less about the strategy used to reach a goal and more about the goals themselves.

What is most relevant for organizations is the concept of ‘rudders and oars‘, which is about creating systems and processes for action and focusing less on the goal itself. In complex systems, our ability to exercise control is highly variable and constrained, and goals provide an illusory sense that we have control. So either we fail to achieve our goals or we set goals that we can achieve, which may not be the most important things to aim for. We essentially rig our system to achieve something that might be attainable, but utterly unimportant.

Drawing on this work, we are left to re-think goals and commit to the following:

  1. Commit to a process, not a goal
  2. Release the need for immediate results
  3. Build feedback loops
  4. Build better systems, not better goals

To realize this requires an adaptive approach to strategy and evaluation where the two go hand-in-hand and are used systemically. It means pushing aside and rejecting more traditional performance measurement models for individuals and organizations and developing more fine-tuned, custom evaluative approaches that link data to decisions and decisions to actions in an ongoing manner.
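As a thought experiment, here is a minimal sketch of what a simple ‘data to decisions to actions’ loop could look like if expressed in code. It is my own illustration, not a formal method from the goal-setting or performance-measurement literature; the names, signal, and thresholds are invented for the example.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import List

@dataclass
class ProcessFeedbackLoop:
    """Watch a process signal over time and flag drift for review,
    rather than scoring success against a fixed, pre-set goal."""
    window: int = 5               # size of the 'recent' observation window
    drift_threshold: float = 0.2  # relative shift that should trigger a review
    observations: List[float] = field(default_factory=list)

    def record(self, value: float) -> None:
        self.observations.append(value)

    def needs_review(self) -> bool:
        # Compare the recent window against the longer-run baseline; a large
        # relative shift is a prompt to revisit the process, not a verdict.
        if len(self.observations) < 2 * self.window:
            return False
        recent = mean(self.observations[-self.window:])
        baseline = mean(self.observations[:-self.window])
        if baseline == 0:
            return False
        return abs(recent - baseline) / abs(baseline) > self.drift_threshold

# Hypothetical weekly signal (e.g., participant engagement counts).
loop = ProcessFeedbackLoop()
for value in [10, 11, 10, 12, 11, 10, 11, 15, 16, 17]:
    loop.record(value)
    if loop.needs_review():
        print("Signal has drifted; convene a review and adapt the process.")
```

The point of the sketch is not the arithmetic but the orientation: the loop commits to a process of observation and review, and the only ‘goal’ is to notice when the system is telling you something has changed.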

It means thinking in systems, about systems, and designing ways to do both in an ongoing, not episodic, manner.

The irony is that by minimizing or rejecting the use of goals, you might better achieve the goal of a more impactful, innovative, responsive and creative organization with a real mission for positive change.


Image credit: Author


knowledge translation, social systems, systems thinking

When More is Less: The Information Paradox


There is a point at which information ceases to increase knowledge and understanding and begins to undermine it, creating a paradox. When information on nearly anything is more abundant than ever, the choices we make about how to engage with it become more important than ever.

The Information Age has been described as the period in which industrial production was replaced by knowledge production as the key driver of social and economic benefit for society. Underpinning the thinking behind the information age (and the digital revolution that accompanies it) is the idea that having more information, more access to it and improved tools to use it to solve problems will improve life for everyone. Presented with a choice between access to more information or less, people will almost always choose more.

More information leads to more options, which equals more choice; more choice is about freedom, and freedom is seen as an inherent social good derived from the capitalist system, one that further leads to better choices, more freedom and greater happiness overall. At least, this is what we’ve been led to believe, and Barry Schwartz explains it quite eloquently in the opening of the talk embedded later in this post.

This is the theory of change that underpins information theory as it’s played out in modern capitalist societies. It’s also the source of many of our modern paradoxes and problems.

Systems of influence: The case of the ePatient

I’ve stopped going to health-related hackathons and design jams altogether, for the simple reason that one can almost always guarantee that one third or more of the solutions generated will be some form of app or information-focused tool. These well-meaning, creative tools are part of a consumer health movement built on the idea that putting information in the hands of patients is the key to empowerment and better health outcomes, except they rarely lead to this promised land.

Few are better at explaining — and indeed living — this reality than Dave deBronkart, or ‘e-Patient Dave’, who has been a tireless advocate for better information tools, access and engagement on health for patients. His TED Talk captures the spirit of the movement nicely.

With all due respect to the positive sentiments around what the ePatient movement is about, it is based on a series of assumptions about health systems, patients and health itself that don’t always hold. For certain patients, certain conditions, and certain contexts, having more information delivered in the right format is indeed empowering and may be life-saving, as deBronkart’s story illustrates. But what’s often missing from these stories of success are the many layers of assumptions and conditions that underpin information-driven healthcare.

A few years ago I interviewed a patient who spoke about his experience with health care decision-making and information technology. His response was that having more information didn’t make his life much better; rather, it made it even more complicated, because with more access to more information came more responsibility for that information.

“I don’t know what to do with it all and there’s an assumption that once I know (this health information) I am in a position to do something. I don’t have the foggiest idea what to do, that’s why I am going to see (the health professionals) because that is what their job is for. They are the ones who are supposed to know what is to be done. It’s their world, not mine.”

This case is less about deferral to authority than about resources (e.g., knowledge, skill, time, networks, etc.) and the expectations that come with those resources. When you are unwell, the last thing you want is to be told you have even more work to do.

The assumptions around personal health information and decision-making are that people have:

  1. Access to the data in the first place;
  2. Time;
  3. Information gathering tools;
  4. Knowledge synthesis tools;
  5. The skill and knowledge to sift, sort, synthesize and sense-make all the information obtained (because it may be in different formats, incomplete, or require conversions);
  6. Access to the people and other knowledge and skills required to appropriately sense-make the data;
  7. The resources to act on whatever conclusions are drawn from that process;
  8. A system that is able to respond to the actions that are needed and taken (and in a timely manner);
  9. The personal willpower, energy, and resolve to persist through the various challenges and the pushback from the system against the actions taken;
  10. Social support (because this is virtually impossible to do without any support at all); and
  11. The motivation and interest to do all of this in the first place.

Dave deBronkart and his peers are advocating for patient engagement on a broader level, and that includes creating spaces for patients to choose what kind of information they use or not. This also means having the choice to NOT have information. It’s not about pushing technology, but about having a choice about what to access, when and how. That’s noble and often helpful to those who are used to not having much say in what happens, but that, too, has problems of its own.

The paradox of choice

Barry Schwartz’s work (pdf) doing and synthesizing research on consumer decision-making gives the lie to the idea that more choice is always better. Choice options add value only to a certain point, after which they degrade value and may even subvert it altogether. The problem is that choice options are often ‘all or nothing’ and may be addictive if left unconstrained, as we’ll see below.
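One way to picture this inverted-U relationship is with a toy model. This is my own illustration, not Schwartz’s: assume the benefit of extra options grows with diminishing returns while the effort of comparing them grows roughly linearly, so the net value of choice rises, peaks, and then falls.

```python
import math

def net_value(n_options: int, benefit_scale: float = 3.0,
              cost_per_option: float = 0.5) -> float:
    """Toy model: logarithmic benefit of options minus linear comparison cost."""
    benefit = benefit_scale * math.log(1 + n_options)  # diminishing returns
    cost = cost_per_option * n_options                 # effort of evaluating options
    return benefit - cost

values = {n: round(net_value(n), 2) for n in range(1, 13)}
best = max(values, key=values.get)
print(values)  # value climbs, peaks, then declines
print(f"In this toy model, net value peaks at about {best} options.")
```

The exact numbers are arbitrary; the point is the shape. Past some number of options, each additional one subtracts more in comparison effort than it adds in benefit.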

Schwartz addresses the matter of decision-making in healthcare in the above video and points to the shifting of responsibility away from experts to everyone. Perhaps it is not surprising that we are seeing an incredible backlash against expert-driven knowledge and science in a way that we’ve not seen in over a hundred years. This is at a time when the public has access to more scientific data — the same data that scientists and other experts have — through open data and open access scientific publications to validate the claims by experts.

As discussed in a previous post, another feature of this wealth of information is that we are now living in what some call a post-truth political climate where almost anything goes. Speaking on the matter of science and climate change, former Alaska Governor and Vice Presidential candidate Sarah Palin suggested that, when compared to Bill Nye (the Science Guy and a rocket scientist — yes, a real rocket scientist), she is as much of a scientist as he is.

Why have science when you can have opinion?

Distracted driving on the information superhighway

Recent data from Canada shows year-over-year growth in smartphone use of 24%, to over two-thirds of the population, with 85% reporting some form of mobile phone ownership. One of the key features of modern smartphones is the ‘always on’ nature of their tools and alert systems, allowing you to carry maps, address books, a digital library, video and audio telephony, and the entire Internet in your pocket.

The distractions that come from the tools meant to deliver information are becoming crippling for some, to the point of distancing us from our humanity itself. The title of a beautiful, sad piece in New York Magazine by Andrew Sullivan put this into perspective: I used to be a human being. (We will come back to this in a future post.)

But even if one still feels human using information technology, it’s a different experience of humanity than it once was. Behaviour change writer and coach Tony Schwartz (I’m not sure if he’s related to Barry), writing in the New York Times, noted how his use of information technology was affecting his ability to, ironically, glean information from something as simple as a book.

One evening early this summer, I opened a book and found myself reading the same paragraph over and over, a half dozen times before concluding that it was hopeless to continue. I simply couldn’t marshal the necessary focus.

He goes on to explain what is being exchanged for the books he had aspired to read:

Instead of reading them, I was spending too many hours online, checking the traffic numbers for my company’s website, shopping for more colorful socks on Gilt and Rue La La, even though I had more than I needed, and even guiltily clicking through pictures with irresistible headlines such as “Awkward Child Stars Who Grew Up to Be Attractive.”

We can laugh at the last bit because most of us have been online and lured by something we thought was impossible or ridiculous and had to inquire about. Link bait is not new or particularly clever, but it works. It works for a variety of reasons, but largely because we need to inhabit the same space to work as well as to play. The problem comes when these worlds cross-over into one another.

For example, I was recently shopping for a safe (no, it’s not to store my non-existent millions, but rather to protect hard drives and enhance data security) and wanted to return to a story I’d read in the Guardian for a different blog post. As I returned to pull the URL, I found the page looking like this:

[Screenshot: the Guardian article page, surrounded by shopping ads]

All of a sudden I am confronted with shopping choices amidst a quest for a URL.

Information wealth: A Faustian bargain to knowledge poverty?

“We willingly accept the loss of concentration and focus, the division of our attention and the fragmentation of our thoughts, in return for the wealth of compelling or at least diverting information we receive.”

Tony Schwartz’s comments above and below point to what we know about how information works in our brains. We can try to resist, but the evolutionary reasons we pay attention to things and the biological limits on processing it all are likely to trump any efforts at resistance without substantial shifts in our practices.

Endless access to new information also easily overloads our working memory. When we reach cognitive overload, our ability to transfer learning to long-term memory significantly deteriorates. It’s as if our brain has become a full cup of water and anything more poured into it starts to spill out.

I wish I had answers as to what those shifts might be. Schwartz has proposed a digital vacation. As beneficial as it was for him, he was also willing to admit that it’s not a perfect strategy and that he still spends too much time online, too distracted. But it’s better.

Comedian Louis C.K. has taken to ‘quitting the internet’ altogether and, in a touching moment of reflection (delivered, as he often does, with wit), notes how it has improved his relationship with his daughters.

It’s these relational aspects of the new information technology, and how it impacts our world, that concern me the most and create the most troubling paradox: the tools designed to bring us together might just be making it harder to be together, pushing us apart from each other and from ourselves. This is what I will look at in the next piece in this series on paradox.

Image credit: Information by Heath Brandon used under Creative Commons License and by author


complexity, design thinking, psychology, social systems, systems thinking

Collective action, impact, intelligence & stupidity


Collective impact is based largely on the idea that we can do more together than apart, which holds true under the assumption that we can coordinate, organize and execute as a unit. That assumption doesn’t always hold, and the implications of getting it wrong require serious attention.

Anyone interested in social change knows that they can’t do it alone. Society, after all, is a collective endeavour — even if Margaret Thatcher suggested it didn’t exist.  Thatcherites aside, that is about where agreement ends. Social change is complex, fraught with disagreements, and challenging for even the most skilled organizer because of the multitude of perspectives and disparate spread of individuals, groups and organizations across the system.

Social media (and the Internet more widely) was seen as a means of bridging these gaps, bringing people together and enabling them to organize and make social change. Wael Ghonim, one of the inspirational forces behind Egypt’s Arab Spring movement, believed this to be true, saying:

If you want to liberate society all you need is the Internet

But as he acknowledges now, he was wrong.

Ghonim’s beliefs were not illogical as he discusses in the Ted talk above. He espoused a belief about collective action that echoes what leadership consultant and author Ken Blanchard proclaims:

None of us is as smart as all of us

Blanchard’s quote is meant to illustrate the power of teams and working together; something that we can easily take for granted when we seek to do collective action. Yet, what’s often not discussed are the challenges that our new tools present for true systems change.

Complex (social) systems thrive on diversity, the interaction between ideas, and the eventual coordination and synchrony of actions into energy. That requires some agreement, strategy and leadership before the change state becomes the new stable state (the changed state). Change comes from a coalescing of perspectives into some form of agreement that can be transformed into a design and then executed. It’s messy, unpredictable, imprecise, and can take time and energy, but that is how social change happens.

At least, that’s how it has happened. How it’s happening now is less clear, thanks to social media and its near-ubiquitous role in social movements worldwide.

Complicating complexity

The principles underpinning complex social systems haven’t changed, but the psychology of change and the communication that takes place within those systems have. When one reviews or listens to the stories told about social change movements from history, what we see over and over again is the power of stories.

Stories take time to tell, are open to questions, and can get more powerful in their telling and retelling. They engage us and, because they take time, grant us time to reflect on their meaning and significance. It’s a reason why we see plays, read novels, watch full-length films, and spend time with friends out for coffee…although all of this might be happening less and less.

Social media puts pressure on that attention, which is part of the change process. Social media’s short-burst communication styles — particularly with Tweets, Snapchat pictures, Instagram shots and Facebook posts — make content immensely portable and consumable, yet also highly problematic for longer narratives. The social media ‘stream’, something discussed here before, provides a format that tends to confirm our own beliefs and perspectives, not challenge them, by giving us what we want even if that’s not necessarily what we need for social change.

When we are challenged, the anonymity, lack of social cues, immediacy, and reach of social media can make it too easy for our baser natures to override our thoughts and lash out. Whether it’s Wael Ghonim and Egypt’s Arab Spring, Hossein Derakhshan and the Iranian citizen political movement, or the implosion of the Occupy movement, the voices of constructive dissent and change can be overwhelmed by infighting and internal dissent, never allowing the constructive coalescing of perspective needed to focus change.

Collectively, we may be more likely to reflect one of the ‘demotivation’ posters from Despair, Inc. than Ken Blanchard:

None of us is as dumb as all of us

Social media, the stream and the hive

Ethan Zuckerman of the MIT Media Lab has written extensively about the irony of the social insularity that comes with the freedom and power online social networks introduce, as was explored in a previous post.

The strength of a collective impact approach is that it aims to codify and consolidate agreement, including the means for evaluating impact. To this end, it’s a powerful force for change if the change that is sought is of sufficient value to society, and that is where things get muddy. I’ve personally seen many grand collaboratives fall into irrelevancy because the only agreements that participants can come up with are broad plaudits or truisms that have little practical meaning.

Words like “impact”, “excellence”, “innovation” and “systems change” are relatively meaningless if not channeled into a vision that’s attainable through specific actions and activities. The specifics — the devil in the details — come from discussion, debate, concession, negotiation and reflection, all traits that seem to be missing when issues are debated via social media.

What does this mean for collective impact?

If not this, then what?

This is not a critique of collective activity, because working together is very much like what Winston Churchill said about democracy: its failings still make it better than the alternatives. But it’s worth asking some serious questions and researching what collective impact means in practice and how we engage it with the social tools that are now a part of working together (particularly at a distance). These questions require research and systemic inquiry.

Social innovation laboratories, or social labs, are a good example of an idea that sounds great (and may very well be so), yet has remarkably little evidence behind it. Collective impact risks falling into the same trap if it is not rigorously and critically evaluated, and if the evaluation outcomes are not shared. This includes asking the designer’s and systems thinker’s question: are we solving the right problem in the first place? (Or are we addressing some broad, foggy ideal that has no utility in practice for those who seek to implement an initiative?)

Among the reasons brainstorming is problematic is that it fails to account for power and for the power of the first idea. Brainstorming favours those ideas that are put forward first with participants commonly reacting to those ideas, which immediately reduces the scope of vision. A far more effective method is having participants go off and generate ideas independently and then systematically introducing those to the group in a manner that emphasizes the idea, not the person who proposed it. Research suggests it needs to be well facilitated [PDF].

There may be an argument that we need better facilitation of ideas through social media or, perhaps as Wael Ghonim argues, a new approach to social media altogether. Regardless, we need to design the conversation spaces and actively engage in them lest we create a well-intentioned echo chamber that achieves collective nothing instead of collective impact.

Photo credit “All Power to the Collective” to Mike Benedetti used under Creative Commons licence via Flickr (original artist, Graffito)

complexity, education & learning, emergence, evaluation, systems thinking

Developmental Evaluation and Mindfulness

Mindfulness in Motion?

Developmental evaluation is focused on real-time decision making for programs operating in complex, changing conditions, which can tax the attentional capacity of program staff and evaluators. Organizational mindfulness is a means of paying attention to what matters and building the capacity across the organization to better filter signals from noise.

Mindfulness is a means of introducing quiet to noisy environments; the kind that are often the focus of developmental evaluations. Like the image above, mindfulness involves remaining calm and centered while everything else is growing, crumbling and (perhaps) disembodied from all that is around it.

Mindfulness in Organizations and Evaluation

Mindfulness is the disciplined practice of paying attention. Bishop and colleagues (2004 – PDF), working in the clinical context, developed a two-component definition of mindfulness that focuses on 1) self-regulation of attention that is maintained on the immediate experience to enable pattern recognition (enhanced metacognition) and 2) an orientation to experience that is committed to maintaining an attitude of curiosity and openness to the present moment.

Mindfulness does not exist independent of the past; rather, it takes account of present actions in light of the path to the current context. As simple as it may sound, mindfulness is anything but easy, especially in complex settings with many sources of information. What this means for developmental evaluation is that there needs to be a method of capturing data relevant to the present moment, a sensemaking capacity to understand how that data fits within the overall context and system of the program, and a strategy for provoking curiosity about the data to shape innovation. Without attention, sensemaking or interest in exploring the data to innovate, there is little likelihood of much change, which is what design (the next step in DE) is all about.

Organizational mindfulness is a quality of social innovation that situates the organization’s activities within the larger strategic frame that developmental evaluation supports. A mindful organization is grounded in a set of beliefs that guide its actions as lived through practice. Without some guiding, grounded models for action an organization can go anywhere, and the data collected from a developmental evaluation has little context, as nearly anything can develop from that data. Yet organizations don’t want just anything; they want the solutions that are best optimized for the current context.

Mindfulness for Innovation in Systems

Karl Weick has observed that high-reliability organizations are the way they are because of a mindful orientation. Weick and Karen Sutcliffe explored the concept of organizational mindfulness in greater detail and made the connection to systems thinking by emphasizing how a mindful orientation opens up the perceptual capabilities of an organization to see its systems differently. They describe a mindful orientation as one that redirects attention from the expected to the unexpected, and from what is comfortable, consistent, desired and agreed upon to the areas that challenge all of that.

Weick and Sutcliffe suggest that organizational mindfulness has five core dimensions:

  1. Reluctance to simplify
  2. Sensitivity to operations
  3. Commitment to resilience
  4. Deference to expertise
  5. Preoccupation with failure

Ray, Baker and Plowman (2011) looked at how these qualities were represented in U.S. business schools, finding some evidence for their existence. However, this mindful orientation is still something novel, and its overlap with innovation output is unverified. (This is also true for developmental evaluation itself, with few published studies illustrating how the fundamentals of developmental evaluation are applied.) Vogus and Sutcliffe (2012) took this further and encouraged more research and development in this area, in part because of the lack of detailed study of how it works in practice, which is partly due to an absence of organizational commitment to discovery and change over existing modes of thinking.

Among the principal reasons for a lack of evidence is that organizational mindfulness requires a substantive re-orientation towards developmental processes that include both evaluation and design. For all of the talk about learning organizations in industry, health, education and social services, we see relatively few concrete examples of it in action. A mistake that many evaluators and program planners make is assuming that the foundations for learning, attention and strategy are all in place before launching a developmental evaluation, which is very often not the case. Just as we do evaluability assessments to see if a program is ready for an evaluation, we may wish to consider organizational mindfulness assessments to explore how ready an organization is to engage in a true developmental evaluation.

Cultivating curiosity

What Weick and Sutcliffe’s five-factor model of organizational mindfulness misses is the second part of the definition of mindfulness introduced at the beginning of this post: the part about curiosity. And while Weick and Sutcliffe speak about the challenging of assumptions in organizational mindfulness, these challenges aren’t well reflected in the model.

Curiosity is a fundamental quality of mindfulness that is often overlooked (not just in organizational contexts). Arthur Zajonc, a physicist, educator and President of the Mind and Life Institute, writes and speaks about contemplative inquiry as a process of employing mindfulness for discovery about the world around us. Zajonc is a scientist, motivated partly by a love of and curiosity about both the inner and outer worlds we inhabit. His mindset — reflective of contemplative inquiry itself — is about attention that is open and focused simultaneously.

Openness to new information and experience is one part; the focus that comes from experience and the need to draw in information to clarify intention and action is the second. These are the same kinds of patterns of movement that we see in complex systems (see the stitch image below), captured in the sensing-divergent-convergent model of design that is evident in the CENSE Research + Design Innovation arrow model below that.

Stitch of Complexity

CENSE Corkscrew Innovation Discovery Arrow

By being better attuned to the systems (big and small) around us and curiously asking questions about them, we may find that the assumptions we hold are untrue or incomplete. By contemplating fully the moment-by-moment experience of our systems, patterns emerge that are often too weak to notice, but that may drive behaviour in a complex system. This emergence of weak signals is often what shifts systems.

Sensemaking, which we discussed in a previous post in this series, is a means of taking this information and using it to understand the system and the implications of these signals.

For organizations and evaluators, the next step is determining whether or not they are willing (and able) to do something with the findings from this discovery and learning from a developmental evaluation, which will be covered in the next post in this series, on design.


References and Further Reading: 

Bishop, S. R., Lau, M., Shapiro, S., & Carlson, L. (2004). Mindfulness: A proposed operational definition. Clinical Psychology: Science and Practice, 11(3), 230–241.

Ray, J. L., Baker, L. T., & Plowman, D. A. (2011). Organizational mindfulness in business schools. Academy of Management Learning & Education, 10(2), 188–203.

Vogus, T. J., & Sutcliffe, K. M. (2012). Organizational Mindfulness and Mindful Organizing : A Reconciliation and Path Forward. Academy of Management Learning & Education, 11(4), 722–735.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (1999). Organizing for high reliability: Processes of collective mindfulness. In R. S. Sutton & B. M. Staw (Eds.), Research in Organizational Behavior (Vol. 21, pp. 81–123). Stanford, CA: JAI Press.

Weick, K.E. & Sutcliffe, K.M. (2007). Managing the unexpected. San Francisco, CA: Jossey-Bass.

Zajonc, A. (2009). Meditation as contemplative inquiry: When knowing becomes love. Barrington, MA: Lindisfarne Books.

complexity, education & learning, evaluation, systems science

Evaluation, Evidence and Moving Beyond the Tyranny of ‘I Think’


The concrete evidence for ‘I think’

Good evidence provides a foundation for decision-making in programs that is dispassionate, comparable, and open to debate and review, yet it is often ignored in favour of opinion. Practice-based evidence lets that expert opinion in; however, the way to get it into the discussion is through the very means we use to generate traditional evidence.

Picture a meeting of educators or health practitioners or social workers discussing a case or an issue about a program. Envision those people presenting evidence for a particular decision based on what they know and can find. If they are operating in a complex, dynamic field, the chances are that the evidence will be incomplete, but there might be some. Once these studies and cases are presented, invariably the switch comes when someone says “I think…” and offers their opinion.

Incomplete evidence is of limited use on its own, and complex systems require a very different use of evidence. As Ray Pawson illustrates in his most recent book, science in the realm of complexity requires a type of sensemaking and deliberation that differs greatly from extrapolating findings from simple or even complicated program data.

Practice-based evidence: The education case

Larry Green, a thought leader in health promotion, has been advocating to the health services community that if we want more evidence-based practice we need more practice-based evidence (video). Green argues that systems science has much to offer by drawing connections between the program as a system and the systems that the program operates in. He further argues that we are not truly creating evidence-based programs without adding practice-based knowledge to the equation.

Yet, practice-based evidence can quickly devolve into “I think” statements: opinion based on unreflective bias, personal prejudice, convenience, and a lack of information. To illustrate, I need only consider curriculum decision-making processes at universities (and likely many primary and secondary schools too). Having been a part of training programs at different institutions — in-house degree programs and multi-centre networks between universities — I can say I’ve rarely seen evidence come into play in decisions about what to teach, how to teach, what is learned, and what the purpose of the programs is, yet I always see “I think”.

Part of the reason for this is that there is little useful data for designing programs. In post-secondary education we use remarkably crude metrics to assess student learning and progress. Most often, we use imperfect time-series data like assignments that are aggregated together to form a grade. In undergraduate education, we most likely use multiple-choice exams because they are easier to distribute and grade within the resource constraints. For graduate students, we still use exams, though perhaps papers as well. But rarely do we have any process data on which to base decisions.

Yet, we recruit students based on quotas and rarely look to where the intellectual and career ‘markets’ are going when setting up our programs. Instead, we use opinion: “I think we should be teaching X” or “I think we should teach X this way”, with little evidence for why these decisions are made. It is remarkable how learning — whether formal or informal — in organizations is left without a clear sense of what the point is. Asking why we teach something, why people need to learn something, and what they are expected to do with that knowledge is far more than a luxury. To get a sense of this absurdity, NYC principal Scott Conti’s talk at TEDx Dumbo is worth a watch. He points to the mismatch between what we seek to teach and what we expect from students in their lives.

Going small to go big

There is, as Green points out, a need for a systems approach to understanding the problem of taking these anecdotes and opinions and making them useful. Many of the issues with education have to do with resources and policy directions made at levels well beyond the classroom, which is why a systems approach to evaluation is important. Evaluation applied across the system, using a systems approach that takes into account the structures and complexity in that system, can be an enormous asset. But how does this work for teachers or anyone who operates at the front line of their profession?

Evaluation can provide the raw materials for discussion in a program. By systematically collecting data on the way we form decisions, design our programs, and make changes, we create a layer of transparency and an opportunity to integrate practice-based evidence more fully. The term “system” in evaluation or programming often invokes a sense of despair due to a perception of size. Yet systems happen at multiple scales, and a classroom and the teaching within it are systems.

One of the best ways to cultivate practice-based evidence is to design evaluations that take into account the way people learn and make decisions. It starts with a design-oriented approach to paying attention to how people operate in the classroom — both as teachers and learners. From there, we can match those activities to the goals of the classroom — the larger goals, the ones that ask “what’s the point of people being here” — and to the metrics for assessment within the culture of the school. Next, consider ways to collect data on a smaller scale through things like reflective practice journals, recorded videos of teaching, observational notes, and markers of significance such as moments of insight, heightened emotion, or decisions.
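To make that kind of small-scale capture concrete, here is a minimal sketch of what a practice record could look like as a data structure. The field names and example are hypothetical, my own illustration rather than an established instrument; the point is that once these moments are recorded consistently, “I think” claims can be traced back to documented practice.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class PracticeRecord:
    """One small, dated observation from everyday practice."""
    when: date
    source: str   # e.g., "reflective journal", "teaching video", "observation"
    note: str     # what happened, in the practitioner's own words
    markers: List[str] = field(default_factory=list)  # e.g., "insight", "emotion", "decision"

log: List[PracticeRecord] = [
    PracticeRecord(
        when=date(2013, 10, 1),
        source="reflective journal",
        note="Switched from lecture to paired problem-solving mid-class; engagement rose.",
        markers=["decision", "insight"],
    ),
]

# Pull every documented decision to bring to a program meeting,
# so discussion starts from records rather than recollection.
decisions = [r for r in log if "decision" in r.markers]
```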

By capturing the small decisions it is possible to generate practice-based evidence that goes beyond “I think”. It also allows others in. Rather than ideas being formed exclusively in one’s own head, we can illustrate where people’s knowledge comes from and permit greater learning from those around that person. Too often, the talents and tools of great leaders and teachers are accessible only in formal settings — like lectures or discussions — and not evident to others in the fire of everyday practice.

What are you doing to support evaluation at a small scale and allowing others to access your practice-based knowledge to create that practice-based evidence?

References:

Green, L. W. (2006). Public health asks of systems science: to advance our evidence-based practice, can you help us get more practice-based evidence? American Journal of Public Health, 96(3), 406–9.

Green, L. W. (2008). Making research relevant: If it is an evidence-based practice, where’s the practice-based evidence? Family Practice, 25(Suppl. 1), i20–i24.

Pawson, R. (2013). The science of evaluation: A realist manifesto. London, UK: Sage Publications.

art & design, complexity, design thinking, evaluation, innovation

Defining the New Designer

Who is the real designer?

It’s been suggested that anyone who shapes the world intentionally is a designer; however, those who train and practice as professional designers question whether such a definition obscures the skill, craft, and ethics that come from formal disciplinary affiliation. Further complicating things is the notion that design thinking can be taught and that the practice of design can be applied far beyond its traditional bounds. Who is right, and what does it mean to define the new designer?

Everyone designs who devises courses of action aimed at changing existing situations into preferred ones. – Herbert Simon, Scientist and Nobel Laureate

By Herb Simon’s definition, anyone who is intentionally directing their energy towards shaping their world is a designer. Renowned design scholar Don Norman (no relation) has said that we are all designers [1]. Defined this way, the term design becomes more accessible and commonplace, removing it from the sense of elitism it has often been associated with. That sounds attractive to most, but it has raised significant concerns from those who identify as professional designers, and it opens up the question of what defines the new designer as we move into an age of designing systems, not just products.

Designer qualities

Design is what links creativity and innovation. It shapes ideas to become practical and attractive propositions for users or customers. Design may be described as creativity deployed to a specific end – Sir George Cox

Sir George Cox, the former head of the UK Design Council, wrote the above statement in the seminal 2005 Cox Review of Creativity in Business in the UK; he sees design as the strategic deployment of creative energy to accomplish something. This can be done mindfully, skilfully and ethically with a sense of style and fit, or it can be done haphazardly, unethically, incompetently and foolishly. It would seem that designers must put their practice of design thinking into use, which includes holding multiple contradictory ideas in one’s head at the same time. Contradictions of this sort are a key quality of complexity, and abductive reasoning, a means of thinking through such contradictions, is considered a central feature of design thinking.

Indeed, this ‘thinking’ part of design is considered a seminal feature of what makes a designer what they are. Writing on Cox’s definition of design, Mat Hunter, the Design Council’s Chief Design Officer, argues that a designer embodies a particular way of thinking about the subject matter and weaves this through active practice:

Perhaps the most obvious attribute of design is that it makes ideas tangible, it takes abstract thoughts and inspirations and makes something concrete. In fact, it’s often said that designers don’t just think and then translate those thoughts into tangible form, they actually think through making things.

Hunter might be getting closer to distinguishing what makes a designer so. His perspective assumes that people are reflective, abductive and employ design thinking actively — an assumption that I’ve often found to be incorrect. Even with professionals, you can instill design thinking but you can’t make them apply it (why this is the case is something for another day).

This invites the question: how do we know people are doing this kind of thinking when they design? The answer isn’t trivial if certain thinking is what defines a designer. And if they are applying design thinking through making, does that qualify them as a designer, whether or not they have accreditation or a formal degree in design?

Designers recognize that their training and professional development confers specific advantages and requires skill, discipline, craft and a code of ethics. Discipline is a cultural code of identity. For the public or consumers of designed products it can also provide some sense of quality assurance to have credentialed designers working with them.

Yet, those who practice what Herb Simon speaks of are also designers of something. They are shaping their world, doing so with intent, and many are doing it with a level of skill and attention parallel to that of professional designers. So what does it mean to be a designer, and how do we define this in light of the new spaces where design is needed?

Designing a disciplinary identity

Writing in Design Issues, Bremner & Rodgers (2013) [2] argue that design’s disciplinary heritage has always been complicated and that its current situation is being affected by three crisis domains: 1) professional, 2) economic, and 3) technological. The first is partly a product of the latter two, as the shaping and manufacture of objects becomes transformed. Materials, production methods, knowledge, social context and the means of transporting the objects of design — whether physically or digitally — have transformed the market for products, services and ideas in ways that have necessarily shaped the process (and profession) of design itself. They conclude that design is not disciplinary, interdisciplinary or even transdisciplinary, but largely alterdisciplinary — unbound by time and space.

Legendary German designer Dieter Rams is among the most strident critics of the everyone-is-a-designer label and believes this wide use of the term designer takes away from the seriousness of what design is all about. Certainly, if one believes John Thackara’s assertion that 80 per cent of the impact of any product is determined at the design stage, the case for serious design is clear. Our ecological wellbeing, social services, healthcare, and industries are all designed and have enormous impact on our collective lives, so it makes sense that we approach designing seriously. But is reserving the term design(er) for an elite group the answer?

Some have argued that elitism in design is not a bad idea and that this democratization of design has led to poorly crafted, unusable products. Andrew Heaton, writing about User Experience (UX) design, suggests that this elitist view is less about moral superiority and more about better products and greater skill:

I prefer this definition: elitism is the belief that some individuals who form an elite — a select group with a certain intrinsic quality, specialised training, experience and other distinctive attributes — are those whose views on a matter are to be taken the most seriously or carry the most weight. By that definition, Elitist UX is simply an insightful and skilled designer creating an experience for an elevated class of user.

Designing through and away from discipline

Designers recognize that their training and professional development confer specific advantages and require skill, discipline, craft and a code of ethics, but there is little concrete evidence that they produce better-designed outcomes. Design thinking has enabled journalists like Helen Walters, healthcare providers like those at the Mayo Clinic, and business leaders to take design and apply it to their fields and beyond. Indeed, it was business-journalist-cum-design-professor Bruce Nussbaum who is widely credited with contributing to the cross-disciplinary appeal of design thinking (and its critique) through his work at BusinessWeek.

Design thinking is now something that has traversed whatever discipline it was originally rooted in — which seems to be science, design, architecture, and marketing all at the same time. Perhaps unlocking it from discipline and the practices (and trappings) of such structure is a positive step.

Discipline is a cultural code of identity and for the public it can be a measure of quality. Quality is a matter of perspective and in complex situations we may not even know what quality means until products are developmentally evaluated after being used. For example, what should a 3-D printed t-shirt feel like? I don’t know whether it should be like silk, cotton, polyester, or nylon mesh or something else entirely because I have never worn one and if I was to compare such a shirt to my current wardrobe I might be using the wrong metric. We will soon be testing this theory with 3-D printed footwear already being developed.

Evaluating good design

The problem of metrics is the domain of evaluation. What is the appropriate measurement for good design? Much has been written on the concept of good design, but part of the issue is that what constitutes good design for a bike or a chair might be quite different for a poverty reduction policy, or for a program to support mothers and their children escaping family violence. The idea of delight (a commonly used goal or marker of good design) as an outcome might be problematic in the latter context. Should mothers be delighted by such a program to support them in a time of crisis? That’s a worthy goal, but if those involved feel safe, secure, cared for, and supported in dealing with their challenges, that is still a worthwhile design. Focusing on delight as a criterion for good design in this case is using the wrong metric. And what about the designers who bring about such programs?

Or should such a program be based on the designer’s ability to empathize with users and create adaptive, responsive programs that build on evidence and need simultaneously, without delight being the sole goal? Just as healthy food is not always as delightful for children as ice cream or candy, there is still a responsibility to ensure that design outcomes are appropriate. The new designer needs to know when to delight, when and how to incorporate evidence, and how to bring all of the needs and constraints together to generate appropriate value.

Perhaps that ability is the criterion by which we should judge the new designer, encouraging our training programs, our clients (and their asks and expectations), our funders and our professional associations to consider what good design means in this age of complexity and then figure out who fits that criterion. Rather than building from discipline, consider creating quality from the outcomes and processes used in design itself.

[1] Norman, D. (2004) Emotional design: why we love (or hate) everyday things. New York, NY: Basic Books.

[2] Bremner, C. & Rodgers, P. (2013). Design without discipline. Design Issues, 29 (3), 4-13.

complexity, social systems

Common Sense, Complexity and Leadership

Bye, Bye Common Sense

Great leaders are often ascribed traits that include ample common sense. But what passes for common sense is often a grab bag of miscellaneous, inconsistent ideas that are context dependent and less useful in the complex environments where leadership is called for most. 

common sense |ˌkɑmən ˈsɛns|

noun

good sense and sound judgment in practical matters: use your common sense | [as modifier]: a common-sense approach.

Today Research in Motion announced that its founder Mike Lazaridis and his co-CEO Jim Balsillie would be relinquishing their roles with the company. In their place, a ‘pragmatic, operational-type guy’ was installed. Presumably, Thorsten Heins has the common sense to lead RIM after the founders lost theirs. Yet, the pragmatic common sense that RIM is looking for might not be what it needs given the complexity of the environment it is leading in.

Common sense is a false lure in complex systems. In his recent book, Everything is Obvious: *Once You Know the Answer, social network researcher and Yahoo! Research scientist Duncan Watts eloquently critiques the concept of common sense, illustrating dozens of times over how “common sense” doesn’t fare so well in decisions that go beyond the routine and into the complex. Indeed, the very definition of the term implies that the problems common sense works towards addressing are relatively simple and pragmatic.

Certainly, navigating daily social conventions might lend itself well to what we call common sense. To describe it, Watts refers to sociologist Harry Collins’ term ‘collective tacit knowledge‘, which is encoded in the social norms, customs and practices of a particular world. However, what becomes common is a byproduct of many small decisions, dynamic and flexible changes to perspective, an accumulation of knowledge gained from small experiments over time, and the application of all of this knowledge to particular, context-dependent situations. This constellation of factors and its interdependent, contextual overlap is why artificial intelligence systems have such a difficult time mimicking human thought and action. It is this attention to context that is most worth noting, for it is context that keeps common sense from being anything but common:

Common sense…is not so much a worldview as a grab bag of logically inconsistent, often contradictory beliefs, each of which seems right at the time but carries no guarantee of being right any other time.

Watts goes on to argue:

Commonsense reasoning, therefore, does not suffer from a single overriding limitation but rather from a combination of limitations, all of which reinforce and even disguise one another. The net result is that common sense is wonderful at making sense of the world, but not necessarily at understanding it.

Thus, we often concoct a narrative about the way something happens that sounds plausible and rational, and yet is completely wrong. Throughout the book, Watts shows how often mistakes are made based on this common sense approach to solving problems.

When it comes to RIM, some have pointed to the late Steve Jobs’ assertion that they would have difficulty catching up to firms like Apple, given that the consumer market is not their strength; the enterprise market is. Yet Steve Jobs didn’t let the fact that Apple was a computer company stop him from making music players (the iPod) or mobile phones (the iPhone), or from becoming a book, music and movie vendor (iTunes). A read of Steve Jobs’ biography by Walter Isaacson reveals a man who was able to lead and be successful through what appeared to be common sense, yet was decidedly uncommon among media and technology leaders. That is why Apple is where it is and why so many other technology companies lag behind it or have simply disappeared.

The reason is that common sense in leadership looks simple only in hindsight, not in foresight or even in the present moment. This is one of the big points that Watts makes. He uses the example of Sony’s MiniDisc system, which, when introduced, had all of the hallmark features of the innovations that Apple introduced (novel, high quality, portable, smaller, visible advantages over the alternatives), yet was a spectacular failure. Canadian management consultant Michael Raynor has called this the strategy paradox. When qualities such as vision, bold leadership, and focused execution — all the commonsensical aspects of great leaders — are applied to organizations, they can lead to great success (Steve Jobs and Apple) or resounding failure (RIM?).

Strategic flexibility, making small adjustments consistently, and imagining scenarios for the future in an ongoing manner are some of the ways to limit the damage from common sense (or to use its advantages more fully). This requires feedback mechanisms and close monitoring of program activities, developmental evaluation, and a willingness to tweak programs and designs on the go (what I call developmental design). It’s no surprise that this incremental approach to development is consistent with the way change is best produced in a complex adaptive system.

By recognizing that common sense is less than common and certainly not consistent, program designers, developers, evaluators and other professionals will be better positioned to provide true leadership that addresses challenges and complexity rather than adding to the complexity and creating more problems.

Photo: Goodbye to Common Sense Space by Amulet Dream from Deviant Art