Category: emergence

behaviour change, complexity, emergence, evaluation, social systems

International Women’s Day, Every Day


Today is International Women’s Day, when the world’s attention turns to half of its population, which suggests that this day is perhaps best honoured the other 364 days as well. Time to consider how this might look.

People worldwide will be celebrating and honouring women as part of International Women’s Day (#IWD2017) and it’s hard to conceive of any issue more worthy of such recognition. The theme for this year’s day is Be Bold for Change (#BeBoldForChange), with a variety of resources and promotional campaigns set up to raise awareness of women’s issues worldwide; support women and men in advocating for positive, healthy change around sex- and gender-based discrimination; and create a climate of positive human development for everyone, worldwide.

Depending on your perspective, this celebration of women worldwide on International Women’s Day is either something to be cherished or viewed with discouraged puzzlement — and both reflect the enormity of the issues that women face.

Women make up more than half of the globe’s population, are most often charged with raising children, and represent the highest percentage of caregivers in most societies, yet they are systematically excluded from (at worst) or badly included in (at best) many of the levers of power that would let them sit on par with men on the issues that matter to them. A simple and depressing Google image search of Fortune 500 CEOs returns a wall of white male faces that would almost suggest a woman’s presence there is a mistake. If this is the starting place, the end is surely worse.

The puzzlement comes not from celebrating women, rather from the fact that we still need a place to do it because it’s not part of the fabric of everyday life for far too many, despite it being 2017 (two years past 2015 as Canadian Prime Minister Justin Trudeau remarked on his gender-balanced cabinet appointments). Just as Black Lives Matter is a necessary statement (and movement) because, for many, the lives of black people are treated as if they don’t matter, we need to celebrate women because they are too often treated as if they are the furthest thing from celebration-worthy.

It’s not elsewhere

A look through many of the campaigns and promotional materials aimed at advancing gender equity will find a very visible emphasis on images of, and issues in, the developing world. While it is most certainly the case that women in these regions face considerable gender-based disadvantages, the emphasis on ‘other’ parts of the world can take our attention away from what is happening closer to home. In Canada, the wage gap between men and women has actually increased in recent years, with women earning 72% of what men do.

Not only do women earn less, they also gain ground far more slowly than men, and more slowly still if they belong to a racial minority. According to the Institute for Women’s Policy Research in the United States, at the current rate of change a white woman will need to wait until 2059 to achieve pay equity. For Black women this stretches to an absurd 2124, and for Hispanic women to an incomprehensible 2248. Yes: at the current rate of change, some women in the United States will need to wait 230+ years to see their pay equal that of a man.
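The arithmetic behind projections like these is a straightforward linear extrapolation. A minimal sketch of the calculation, using illustrative numbers rather than the IWPR’s actual series:

```python
import math

def parity_year(current_year, earnings_ratio, annual_gain):
    """Project the year an earnings ratio reaches 1.0 (parity),
    assuming the current rate of change simply continues."""
    if annual_gain <= 0:
        raise ValueError("parity is never reached at this rate")
    years_remaining = (1.0 - earnings_ratio) / annual_gain
    return current_year + math.ceil(years_remaining)

# Illustrative only: a 0.80 earnings ratio closing at half a
# percentage point per year implies 40 more years of waiting.
print(parity_year(2017, 0.80, 0.005))  # → 2057
```

The sobering part is the sensitivity: halve the annual gain and the wait doubles, which is how the projected dates for Black and Hispanic women stretch into the next centuries.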

And this is just on matters of pay. The issues women face run far deeper and go well beyond what one short article such as this can cover.

The point is that women are systematically disadvantaged everywhere, and the solution space needs to take a systems perspective if there is any hope of making meaningful progress and improving on a rate of change that otherwise asks us to wait ten generations for something different. One of the best ways to ensure that women succeed is to engage the other half of the system: men.

Areas of action: the role of men

It has been heartening to see an outpouring of support for #IWD2017 from men, something notably different from past years. Too often, celebrations of diversity, resistance, or change involve the group most disadvantaged, but not enough of those whose power is challenged, yet whose involvement is necessary for systemic change to happen. Men need to play a big role in the change. This is not a ‘what about me’ kind of statement from men, but a realistic assertion that systems change cannot take place without engagement of the different parts of the system, and that means involving both sexes in the change process.

The matter of violence against women is one of the areas where men’s involvement is critical and starting to attract greater engagement from men. One of the attractors used to draw men to this issue is sport. Breakaway is a soccer (football) themed online game designed for boys (and girls) aged 8-15 to educate and illustrate issues of gender-based violence. Using sport and the things that boys are interested in (like video games and playing with friends) is a clever means of upending the usual approach of simply telling people about the harms associated with gender-based violence and hoping something changes.

In Canada, many Canadian Football League teams have programs aimed at their fans to raise awareness of and prevent violence against women. This provides a more constructive counter to the horrible displays of gender-based violence from football players in the National Football League in the United States in recent years. Games like Breakaway and the integration of sport leaders into the conversation start to change the dialogue around who commits violence and what the norms around violence are, and provide positive examples for young men to follow in living a violence-free life.

Changing the narrative: A systems perspective

The matter of women’s rights, freedoms, and opportunities is not simply solved, owing to the tangle of social, economic, geographic, and historical factors that have shaped the institutions and norms surrounding sex- and gender-based discrimination. That knotting up of issues is the hallmark of a complex system and thus, if we are to make substantive progress for women (and humanity at large) on these issues, the matter is better served by taking a systems approach. A great place to start is recognizing the complexity of the matter.

Attractors are forces that draw in (or repel) energy — attention, information, enthusiasm, focus, commitment, and more — and finding those that will attract both men and women (whether together or apart) to women’s issues is key. The use of sport and games as a means of attracting men is one example. Many men and boys engage in sport for creativity, recreation, social connection, and skill development and channeling those positive qualities toward inclusion of, respect for, and support of women and their rights is one way to scaffold from one issue to the next.

Engagement of thought leaders, opinion leaders and micro-influencers can also be a tool for shifting the norms, content and tenor of the discussion. These are the individuals who have their finger on the pulse of trends, reflect social aspirations, or simply provide a direct means of cutting through the clutter of the mediasphere to deliver a message. This is not just about celebrities, but about those who are listened to. This amplifies a positive attractor within the system and draws more men (and women) into constructive conversations and actions.

An attractor-based approach to systems change also requires engagement of diversity within that system. This is another reason to consider the micro-influencer: someone who is a big deal in a small(ish) social space. These might be people on Instagram, members of a community of practice, or local champions with a committed, devoted following or engaged audience. These influencers speak to niche populations, issues, contexts and media forms that resonate deeply with small segments of the population. That deeper engagement is what will propel people to make substantive changes in their behaviour, speak out, and push change further forward, in a way that a wide, thin engagement strategy cannot. This last point speaks to the role of evaluation in all of this.

Evaluating the revolution

Social change is only change because something happened that is different from what came before. The only way to tell whether the present is different from the past is to evaluate (compare) and, potentially, to attribute what happened to something that was done. But evaluation is more than social accounting; it is also about gathering and using information to make things better and more impactful as they unfold. We don’t want to wait until 2059 to see if whatever efforts were put in place today will lead some women to pay equity. We want (and hopefully will) see those efforts amplified so that the target date is brought closer to us.

The way to do this is to develop an evaluation strategy that clearly describes what is happening and what efforts are being developed and employed to support change, articulates a theory of change, and then creates a series of strategic data collection measures (not all of which need be quantitative) that can be deployed at the system level and at various smaller levels within the system to monitor and evaluate the kind of change we are producing. This allows us to ensure that whatever positive attractors we have are amplified and reinforced, while negative ones are dampened or disposed of. This can only be done if we have feedback mechanisms in place, and that is what evaluation delivers.

As we recognize the strengths and wonders that women bring to this world every day and the struggles they face, let’s consider how we can build on this energy and create attractors that can last beyond a day, a month or season to being something that is part of the fabric of life every day. That would be truly something to celebrate.


emergence, psychology, social media, social systems

Mental health: talking and listening


Innovation might be doing things differently to produce value, but there’s little value if we as a society are not able to embrace change because we’re hiding from mental illness, whether as individuals, organizations or communities. Without wellbeing, and the space to acknowledge when we don’t have it, any new product, idea or opportunity will be wasted, which is why mental health promotion is something that we all need to care about.

Today is Bell Let’s Talk Day in Canada. It’s (probably) the most visible national day of mental health promotion in the world. The reason has much to do with the sponsor, Bell Canada, which happens to be one of the country’s major providers of wireless telecommunications, Internet, and television services, in addition to owning many entertainment outlets like cable channels, sports teams and radio stations. But this is not about Bell**, but about the issue behind Let’s Talk Day: ending mental health stigma.

Interestingly perhaps, the line from the film and novel Fight Club that is most remembered is also the one that is quite fitting for the topic of mental health (particularly given the story):

First rule of Fight Club: Don’t talk about Fight Club.

Mental health stigma is a vexing social problem because it’s about an issue that is so incredibly common and yet receives so little attention in the public discourse.

The Mental Health Commission of Canada’s aggregation of the data provides a useful jumping-off point:

  • In any given year, one in five people in Canada experiences a mental health problem or illness bringing a cost to the economy of more than $50 billion;
  • Up to 70 per cent of adults with a mental health problem report having had one in childhood;
  • Mental health was the reason for nearly half of all disability-related claims by the Canadian public service in 2010, double what it was in 1990;
  • Mental health problems and illnesses account for over $6 billion in lost productivity costs due to absenteeism and presenteeism;
  • Among our First Nations, youth are 5-6 times more likely to die at their own hands than non-Aboriginal youth; for Inuit, the suicide rate is 11 times the national average;
  • Improving a child’s mental health from moderate to high has been estimated to save society more than $140,000 over their lifetime;

And this is just Canada. Consider what it might look like where you live.

It seems preposterous that an issue with numbers this high and a prevalence this great is not commonly spoken of, yet that is the case. Mental illness is still the great ‘secret’ in society, even though our mental wellbeing is critical to our success on this planet.

Like with many vexing problems, the place for change to start is by listening.

Mind over matter: Dr. Paul Antrobus

Last year one of the most incredible human beings I’ve ever met — or will ever meet — passed away. Dr. Paul Antrobus was the man who introduced me to psychology and was the wisest person I’ve ever known. Paul was not only among the greatest psychologists who’ve ever lived (I say with no exaggeration), by means of his depth of knowledge of the field and his ability to practice it across cultures, but also someone who could embody what mental health is all about.

In 2005 Paul fell off the roof of his cottage and was left a paraplegic, requiring ventilation to breathe. For nearly anyone this would have been devastating to their very being, yet Paul managed to retain his humour, compassion and intellect, as well as his sharp wit and engagement, and put them on display soon after his accident. He demonstrated to me the power of the mind and consciousness over the body, both in the classroom and, after the accident, from his wheelchair.

Paul lived a good life, by design. He surrounded himself with family and friends, built a career where he was challenged and stimulated and provided enough basics for life, and gave back to his community and to hundreds of students whom he mentored and taught. Much of this was threatened with the accident, yet he continued on, illustrating how much potential we have for healing. He learned by listening to his life what he needed and when he needed it, tried things out, evaluated, tinkered and persisted. In essence: he was a designer.

Paul would also be the first to say that healing is a product of many things — biology (like genes), personality, family upbringing, access to resources (human, financial, spiritual, intellectual), and community systems of support. He made the most of all of these and, partly because of his access to resources as part of being a professor of psychology, was able to cultivate positive and strong mental health while helping others do the same. Although he might not have used the term ‘designer’, that’s exactly what he was. One of the reasons was that he discovered how to listen to his life and that of others.

And because he was able to listen to others he recognized that nearly everyone had the potential for great health, but that such potential was always couched within systems that worked for or against people. Of all of the things that contribute to healing, a healing community had the potential to allow people to overcome nearly any problem associated with the other factors. Yet, it is the community — and their attitudes toward health (and mental health in particular) — that requires the greatest amount of change.

That’s why talking and listening is so important. It creates community.

Listening to your life

Paul wrote a book and taught a course on listening to one’s life. Part of that approach is being able to share what your life is teaching you and to listen to what your body and the world are telling you. For something like Bell Let’s Talk Day, a space is created to share — Tweet, text, post — stories of suffering, hope, recovery, support, love and questioning about mental health without fear. It’s a single day and part of a corporate-led campaign, but its size and scope make sharing far safer and more ‘normal’ on this day than on almost any other day I know of.

A couple of years ago a colleague disclosed to the world that she had struggled with depression via Twitter on Bell Let’s Talk Day. She was so taken by the chance to share something that, on any other day, would seem to be ‘oversharing’ or inappropriate or worse, that she opened up and, thankfully, many others listened.

Let’s Talk Day is about designing the conversation around mental health by creating the space for it to take place and allowing ideas and issues to emerge. These are the kinds of emergent conditions that systems change designers seek to create. If you want to see it in action, follow #BellLetsTalk online, or find your own space wherever you are to talk, to listen, and to design for one of the greatest social challenges of our time.

This post is not about innovation, but rather the very foundation on which innovation and discovery rest: our mental health and wellbeing. For without those, innovation is nothing.

Today, listen to your life and that of others and consider what design considerations are necessary to promote positive mental health and the creative conditions to excel and innovate.

As for some tips in speaking out and listening in, consider these five things to promote mental health where you are today:

  • Language matters – pay attention to the words you use about mental illness.
  • Educate yourself – learn, know and talk more, understand the signs of distress and mental illness.
  • Be kind – small acts of kindness speak a lot.
  • Listen and ask – sometimes it’s best to just listen.
  • Talk about it – start a dialogue, break the silence.

Thank you for listening.

** I have no affiliation with Bell, nor do I have any close friends or family who work for Bell (although they are my mobile phone provider, if that counts as a conflict of interest).

complexity, emergence, evaluation, innovation

Do you value (social) innovation?

Do You Value the Box or What’s In It?

The term evaluation has at its root the term value, and to evaluate innovation means to assess the value that it brings in its product or process of development. It’s remarkable how much discourse on the topic of innovation is devoid of any discussion of evaluation, which raises the question: do we value innovation in the first place?

The question posed above is not a cheeky one. The question about whether or not we value innovation gets at the heart of our insatiable quest for all things innovative.

Historical trends

A look at Google N-gram data for book citations provides a historical picture of how commonly a particular word shows up in books published since 1880. Running the terms innovation, social innovation and evaluation through the N-gram software finds some curious trends. A look at the graphs below shows that the term innovation spiked after the Second World War. A closer look reveals a second major spike from the mid-1990s onward, which is likely due to the rise of the Internet.

In both cases, technology played a big role in shaping interest in innovation and its discussion. The Cold War in the 1950s and the Internet both presented new problems to find and a pressing need for those problems to be addressed.


Below that is social innovation, a newer concept (although not as new as many think), which showed a peak in citations in the 1960s and ’70s, corresponding with the U.S. civil rights movements, the expansion of social service fields like social work and community mental health, anti-nuclear organizing, and the environmental movement. This two-decade rise is followed by a sharp decline until the early 2000s, when things began to increase again.

Evaluation, however, saw the most sustained increase of the three terms over the 20th century, yet has been in decline since 1982. Most notable is the even sharper decline just as both innovation and social innovation spiked.
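For anyone wanting to replicate this, the N-gram Viewer’s exported data is just a per-year relative frequency series for each term, and spotting a peak is simple to automate. A toy sketch; the numbers below are made up purely to show the shape of the data, with real values coming from the Ngram Viewer’s export:

```python
def peak_year(series):
    """Return the year with the highest relative frequency
    in a {year: frequency} series."""
    return max(series, key=series.get)

# Hypothetical frequencies for illustration only; not real
# N-gram output.
evaluation = {1950: 0.00010, 1982: 0.00031, 2000: 0.00022}
print(peak_year(evaluation))  # → 1982
```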

Keeping in mind that this is not causal or even linked data, it is still worth asking: What’s going on? 

The value of evaluation

Let’s look at what the heart of evaluation is all about: value. The Oxford English Dictionary defines value as:

value |ˈvalyo͞o|

noun

1 the regard that something is held to deserve; the importance, worth, or usefulness of something: your support is of great value.

• the material or monetary worth of something: prints seldom rise in value | equipment is included up to a total value of $500.

• the worth of something compared to the price paid or asked for it: at $12.50 the book is a good value.

2 (values) a person’s principles or standards of behavior; one’s judgment of what is important in life: they internalize their parents’ rules and values.

verb (values, valuing, valued) [ with obj. ]

1 estimate the monetary worth of (something): his estate was valued at $45,000.

2 consider (someone or something) to be important or beneficial; have a high opinion of: she had come to value her privacy and independence.

Innovation is a buzzword. It is hard to find many organizations that do not see themselves as innovative or use the term to describe themselves somewhere in their mission, vision or strategic planning documents. A search on bookseller Amazon.com finds more than 63,000 titles organized under “innovation”.

So it seems we like to talk about innovation a great deal, we just don’t like to talk about what it actually does for us (at least in the same measure). Perhaps, if we did this we might have to confront what designer Charles Eames said:

Innovate as a last resort. More horrors are done in the name of innovation than any other.

At the same time I would like to draw inspiration from another of Eames’ quotes:

Most people aren’t trained to want to face the process of re-understanding a subject they already know. One must obtain not just literacy, but deep involvement and re-understanding.

Valuing innovation

Innovation is easier to say than to do and, as Eames suggested, is a last resort when the conventional doesn’t work. For those working in social innovation the “conventional” might not even exist as it deals with the new, the unexpected, the emergent and the complex. It is perhaps not surprising that the book Getting to Maybe: How the World is Changed is co-authored by an evaluator: Michael Quinn Patton.

While Patton has been prolific in advancing the concept of developmental evaluation, the term hasn’t caught on in widespread practice. A look through the social innovation literature finds little mention of developmental evaluation or even evaluation at all, lending support for the extrapolation made above. In my recent post on Zaid Hassan’s book on social laboratories one of my critique points was that there was much discussion about how these social labs “work” with relatively little mention of the evidence to support and clarify the statement.

One hypothesis is that evaluation can be seen as a ‘buzzkill’ to the buzzword. It’s much easier, and certainly more fun, to claim you’re changing the world than to interrogate your own activities and find that the change isn’t as big or profound as you expected. Documenting change isn’t perceived to be as fun as making change, although I would argue that one is fuel for the other.

Another hypothesis is that there is much misunderstanding about what evaluation is, with (anecdotally) many social innovators thinking that it’s all about numbers and math and that it misses the essence of the human connections that social innovation is all about.

A third hypothesis is that the evaluative thinking embedded in our discourse on change, innovation, and social movements is not aligned with the nature of systems. People are thus stuck with models of evaluation that simply don’t fit the context of what they’re doing and therefore add little of the value that evaluation is meant to reveal.

If we value something, we need to articulate what that means if we want others to follow and value the same thing. That means going beyond lofty, motherhood statements that feel good — community building, relationships, social impact, “making a difference” — and articulating what they really mean. In doing so, we are better positioned to do more of what works well, change what doesn’t, and create the culture of inquiry and curiosity that links our aspirations to our outcomes.

It means valuing what we say we value.

(As a small plug: want to learn more about this? The Evaluation for Social Innovation workshop takes this idea further and gives you ways to value, evaluate and communicate value. March 20, 2014 in Toronto).

complexity, design thinking, emergence, evaluation, systems science

Developmental Evaluation and Design

Creation for Reproduction


Innovation is about channeling new ideas into useful products and services, which is really about design. Thus, if developmental evaluation is about innovation, then it is also fundamental that those engaging in such work — on both the evaluator and program ends — understand design. In this final post in the first series of Developmental Evaluation and…, we look at how design and design thinking fit with developmental evaluation and what the implications are for programs seeking to innovate.

Design is a field of practice that encompasses professional domains, design thinking, and critical design approaches. It is a big field, and a creative one, but also a space with much richness in thinking, methods and tools that can aid program evaluators and program operators.

Defining design

In their excellent article on designing for emergence (PDF), OCAD University’s Greg Van Alstyne and Bob Logan introduce what they set out to make the shortest, most concise definition they could envision:

Design is creation for reproduction

It may also be the best (among many — see the Making CENSE blog for others) because it speaks to what design does, what it is intended to do, and where it came from, all at the same time. A quick historical look at design finds that the term didn’t really exist until the industrial revolution. It was not until we could produce things and replicate them on a wide scale that design actually mattered; prior to that, what we had was simply referred to as craft. One did not supplant the other. However, as societies transformed through migration, technology development and adoption, and shifting political and economic systems that increased collective action and participation, we saw things — products, services, and ideas — primed for replication and distribution and thus, designed.

The products, services and ideas that succeeded tended to be better designed for such replication, in that they struck a chord with an audience who wanted to further share and distribute the object. (This is not to say that all things replicated are of high quality or ethical value, just that they found the right purchase with an audience and were better designed to provoke that sharing.)

In a complex system, emergence is the force that provokes the kind of replication that we see in Van Alstyne and Logan’s definition of design. With emergence, new patterns emerge from activity that coalesces around attractors and this is what produces novelty and new information for innovation.

A developmental evaluator is someone who creates mechanisms to capture data and channel it to program staff / clients who can then make sense of it and thus either choose to take actions that stabilize that new pattern of activity in whatever manner possible, amplify it or — if it is not helpful — make adjustments to dampen it.
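The capture-sensemake-respond loop described above can be read as a simple feedback rule. A schematic sketch, with invented signal names and an arbitrary threshold purely for illustration:

```python
def de_response(signal_strength, helpful, threshold=0.5):
    """Map an emerging pattern to a developmental response:
    dampen unhelpful patterns, amplify helpful strong ones,
    and stabilize helpful but still-weak ones."""
    if not helpful:
        return "dampen"
    return "amplify" if signal_strength >= threshold else "stabilize"

print(de_response(0.8, helpful=True))   # → amplify
print(de_response(0.2, helpful=True))   # → stabilize
print(de_response(0.9, helpful=False))  # → dampen
```

In practice the ‘signal’ is whatever the evaluation’s data capture surfaces, and the sensemaking (deciding whether a pattern is helpful at all) is done with program staff and clients, not by a rule.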

But how do we do this if we are not designing?

Developmental evaluation as design

A quote from Nobel Laureate Herbert Simon is apt when considering why the term design is appropriate for developmental evaluation:

“Everyone designs who devises courses of action aimed at changing existing situations into preferred ones”.

Developmental evaluation is about modification, adaptation and evolution in innovation (poetically speaking) using data as a provocation and guide for programs. One of the key features that makes developmental evaluation (DE) different from other forms of evaluation is the heavy emphasis on use of evaluation findings. No use, no DE.

But further, what separates DE from utilization-focused evaluation (PDF) is that the use of evaluation data is intended to foster development of the program, not just use. I’ve written about what development looks like in other posts. No development, no DE.

Returning to Herb Simon’s quote, we see that the goal of DE is to provoke some discussion of development and thus change, so it could be argued that, at least at some level, DE is about design. That is a tepid assertion. A bolder one is that design is actually integral to development, and thus developmental design is what we ought to be striving for through our DE work. Developmental design is not only about evaluative thinking, but design thinking as well. It brings together the spirit of experimentation working within complexity and the feedback systems of evaluation with a design sensibility around how to sensemake, pay attention to, and transform that information into a new product evolution (innovation).

This sounds great, but if you don’t think about design then you’re not thinking about innovating, and that means you’re not really developing your program.

Ways of thinking about design and innovation

There are numerous examples of design processes and steps. A full coverage of all of this is beyond the scope of a single post and will be expounded on in future posts here and on the Making CENSE blog for tools. However, one approach to design (thinking) is highlighted below and is part of the constellation of approaches that we use at CENSE Research + Design:

The design and innovation cycle

Much of this process has been examined in the previous posts in this series, however it is worth looking at this again.

Herbert Simon wrote about design as a problem forming (finding), framing and solving activity (PDF). Other authors, like IDEO’s Tim Brown and the Kelley brothers, have written about design further (for more references check out CENSEMaking’s library section), but essentially the three domains proposed by Simon hold up as ways to think about design at a very basic level.

What design does is make the process of stabilizing, amplifying or dampening the emergence of new information intentional. Without a sense of purpose — and a mindful attention to process as well — and a sensemaking process put in place by DE, it is difficult to know what is advantageous or not. Within the realm of complexity we run the risk of amplifying and dampening the wrong things… or ignoring them altogether. This has immense consequences, as even staying still in a complex system is moving: change happens whether we want it or not.

The above diagram places evaluation near the end of the corkscrew process, but that is a bit misleading, because it implies that DE-related activities come at the end. What is being argued here is that if the stage isn’t set at the beginning by asking the big questions — the problem finding, forming and framing — then the efforts to ‘solve’ them are unlikely to succeed.

Without the means to understand how new information feeds into design of the program, we end up serving data to programs that know little about what to do with it and one of the dangers in complexity is having too much information that we cannot make sense of. In complex scenarios we want to find simplicity where we can, not add more complexity.

To do this and to foster change is to be a designer. We need to consider the program/product/service user, the purpose, the vision, the resources and the processes that are in place within the systems we are working to create and re-create the very thing we are evaluating while we are evaluating it. In that entire chain we see the reason why developmental evaluators might also want to put on their black turtlenecks and become designers as well.

No, designers don’t all look like this.

 

Photo Blueprint by Will Scullen used under Creative Commons License

Design and Innovation Process model by CENSE Research + Design

Lower image used under license from iStockphoto.

complexity · education & learning · emergence · evaluation · systems thinking

Developmental Evaluation and Mindfulness

Mindfulness in Motion?

Developmental evaluation is focused on real-time decision making for programs operating in complex, changing conditions, which can tax the attentional capacity of program staff and evaluators. Organizational mindfulness is a means of paying attention to what matters and building the capacity across the organization to better filter signals from noise.

Mindfulness is a means of introducing quiet to noisy environments, the kind that are often the focus of developmental evaluations. Like the image above, mindfulness involves remaining calm and centered while everything else is growing, crumbling and (perhaps) disembodied from all that is around it.

Mindfulness in Organizations and Evaluation

Mindfulness is the disciplined practice of paying attention. Bishop and colleagues (2004 – PDF), working in the clinical context, developed a two-component definition of mindfulness that focuses on 1) self-regulation of attention that is maintained on the immediate experience to enable pattern recognition (enhanced metacognition) and 2) an orientation to experience that is committed to and maintains an attitude of curiosity and openness to the present moment.

Mindfulness does not exist independent of the past; rather, it takes account of present actions in light of the path to the current context. As simple as it may sound, mindfulness is anything but easy, especially in complex settings with many sources of information. What this means for developmental evaluation is that there needs to be a method of capturing data relevant to the present moment, a sensemaking capacity to understand how that data fits within the overall context and system of the program, and a strategy for provoking curiosity about the data to shape innovation. Without attention, sensemaking or interest in exploring the data to innovate, there is little likelihood of much change, which is what design (the next step in DE) is all about.

Organizational mindfulness is a quality of social innovation that situates the organization’s activities within a larger strategic frame that developmental evaluation supports. A mindful organization is grounded in a set of beliefs that guide its actions as lived through practice. Without some guiding, grounded models for action an organization can go anywhere, and the data collected from a developmental evaluation has little context because nearly anything can develop from that data. Yet organizations don’t want just anything; they want the solutions that are best optimized for the current context.

Mindfulness for Innovation in Systems

Karl Weick has observed that high-reliability organizations are the way they are because of a mindful orientation. Weick and Karen Sutcliffe explored the concept of organizational mindfulness in greater detail and made the connection to systems thinking by emphasizing how a mindful orientation opens up the perceptual capabilities of an organization to see its systems differently. They describe a mindful orientation as one that redirects attention from the expected to the unexpected, and from what is comfortable, consistent, desired and agreed upon to the areas that challenge all of that.

Weick and Sutcliffe suggest that organizational mindfulness has five core dimensions:

  1. Reluctance to simplify
  2. Sensitivity to operations
  3. Commitment to resilience
  4. Deference to expertise
  5. Preoccupation with failure

Ray, Baker and Plowman (2011) looked at how these qualities were represented in U.S. business schools, finding some evidence for their existence. However, this mindful orientation is still novel, and its overlap with innovation output unverified. (This is also true for developmental evaluation itself, with few published studies illustrating that its fundamentals are applied in practice.) Vogus and Sutcliffe (2012) took this further and encouraged more research and development in this area, partly because of the lack of detailed study of how it works in practice and partly because of an absence of organizational commitment to discovery and change over existing modes of thinking.

Among the principal reasons for a lack of evidence is that organizational mindfulness requires a substantive re-orientation towards developmental processes that include both evaluation and design. For all of the talk about learning organizations in industry, health, education and social services we see relatively few concrete examples of it in action. A mistake that many evaluators and program planners make is the assumption that the foundations for learning, attention and strategy are all in place before launching a developmental evaluation, which is very often not the case. Just as we do evaluability assessments to see if a program is ready for an evaluation we may wish to consider organizational mindfulness assessments to explore how ready an organization is to engage in a true developmental evaluation. 
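To make the idea of an organizational mindfulness assessment concrete, here is a minimal sketch (Python; the 1–5 scale, the readiness threshold and the function are my own illustrative assumptions, not a validated instrument) that scores Weick and Sutcliffe’s five dimensions and surfaces the weakest one to attend to first:

```python
# Weick & Sutcliffe's five dimensions of organizational mindfulness.
DIMENSIONS = [
    "reluctance to simplify",
    "sensitivity to operations",
    "commitment to resilience",
    "deference to expertise",
    "preoccupation with failure",
]

def mindfulness_profile(scores):
    """Summarize 1-5 self-ratings: mean, weakest dimension, readiness flag."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError("score every dimension exactly once")
    mean = sum(scores.values()) / len(scores)
    weakest = min(scores, key=scores.get)  # lowest-rated dimension
    # The 3.5 cut-off is an arbitrary placeholder, not an empirical threshold.
    return {"mean": round(mean, 2), "weakest": weakest, "ready": mean >= 3.5}

profile = mindfulness_profile({
    "reluctance to simplify": 4,
    "sensitivity to operations": 3,
    "commitment to resilience": 4,
    "deference to expertise": 2,
    "preoccupation with failure": 3,
})
print(profile)
```

In this hypothetical profile the organization’s mean rating falls below the readiness cut-off and “deference to expertise” is flagged as the place to start, which is the kind of conversation an evaluability-style readiness assessment is meant to open.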

Cultivating curiosity

What Weick and Sutcliffe’s five-factor model of organizational mindfulness misses is the second part of the definition of mindfulness introduced at the beginning of this post: the part about curiosity. And while Weick and Sutcliffe speak about the challenging of assumptions in organizational mindfulness, these challenges aren’t well reflected in the model.

Curiosity is a fundamental quality of mindfulness that is often overlooked (not just in organizational contexts). Arthur Zajonc, a physicist, educator and President of the Mind and Life Institute, writes and speaks about contemplative inquiry as a process of employing mindfulness for discovery about the world around us. Zajonc is a scientist and is motivated partly by a love of, and curiosity about, both the inner and outer worlds we inhabit. His mindset — reflective of contemplative inquiry itself — is about an attention that is simultaneously open and focused.

Openness to new information and experience is one part; the focus that comes from experience and the need to draw in information to clarify intention and action is the second. These are the same patterns of movement that we see in complex systems (see the stitch image below), and they are captured in the sensing-divergent-convergent model of design evident in the CENSE Research + Design innovation arrow model below that.

Stitch of Complexity

CENSE Corkscrew Innovation Discovery Arrow

By being better attuned to the systems (big and small) around us and curiously asking questions about them, we may find that the assumptions we hold are untrue or incomplete. By contemplating fully the moment-by-moment experience of our systems, patterns emerge that are often too weak to notice otherwise, but that may drive behaviour in a complex system. This emergence of weak signals is often what shifts systems.

Sensemaking, which we discussed in a previous post in this series, is a means of taking this information and using it to understand the system and the implications of these signals.

For organizations and evaluators the next step is determining whether or not they are willing (and capable) of doing something with the findings from this discovery and learning from a developmental evaluation, which will be covered in the next post in this series that looks at design.

 

References and Further Reading: 

Bishop, S. R., Lau, M., Shapiro, S., & Carlson, L. (2004). Mindfulness: A proposed operational definition. Clinical Psychology: Science and Practice, 11(3), 230–241.

Ray, J. L., Baker, L. T., & Plowman, D. A. (2011). Organizational mindfulness in business schools. Academy of Management Learning & Education, 10(2), 188–203.

Vogus, T. J., & Sutcliffe, K. M. (2012). Organizational mindfulness and mindful organizing: A reconciliation and path forward. Academy of Management Learning & Education, 11(4), 722–735.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (1999). Organizing for high reliability: Processes of collective mindfulness. In R. I. Sutton & B. M. Staw (Eds.), Research in Organizational Behavior (Vol. 21, pp. 81–123). Stanford, CA: JAI Press.

Weick, K.E. & Sutcliffe, K.M. (2007). Managing the unexpected. San Francisco, CA: Jossey-Bass.

Zajonc, A. (2009). Meditation as contemplative inquiry: When knowing becomes love. Barrington, MA: Lindisfarne Books.

complexity · design thinking · emergence · evaluation · systems thinking

Developmental Evaluation and Sensemaking

Sensing Patterns, Seeing Pathways

Developmental evaluation is only as good as the sense that can be made from the data that is received. To assume that program staff and evaluators know how to do this might be one of the reasons developmental evaluations end up as something less than they promise. 

Developmental Evaluation (DE) is becoming a popular subject in the evaluation world. As we see greater recognition of complexity as a factor in program planning and operations, and of what it means for evaluations, it is safe to assume that developmental evaluation will continue to attract interest from program staff and evaluation professionals alike.

Yet, developmental evaluation is as much a mindset as it is a toolset and skillset; all of which are needed to do it well. In this third in a series of posts on developmental evaluation we look at the concept of sensemaking and its role in understanding program data in a DE context.

The architecture of signals and sense

Sensemaking and developmental evaluation involve creating an architecture for knowledge,  framing the space for emergence and learning (boundary specification), extracting the shapes and patterns of what lies within that space, and then working to understand the meaning behind those patterns and their significance for the program under investigation. A developmental evaluation with a sensemaking component creates a plan for how to look at a program and learn from what kind of data is generated in light of what has been done and what is to be done next.

Patterns may be knowledge, behaviour, attitudes, policies, physical structures, organizational structures, networks, financial incentives or regulations. These are the kinds of activities that are likely to create or serve as attractors within a complex system.

To illustrate, architecture can be both a literal and a figurative term. In a five-year evaluation and study of scientific collaboration at the University of British Columbia’s Life Sciences Institute, my colleagues Tim Huerta, Alison Buchan and Sharon Mortimer and I explored many of these multidimensional aspects of the program* / institution and published our findings in the American Journal of Evaluation and Research Evaluation journals. We looked at spatial configurations by doing proximity measurements that connected where people work to whom they work with and what they generated. Research has indicated that physical proximity makes a difference to collaboration (e.g., Kraut et al., 2002). Yet there is relatively little concrete evaluation of the role of space in collaboration, mostly just inferences from network studies (which we also conducted); few have actually gone into the physical areas and measured distances and people’s locations.
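A stripped-down version of a proximity measurement can be sketched as follows (Python; the floor-plan coordinates and the toy collaboration network are entirely hypothetical, not the Life Sciences Institute data): compute pairwise distances between workspaces and compare the average distance among collaborating pairs with the rest.

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

# Hypothetical workspace coordinates (metres on a floor plan) and a toy
# collaboration network of who actually works with whom.
locations = {"A": (0, 0), "B": (5, 0), "C": (40, 10), "D": (42, 12)}
collaborations = {("A", "B"), ("C", "D")}

def mean_distance(pairs):
    """Average physical distance across a set of person-pairs."""
    return sum(dist(locations[p], locations[q]) for p, q in pairs) / len(pairs)

all_pairs = set(combinations(sorted(locations), 2))
collab_mean = mean_distance(collaborations)
other_mean = mean_distance(all_pairs - collaborations)
print(collab_mean < other_mean)  # prints True: collaborators sit closer
```

In a real evaluation the same comparison would be run against observed network ties and outputs rather than a four-person toy, but the logic, connecting where people work to whom they work with, is the same.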

Why mention this? Because from a sense-making perspective those signals provided by the building itself had an enormous impact on the psychology of the collaborations, even if it was only a minor influence on the productivity. The architecture of the networks themselves was also a key variable that went beyond simple exchanges of information, but without seeing collaborations as networks it is possible that we would have never understood why certain activities produced outcomes and others did not.

The same thing exists with cognitive architecture: it is the spatial organization of thoughts, ideas, and social constructions. Organizational charts, culture, policies, and regulations all share in the creation of the cognitive architecture of a program.

Signals and noise

The key is to determine what kind of signals to pay attention to at the beginning. And as mentioned in a previous post, design and design thinking are a good precursor and adjunct to an evaluation process (and, as I’ve argued before and will elaborate on, integral to effective developmental evaluation). Patterns could be in almost anything and made up of physical, psychological, social and ‘atmospheric’ (organizational and societal environmental) data.

This might sound a bit esoteric, but by viewing these different domains with an eye of curiosity, we can see patterns that evaluators can measure, monitor, observe and otherwise record as substance for program decision making. This can be qualitative, quantitative, mixed-methods, archival and document-based, or some combination. Complex programs are highly context-sensitive, so the sensemaking process must include diverse stakeholders that reflect the very conditions in which the data is collected. Thus, if we are involving front-line worker data, then front-line workers need to be involved.

The manner in which this is done can be more or less participatory and involved depending on resources, constraints, values and so forth, but there needs to be some perspective taking from these diverse agents to truly know what to pay attention to and to determine what is a signal and what is noise. Indeed, it is through this exchange of diverse perspectives that this can be ascertained. For example, a front-line worker with a systems perspective may, if given the opportunity to look at it, see a pattern in data that is unintelligible to a high-level manager. That is what sensemaking can look like in the context of developmental evaluation.
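As a minimal sketch of what separating signal from noise can look like mechanically (Python; the indicator series, window and threshold are illustrative assumptions — in a real DE the “what counts as a signal” question is settled with stakeholders first), the following flags observations that deviate sharply from their recent baseline:

```python
def flag_signals(series, window=3, threshold=3.0):
    """Flag indices whose value strays from the trailing-window average."""
    flagged = []
    for i in range(len(series)):
        lo, hi = max(0, i - window), i  # trailing window of prior points
        if lo == hi:
            continue  # no history yet to compare against
        baseline = sum(series[lo:hi]) / (hi - lo)
        if abs(series[i] - baseline) >= threshold:
            flagged.append(i)
    return flagged

# A hypothetical monthly program indicator: month 4 jumps. Signal or noise?
visits = [10, 11, 10, 10, 16, 11, 10]
print(flag_signals(visits))  # prints [4]
```

The mechanics are trivial; the judgment calls (which indicator, what window, what threshold) are exactly the perspective-taking work described above, which is why the flagged months are a prompt for sensemaking rather than an answer.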

“What does that even mean?” 

Sensemaking is essentially the meaning that people give to an experience. Evidence is a part of the sensemaking process, although the manner in which it is used is consistent with a realist approach to science, not a positivist one. Context is critical in the making of sense and in the decisions used to act on information gathered from the evaluation. The specific details of the sensemaking process and its key methods are beyond the depth of this post; some key sources and scholars on this topic are listed below. Like developmental evaluation itself, sensemaking is an organic process that brings an element of design, design thinking, strategy and data analytics together in one space. It brings together analysis and synthesis.

From a DE perspective, sensemaking is about understanding what signals and patterns mean within the context of the program and its goals. Even if a program’s goals are broad, there must be some sense of what the program’s purpose is and thus, strategy is a key ingredient to the process of making sense of data. If there is no clearly articulated purpose for the program or a sense of its direction then sensemaking is not going to be a fruitful exercise. Thus, it is nearly impossible to disentangle sensemaking from strategy.

Understanding the system in which the strategy and ideas are to take place — framing — is also critical. An appropriate frame for the program means setting bounds for the system, connecting that to values, goals, desires and hypotheses about outcomes, and the current program context and resources.

Practical sensemaking takes place on a time scale appropriate to the complexity of the information that sits before the participants in the process. If a sensemaking initiative is done with a complex program that has a rich history and many players involved in that history, it is likely that multiple interactions and engagements with participants will be needed. This is partly because the sensemaking process is about surfacing assumptions, revisiting the stated objectives of the program, exploring data in light of those assumptions and goals, and then synthesizing it all to create some means of guiding future action. In some ways, this is about using hindsight and present sight to generate foresight.

Sensemaking is not just about meaning-making, but also a key step in change making for future activities. Sensemaking realizes one of the key aspects of complex systems: that meaning is made in the interactions between things and less about the things themselves.

Building the plane while flying it

In some cases the sense made from data and experience can only be made in the moment. Developmental evaluation has been called “real time” evaluation by some to reflect the notion that evaluation data is made sense of as the program unfolds. To draw on a metaphor illustrated in the video below, sensemaking in developmental evaluation is somewhat like building the plane while flying it.

Like developmental evaluation as a whole, sensemaking isn’t a “one-off” event; rather, it is an ongoing process that requires attention throughout the life-cycle of the evaluation. As the evaluator and evaluation team build capacity for sensemaking, the process gets easier and less involved each time it’s done, as the program builds its connection to both its past and present context. However, such connections are tenuous without a larger focus on building mindfulness into the program — whether organization or network — to ensure that reflection and attention are paid to its activities on an ongoing basis, consistent with strategy, complexity and the evaluation itself.

We will look at the role of mindfulness in an upcoming post. Stay tuned.

* The Life Sciences Institute represented a highly complicated program evaluation because it was simultaneously bounded as a physical building, a corporate institution within a larger institution, and a set of collaborative structures that were further complicated by having investigator-led initiatives combined with institutional-level ones, where individual investigators were both independent and collaborative. Taken together, this is what was considered to be the ‘program’.

References & Further Reading:

Dervin, B. (1983). An overview of sense-making research: Concepts, methods and results to date. International Communication Association Meeting, 1–13.

Klein, G., & Moon, B. (2006). Making sense of sensemaking 1: Alternative perspectives. Intelligent Systems. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1667957

Klein, G., Moon, B., & Hoffman, R. R. (2006). Making sense of sensemaking 2: A macrocognitive model. IEEE Intelligent Systems, 21(4).

Kolko, J. (2010). Sensemaking and framing: A theoretical reflection on perspective in design synthesis. In Proceedings of the 2010 Design Research Society (DRS) International Conference on Design & Complexity. Montreal, QC.

Kraut, R., Fussell, S., Brennan, S., & Siegel, J. (2002). Understanding effects of proximity on collaboration: Implications for technologies to support remote collaborative work. In Distributed Work (pp. 137–162).

Mills, J. H., Thurlow, A., & Mills, A. J. (2010). Making sense of sensemaking: The critical sensemaking approach. Qualitative Research in Organizations and Management: An International Journal, 5(2), 182–195.

Rowe, A., & Hogarth, A. (2005). Use of complex adaptive systems metaphor to achieve professional and organizational change. Journal of advanced nursing, 51(4), 396–405.

Norman, C. D., Huerta, T. R., Mortimer, S., Best, A., & Buchan, A. (2011). Evaluating discovery in complex systems. American Journal of Evaluation, 32(1), 70–84.

Weick, K. E. (1995). The Nature of Sensemaking. In Sensemaking in Organizations (pp. 1–62). Sage Publications.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the Process of Sensemaking. Organization Science, 16(4), 409–421.

Photo by the author of Happy Space: NORA, An interactive Study Model at the Museum of Finnish Architecture, Helsinki, FI.

behaviour change · complexity · emergence · evaluation · knowledge translation

Bringing Design into Developmental Evaluation

Designing Evaluation

Developmental evaluation is an approach to understanding and shaping programs in service of those who wish to grow and evolve what is done in congruence with complexity rather than ignoring it. This requires not only feedback (evaluation) but also skills in using that feedback to shape the program (design), for without both we may end up doing neither. 

A program operating in an innovation space, one that requires adaptation, foresight and feedback to make adjustments on-the-fly is one that needs developmental design. Developmental design is part of an innovator’s mindset that combines developmental evaluation with design theory, methods and practice. Indeed, I would argue that exceptional developmental evaluations are by their definition examples of developmental design.

Connecting design with developmental evaluation

The idea of developmental design emerged from work I’ve done exploring developmental evaluation in practice in health and social innovation. For years I led a social innovation research unit at the University of Toronto that integrated developmental evaluation with social innovation for health promotion and constantly wrestled with ways to use evidence to inform action. Traditional evidence models are based on positivist social and basic science that aim to hold constant as many variables as possible while manipulating others to enable researchers or evaluators to make cause-and-effect connections. This is a reasonable model when operating in simple systems with few interacting components. However, health promotion and social systems are rarely simple. Indeed, not only are they not simple, they are most often complex (many interactions happening at multiple levels on different timescales simultaneously). Thus, models of evaluation are required that account for complexity.

Doing so requires attention to larger macro-level patterns of activity within a program to assess system-level changes, and a focus on the small, emergent properties that are generated from contextual interactions. Developmental evaluation was first proposed by Michael Quinn Patton, who brought together complexity theory with utilization-focused evaluation (PDF) to help program planners and operators develop their programs with complexity in mind while supporting innovation. Developmental evaluation provided a means of linking innovation to process and outcomes in a systematic way without creating rigid, inflexible boundaries that are generally incompatible with complex systems.

Developmental evaluation is challenging enough on its own because it requires an appreciation of complexity and flexibility in understanding evaluation, as well as a strong command of multiple evaluation methods to accommodate the diversity of inputs and processes that complex systems introduce. However, a further complication is the need to understand how to take that information and apply it meaningfully to the development of the program. This is where design comes in.

Design for better implementation

Design is a field that emerged in the 18th century, when mass production first made it possible for the creative act to move beyond making unique objects to creating mass-market ones. Ideas, too, were mass-produced, as the printing press, telegraph and radio, combined with the means of creating and distributing these technologies, made intellectual products easier to produce as well. Design is what OCADU’s Greg Van Alstyne and Bob Logan refer to as “creation for reproduction” (PDF).

Developmental design links this intention of creation for reproduction, and the design for emergence that Van Alstyne and Logan describe, with the foundations of developmental evaluation. It brings the feedback mechanisms of evaluation together with the solution generation that comes from design.

The field of implementation science emerged from within the health and medical science community after a realization that simple idea sharing and knowledge generation were insufficient to produce change without understanding how such ideas and knowledge were implemented. It came from an acknowledgement that there is a science (or an art) to implementing programs, and that by learning how these programs are run and assessed we can do a better job of translating and mobilizing knowledge. Design is the membrane of sorts that holds all of this together and guides the use of knowledge in the construction and reconstruction of programs. It is the means of holding evaluation data and of shaping the program development and implementation questions at the outset.

Without an understanding of how ideas are made manifest in a program, we are at risk of creating more knowledge and less wisdom, more data and less impact. Just as we made the incorrect assumption that having knowledge was the same as knowing what to do with it or how to share it (which is why fields like knowledge translation and mobilization were born), so too have we made the assumption that program professionals know how to design their programs developmentally. Creating a program from scratch is one thing, but doing a live transformation and re-development is something else.

Developmental design is akin to building a plane while flying it. There are construction skills that are unique to this situation that are different from, but build on, many conventional theories and methods of program planning and evaluation, but like developmental evaluation, extend beyond them to create a novel approach for a particular class of conditions. In future posts I’ll outline some of the concepts of design that are relevant to this enterprise, but in the meantime encourage you to visit the Censemaking Library section on design thinking for some initial resources.

The question remains: are we building dry docks for ships at sea, or platforms for constructing aerial, flexible craft to navigate the changing headwinds and currents?

 

Image used under license.