Tag: developmental evaluation

complexity, education & learning, evaluation, systems thinking

Developmental Evaluation: Questions and Qualities

Same thing, different colour or different thing?

Developmental evaluation, a form of real-time evaluation focused on innovation and complexity, is gaining interest and attention among funders, program developers, and social innovators. Yet its popularity is revealing fundamental misunderstandings and misuse of the term that, if left unquestioned, may threaten the advancement of this important approach as a tool to support innovation and resilience.

If you are operating in the social service, health promotion or innovation space it is quite possible that you’ve been hearing about developmental evaluation, an emerging approach to evaluation that is suited for programs operating in highly complex, dynamic conditions.

Developmental evaluation (DE) is an exciting advancement in evaluative and program design thinking because it links those two activities together and creates an ongoing conversation about innovation in real time, facilitating strategic learning about what programs do and how they can evolve wisely. Because it is rooted in traditional program evaluation theory and methods as well as complexity science, it takes a realist approach to evaluation, making it fit the thorny, complex, real-world situations that many programs inhabit.

I ought to be excited at seeing DE brought up so often, yet I am often not. Why?

Building a better brand for developmental evaluation?

Alas, with rare exception, when I hear someone speak about the developmental evaluation they are involved in, I fail to hear any of the indicator terms one would expect from such an evaluation. These include terms like:

  • Program adaptation
  • Complexity concepts like emergence, attractors, self-organization, and boundaries
  • Strategic learning
  • Surprise!
  • Co-development and design
  • Dialogue
  • System dynamics
  • Flexibility

DE is following the well-worn path laid by terms like systems thinking, which gets less useful every day as it comes to refer to any mode of thought that focuses on the bigger context of a program (the system (?) — whatever that is, it's never elaborated on), even when that thinking has none of the structure, discipline, method or focus one would expect from true systems thinking. In other words, it's thinking about a system without the effort of real systems thinking. Still, people see themselves as systems thinkers as a result.

I hear the term DE being used more and more in this cavalier manner, which I suspect reflects aspiration rather than reality.

This aspiration is likely about wanting to be seen (by oneself and others) as innovative, adaptive, participative, and a true learning organization. DE has the potential to support all of this, but accomplishing these things requires an enormous amount of commitment. It is not for the faint of heart, the rigid and inflexible, the traditionalists, or those who have little tolerance for risk.

Doing DE requires that you set up a system for collecting, sharing, sensemaking, and designing-with data. It means being willing to — and competent enough to know how to — adapt your evaluation design and your programs themselves in measured, appropriate ways.
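What might this look like in practice? Below is a minimal sketch in Python of the kind of shared learning log such a system implies: observations are captured as they happen and tagged, and recurring tags are surfaced for group sensemaking sessions. Every name, method and tag here is invented for illustration; this is not a prescribed DE tool.

```python
# A minimal sketch of a DE learning log: collect observations with tags,
# then surface recurring tags as candidate patterns for sensemaking.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class LearningLog:
    observations: list = field(default_factory=list)

    def record(self, note: str, tags: list[str]) -> None:
        """Capture an observation as it happens, with free-form tags."""
        self.observations.append({"note": note, "tags": tags})

    def emerging_patterns(self, min_count: int = 2) -> list[tuple[str, int]]:
        """Surface tags that recur, as prompts for a sensemaking session."""
        counts = Counter(tag for obs in self.observations for tag in obs["tags"])
        return [(tag, n) for tag, n in counts.most_common() if n >= min_count]

log = LearningLog()
log.record("Drop-in numbers spiked after the schedule change", ["adaptation", "surprise"])
log.record("Staff improvised a new intake step", ["adaptation"])
print(log.emerging_patterns())  # [('adaptation', 2)]
```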

DE is about discipline, not precision. Too often, I see quests to get a beautiful, elegant design to fit the 'social messes' that are the programs under evaluation, only to do what Russell Ackoff calls "the wrong things, righter" by applying a standard, rigid method to a slippery, complex problem.

Maybe we need to build a better brand for DE.

Much ado about something

Why does this fuss about the way people use the term DE matter? Is this not some academic rant based on a sense of ‘preciousness’ of a term? Who cares what we call it?

This matters because the programs that use and can benefit from DE matter. If it's just gathering some loose data, slapping it together, calling it an evaluation and knowing that nothing will ever be done with it, then maybe that's OK (actually, it's not OK either — but let's pretend for the sake of the point). When real program decisions are made, jobs are kept or lost, communities are strengthened or weakened, and the energy and creative talents of those involved are put to the test because of evaluation and its products, the details matter a great deal.

If DE promises a means to critically, mindfully and thoroughly support learning and innovation, then it needs to keep that promise. But that promise can only be kept if what we call DE is not something else.

That ‘something else’ is often a form of utilization-focused evaluation, or maybe participatory evaluation or it might simply be a traditional evaluation model dressed up with words like ‘complexity’ and ‘innovation’ that have no real meaning. (When was the last time you heard someone openly question what someone meant by those terms?)

We take such terms as given and make enormous assumptions about what they mean that are not always supported. There is nothing wrong with any of these methods if they are appropriate, but too often I see mismatches between the problem and the evaluative thinking and practice tools used to address them. DE is new, sexy and a sure sign of innovation to some, which is why it is often picked.

Yet it's like saying "I need a 3-D printer" when you're looking to fix a pipe on your sink and what you need is a wrench, because the printer is the latest tool innovation and wrenches are "last year's" tool. It makes no sense. Yet it's done all the time.

Qualities and qualifications

There is something alluring about the mysterious. Innovation, design and systems thinking all have elements of mystery to them, which allows for obfuscation, confusion and well-intentioned errors in judgement depending on who and what is being discussed in relation to those terms.

I've started seeing recent university graduates claiming to be developmental evaluators who have almost no grounding in complexity or service design and have completed just a single course in program evaluation. I'm seeing traditional organizations recruit and hire for developmental evaluation without making any adjustments to their expectations, modes of operating, or timelines, while still expecting results that could only come from DE. It's as I've written before, and as Winston Churchill once said:

I am always ready to learn, but I don’t always like being taught

Many programs are not even primed to learn, let alone be taught.

So what should someone look for in DE and those who practice it? What are some questions those seeking DE support should ask of themselves?

Of evaluators

  • What familiarity and experience do you have with complexity theory and science? What is your understanding of these domains?
  • What experience do you have with service design and design thinking?
  • What kind of evaluation methods and approaches have you used in the past? Are you comfortable with mixed-methods?
  • What is your understanding of the concepts of knowledge integration and sensemaking? And how have you supported others in using these concepts in your career?
  • What is your education, experience and professional qualifications in evaluation?
  • Do you have skills in group facilitation?
  • How open and willing are you to support learning, adapt, and change your own practice and evaluation designs to suit emerging patterns from the DE?

Of programs

  • Are you (we) prepared to alter our normal course of operations in support of the learning process that might emerge from a DE?
  • How comfortable are we with uncertainty? Unpredictability? Risk?
  • Are the timelines and boundaries we place on the DE flexible and negotiable?
  • What kind of experience do we have with truly learning, and are we prepared to create a culture around the evaluation that is open to learning? (This means tolerance of ambiguity, failure, surprise, and new perspectives.)
  • Do we have practices in place that allow us to be mindful and aware of what is going on regularly (as opposed to every 6-months to a year)?
  • How willing are we to work with the developmental evaluator to learn, adapt and design our programs?
  • Are our funders/partners/sponsors/stakeholders willing to come with us on our journey?

Of both evaluators and program stakeholders

  • Are we willing to be open about our fears, concerns, ideas and aspirations with ourselves and each other?
  • Are we willing to work through data that is potentially ambiguous, contradictory, confusing, time-sensitive, context-sensitive and incomplete in capturing the entire system?
  • Are we willing/able to bring others into the journey as we go?

DE is not a magic bullet, but it can be a very powerful ally to programs that are operating in domains of high complexity and require innovation to adapt, thrive and build resilience. It is an important job and a formidable challenge with great potential benefits to those willing to dive into it competently. It is for these reasons that it is worth doing and doing well.

Getting there means taking seriously both DE and the demands it puts on us, the requirements for all involved, and the need to be clear in our language, lest we let the not-good-enough be the enemy of the great.


Photo credit: Highline Chairs by the author

innovation, social innovation

The Finger Pointing to the Moon

SuperLuna

In social innovation we are at risk of confusing our stories of success for real, genuine impact. Without theories, implementation science or evaluation we risk aspiring to travel to the moon, yet leaving our rockets stuck on the launchpad.  

There is a Buddhist expression that goes like this:

Be careful not to confuse the finger pointing to the moon for the moon itself. *

It’s a wonderful phrase that is playful and yet rich in many meanings. Among the most poignant of these meanings is related to the confusion between representation and reality, something we are starting to see exemplified in the world of social innovation and its related fields like design and systems thinking.

On July 13, 2014 the earth experienced a "supermoon" (captured in the above photograph), so named because of its close passage to earth. While it may have seemed almost close enough to touch, it was still at a distance unfathomable to nearly everyone on this planet. There were a lot of fingers pointed to the moon that night.

While the moon has held fascination for humans for millennia, it’s also worth drawing our attention to the pointing fingers, too.

Pointing fingers

How often do you hear "we are doing amazing stuff" when leaders describe their social innovations in the community, universities, government, business or partnerships between them? Thankfully, it's probably a lot more than ever, because the world needs good, quality innovative thinking and action. Indeed, judging from the rhetoric at conferences and events and the published literature in academia and the popular press, it seems we are becoming more innovative all the time.

We are changing the world.

…Except, that is a largely useless statement on its own, even if well meaning.

Without documentation of what this "amazing stuff" looks like, a theory or logic explaining how those activities are connected to an outcome, and an observed link between it all (i.e., evaluation), there really is no evidence that the world is changed – or at least changed in a manner that is better than had we done something else or nothing at all. That is the tricky part about working with complex systems, particularly large ones. How the World is Changed is the subtitle of the book by Brenda Zimmerman, Frances Westley and Michael Quinn Patton on complexity and evaluation in social change, Getting to Maybe. It is because change requires theory, strategic implementation and evaluation that these three leaders came together to discuss what can be called social innovation. They introduce theory, strategy and evaluation ideas in the book and — while it has remained a popular text — I rarely see those ideas referred to in serious conversations about social innovation.

Unfortunately, concrete discussion of these three areas — theory, strategic implementation, and evaluation — is largely absent from the dialogue on social innovation. Nowhere was this more evident than in the social innovation week events held across Canada in May and June of this year as part of a series of gatherings between practitioners, researchers and policy makers from all kinds of sectors and disciplines. The events brought together some of the leading thinkers, funders, institutes and social labs from around the world and were as close to the "social innovation olympics" as one could get. The stories told were inspirational, the diversity in the programming was wide, and the ideas shared were creative and interesting.

And yet, many of those I spoke to (myself included) were left with the question: What do I do with any of this? Without something specific to anchor to, that question remained unanswered.

Lots of love, not enough (research) power

As often happens, these gatherings served more as a rallying cry for those working in a sector — something quite important on its own as a critical support mechanism — and less as a challenge to ourselves. As Geoff Mulgan from Nesta noted in the closing keynote to the Social Frontiers event in Vancouver (riffing off Adam Kahane's notion of power and love as a vehicle for social transformation), the week featured a lot of love and not so much expression of power (as in critique).

Reflecting on the social innovation events I’ve attended, the books and articles I’ve read, and the conversations I’ve had in the first six months of 2014 it seems evident that the love is being felt by many, but that it is woefully under-powered (pun intended). The social innovation week events just clustered a lot of this conversation in one week, but it’s a sign of a larger trend that emphasizes storytelling independent of the kind of details that one might find at an academic event. Stories can inspire (love), but they rarely guide (power). Adam Kahane is right: we need both to be successful.

The good news is that we are doing love very well and that’s a great start. However, we need to start thinking about the power part of that equation.

There is a dearth of quality research in the field of social innovation and relatively little in the way of concrete theory or documented practice to guide anyone new to this area of work. Yes, there are many stories, but these offer little beyond inspiration to follow. It’s time to add some guidance and a space for critique to the larger narrative in which these stories are told.

Repeating patterns

What often comes from the Q & A sessions following a presentation of a social innovation initiative is the same set of answers offered as 'lessons learned':

  • Partnerships and trust are key
  • This is very hard work and it's all very complex
  • Relationships are important
  • Get buy-in from stakeholders and bring people together to discuss the issues
  • It always takes longer than you think to do things
  • It’s hard to get and maintain resources

I can't think of a single presentation over the past six months where these weren't presented as 'take-home messages'.

Yet none of these answers explain what was done in tangible terms, how well it was done, what alternatives exist (if any), what the rationale for the program was and what research, evidence or theory underpins that logic, what unintended consequences have emerged from these initiatives, or what evaluated outcomes they had besides the number of participants, events, or dollars moved.

We cannot move forward beyond love if we don’t find some way to power-up our work.

Theories of change: The fingers and the moons

Perhaps the best place to start to remedy this problem of detail is developing a theory of change for social innovation**.

Indeed, the emergence of discourse on theory of change in the worlds of social enterprise, innovation and services in recent years has been refreshing. A theory of change is pretty much what it sounds like: a set of interconnected propositions that link ideas to outcomes and the processes that exist between them all. A theory of change answers the question: Why should this idea/program/policy produce (specific) changes?
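Because a theory of change is a set of linked propositions, it can be treated as a directed graph and inspected for gaps. Below is a hedged illustration in Python, with program elements invented for the example: if the claimed impact is not reachable from any activity, the theory has a missing link.

```python
# A small sketch: a theory of change as a directed graph, where each edge is
# a proposition ("doing X should produce Y"). All nodes here are invented.
from collections import deque

links = {
    "peer mentoring sessions": ["participants feel supported"],
    "participants feel supported": ["participants stay engaged"],
    "participants stay engaged": ["improved wellbeing"],
}

def theory_holds(links, activities, impact):
    """Breadth-first search: is the claimed impact reachable from what we do?"""
    queue, seen = deque(activities), set(activities)
    while queue:
        node = queue.popleft()
        if node == impact:
            return True
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# True: every step in the chain states why the next one should follow.
print(theory_holds(links, {"peer mentoring sessions"}, "improved wellbeing"))
```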

The strength of the theory of change movement (as one might call it) is that it is inspiring social innovators to think critically about the logic in their programs at a human scale. More flexible than a program logic model and more detailed than a simple hypothesis, a theory of change can guide strategy and evaluation simultaneously and works well with other social innovation-friendly concepts like developmental evaluation and design.

The weakness in the movement is that many theories of change fail to consider what has already been developed. There is an enormous amount of conceptual and empirical work on behaviour change theories at the individual, organizational, community and systems levels that can inform a theory of change. Disciplines such as psychology, sociology, political theory, geography and planning, business and organizational behaviour, evolutionary biology and others all have well-researched and well-developed theories to explain changes in activity. Too often, I see theories developed without knowledge or consideration of such established theories. This is not to say that one must rely on past work (particularly in the innovation space, where examples might be few in number), but if a theory is solid and has evidence behind it then it is worth considering. Not all theories are created equal.

It is time for social innovation to start raising the bar for itself and the world it seeks to change. It is time to start advancing theories, strategic implementation and evaluation practice and research so that the social innovation events of the future foster real power for change and not just inspiration and love.


* One of the more-cited translated versions of this phrase is attributed to Thich Nhat Hanh, who suggests the Buddha remarked: "just as a finger pointing at the moon is not the moon itself. A thinking person makes use of the finger to see the moon. A person who only looks at the finger and mistakes it for the moon will never see the real moon."

** This actually means many theories of change. A theory of change is program-specific; it might resemble another program's and be built upon the same foundations, but just as a program logic model is unique to each program, so too is a theory of change.

Photo credit: SuperLuna with different filters by Paolo Francolini used under Creative Commons License via Flickr

behaviour change, evaluation, innovation

Beyond the Big and New: Innovating on Quality

The newest, biggest, shiny thing

Innovation is a term commonly associated with 'new' and sparkly products and things, but that quest for the bigger and shinier in what we do often obscures the true innovative potential within systems. Rethinking what we mean by innovation and considering the role that quality plays might help us determine whether bigger and glossier is just that, rather than necessarily better.

Einstein's oft-paraphrased line about new thinking and problems goes something like this:

“Problems cannot be solved with the same mind set that created them.”

In complex conditions, this quest for novel thinking is not just ideal, it's necessary. However genuine, this quest for the new idea and the new thing draws heavily upon widely shared human fears of the unknown; it is also framed within a context of Western values. Not all cultures revere the new over what came before it, but in the Western world the 'new' has become celebrated, and nowhere more so than through the word innovation.

Innovation: What’s in a word?

Innovation web

A look at some of the terms associated with innovation (above) finds an emphasis on discovery and design, which can imply a positive sense of wonder and control to those with Westernized sentiments. Indeed, a survey of the landscape of actors, services and products seeking to make positive change in the world finds innovation everywhere and an almost obsessive quest for ideas. What is less attended to is providing a space for these ideas to take flight and answer meaningful, not trivial, questions in an impactful way.

Going Digital Strategy by Tom Fishburne

I recently attended an event with Zaid Hassan speaking on Social Labs and his new book on the subject. While there was much interest in the way a social lab engages citizens in generating new ideas, I was pleased to hear Hassan emphasize that the energy of a successful lab must be directed at implementing ideas in practice, not just generating new ones.

Another key point of discussion was the overall challenge of going deep into something and the costs of doing that. This last point got me thinking about the way we frame innovation and what is privileged in that discussion.

Innovating beyond the new

Sometimes innovation takes place not only in building new products and services, but in thinking new thoughts, and seeing new possibilities.

Thinking new thoughts requires asking new or better questions of what is happening. As for seeing new possibilities, that might mean looking at things long forgotten and at past practices to inform new practice, not just coming up with something novel. Ideas are sexy and fun and generate excitement, yet it is the realization of these ideas that matters more than anything.

The 'new' idea might actually be an old one, rethought and re-purposed. The reality for politicians and funders, however, is that only 'new' things tend to count as action and work. Re-purposing knowledge and products, re-thinking, or simply developing ideas in an evolutionary manner are harder to see and less sexy to sell to donors and voters.

When new means better, not necessarily bigger

Much of the social innovation sector is consumed, even obsessed, with scale. The Stanford Social Innovation Review, the key journal for the burgeoning field, is filled with articles, events and blog posts that emphasize the need for scaling social innovations. Scaling, in nearly all of these contexts, means taking an idea to more places to serve more people. The idea of taking a constructive idea that, when realized, benefits as many as possible is hard to argue against. However, such a goal is predicated upon a number of assumptions about the intervention, the population of focus, the context, resource allocations, and the political and social acceptability of what is proposed, and these are often not aligned.

What is bothersome is that there is nowhere near the same concern for quality in these discussions. In public health we often speak of intervention fidelity, intensity, duration, reach, fit and outcome, particularly with those initiatives that have a social component. In that context, low-quality information poses a real threat: someone may make a poorly informed or misled choice. We don't seem to see that same care and attention in other areas of social innovation. Sometimes that is because there is no absolute level of quality to judge, or the benefits of greater quality are imperceptibly low.

But I suspect that this is a case of not asking the question about quality in the first place. Apple under Steve Jobs was famous for creating “insanely great” products and using a specific language to back that up. We don’t talk like that in social innovation and I wonder what would happen if we did.

Would we pay more attention to showing impact than just talking about it?

Would we design more with people than for them?

Would we be bolder in our experiments?

Would we be less quick to use knee-jerk dictums around scale and speak of depth of experience and real change?

Would we put resources into evaluation, sensemaking and knowledge translation so we could adequately share our learning with others?

Would we be less hyperbolic and sexy?

Might we be more relevant to more people, more often and (ironically, perhaps) scale social innovation beyond measure?


Marketoonist Cartoon used under license.


innovation

Acting on Failure or Failure to Act?


Who would have thought that failure would be held up as something to be desired just a few years ago? Yet, it is one thing to extol the virtues of failure in words, it is quite another to create systems that support failure in action and if the latter doesn’t follow the former, failure will truly live up to its name among the innovation trends of the 21st century. 

Ten years ago, if someone had said that failure would be a hot term in 2014, I would have thought that person wasn't in their right mind. But here we are, seeing failure held up as an almost noble act, with conferences, books and praise being heaped on those who fail. Failure is now the innovator's not-so-secret tool for success. As I've written before, failure is being treated in a fetishistic manner as this new way to unlock creativity and innovation, when what it might be is simply a means of reducing people's anxieties.

Saying it's OK to fail and actually creating an environment where failure is accepted as a reasonable — maybe even expected — outcome are two different things. Take strategic planning. Ever see a strategic plan that includes failure in it? Have you ever seen an organization claim that it will do fewer things, fail more often, and learn more through "not-achieving" than succeeding? Probably not.

How often has a performance review for an individual or organization included learning (which is often related to failure) as a meaningful outcome? By this I refer to the kind of learning that comes from experience, from reflective practice, from the journey back and forth through confusion and clarity and from the experimentation of trying and both failing and succeeding. It’s been very rare that I’ve seen that in either corporate or non-profit spaces, at least in any codified form.

But as Peter Drucker once argued: what gets measured gets managed.

If we don't measure failure, we don't manage for it, nor do our teams include failure among their core expectations, activities, outcomes, plans or aspirations.

Failure, mindfulness and judgement

In a 2010 post in Harvard Business Review, Larry Prusak commented on the phenomenon of measurement and noted that judgement — something that comes from experience, which includes failure — is commonly missing from our assessments of the performance of individuals and organizations alike. Judgement is made on the basis of good information and knowledge, but also experience in using them in practice, reminding me of a quote a wise elder told me:

Good judgment comes from experience, but experience comes from bad judgment.

One of the persistent Gladwellian myths* out there is the "10,000 hours" rule, which suggests that if we put that amount of time into something we're likely to achieve a high level of expertise. This is true only if most of those 10,000 hours were mindful, deliberate ones devoted to the task at hand and involved learning from the successes, failures, processes and outcomes associated with those tasks. That last part about mindful, reflective attention — deliberate practice, as the original research calls it — is left out of most discussions on the subject (a fate so many Gladwellian myths suffer).

To learn from experience one has to pay attention to what one is doing, what one is thinking while doing it, and assessing the impact (evaluation) of that action once whatever is done is done. For organizations, this requires alignment between what people do and what they intend to do, requiring that mindful evaluation and monitoring be linked to strategy.

If we follow this lead where it takes us is placing failure near the centre of our strategy. How comfortable are you with doing that in your organization?

A failure of failure

Failure is among the most emotionally loaded words in the English language. While I often joke that the term evaluation is the longest four-letter word in the dictionary, failure is not far off. The problem with failure, as noted in an earlier post, is that we’ve been taught that failure is to be avoided and the opposite of success, which is viewed in positive terms.

Yet, there is another reason to question the utility of failure and that is also related to the term success. In the innovation space, what does success mean? This is not a trivial question because if one asks bold questions to seek novel solutions it is very likely that we don’t know what success actually looks like except in its most general sense.

A reading of case studies from Amazon to Apple and Acumen to Ashoka finds that their success looks different from what the originators intended. Sometimes this success is far better and more powerful, and sometimes it's just different, but in all cases the path was littered with lessons and no small number of failures. They succeeded because they learned, not because they failed.

Why? Because those involved in creating these ‘failures’ were paying attention, used the experience as feedback and integrated that into the next stage of development. With each stage comes more lessons and new challenges and thus, failure is only so if there is no learning and reflection. This is not something that can be wished for; it must be built into the organization.

So what to do?

  • Build in the learning capacity for your organization by making learning a priority and creating the time, space and organizational support for getting feedback to support learning. Devoting a small chunk of time to every major meeting to reflecting back what you’re learning is a great way to start.
  • Get the right feedback. Developmental evaluation is an approach that can aid organizations working in the innovation space to be mindful.
  • Ask lots of questions of yourself, your stakeholders, what you do and the systems you’re in.
  • Learn how to design for your particular program context based on feedback coming from the question asking and answering. Design is about experimenting without the expectation of immediate success.
  • Develop safe-fail experiments that allow you to try novel approaches in a context that is of relatively low risk to the entire organization.

There are many ways to do this and systems that can support you in truly building the learning capacity of your organization to be better at innovating while changing the relationship you have with ‘failure’.

For more information about how to do this, CENSE Research + Design offers consultation and training to get organizations up to speed on designing for social innovation.


* Refers to ideas popularized by journalist and essayist Malcolm Gladwell that are based on scientific research, distilled into accessible forms for mass-market reading, and that become popular and well-known through further social discussion in forms that over-simplify and even distort the original scientific findings. It's a social version of the "telephone game". The 10,000 hour 'rule' was taken from original research by K. Anders Ericsson and colleagues on deliberate practice and is often discussed in the context of professional (often medical) training, where the original research was focused. This distortion is not something Gladwell intends; rather, it becomes an artifact of having ideas told over and again between people who may have never seen the original work, or even Gladwell's, but take up ideas that become rooted in popular culture. A look at citations on failure and innovation finds that the term deliberate practice is rarely, if ever, used in discussions of the "10,000 hour rule".


Photo Credit: Project365Fail by Mark Ordonez used under Creative Commons license via Flickr. Thanks for sharing, Mark!


complexity, emergence, evaluation, innovation

Do you value (social) innovation?

Do You Value the Box or What's In It?

The term evaluation has at its root the term value, and to evaluate innovation means to assess the value that it brings in its product or process of development. It's remarkable how much discourse on the topic of innovation is devoid of discussion of evaluation, which raises the question: Do we value innovation in the first place?

The question posed above is not a cheeky one. The question about whether or not we value innovation gets at the heart of our insatiable quest for all things innovative.

Historical trends

A look at Google N-gram data for book citations provides a historical picture of how commonly a particular word shows up in books published since 1880. Running the terms innovation, social innovation and evaluation through the N-gram software finds some curious trends. A look at the graphs below finds that the term innovation spiked after the Second World War. A closer look reveals a second major spike from the mid-1990s onward, which is likely due to the rise of the Internet.

In both cases, technology played a big role in shaping the interest in innovation and its discussion. The Cold War of the 1950s and the Internet both presented new problems to find and a need for those problems to be addressed.
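For anyone wanting to reproduce this kind of look-up, the chart data can be pulled programmatically. The sketch below queries the Ngram Viewer's JSON endpoint; this endpoint is unofficial and undocumented, so the URL, parameters and corpus label are assumptions that may change rather than a stable API.

```python
# A minimal sketch for fetching N-gram frequencies; the endpoint is
# unofficial and its parameters (especially the corpus label) may change.
import requests

def ngram_frequencies(terms, start=1880, end=2008):
    """Return {term: [relative frequency per year]} from the Ngram Viewer."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": ",".join(terms),
            "year_start": start,
            "year_end": end,
            "corpus": "en-2019",  # assumed corpus label
            "smoothing": 3,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return {s["ngram"]: s["timeseries"] for s in resp.json()}

freqs = ngram_frequencies(["innovation", "social innovation", "evaluation"])
for term, series in freqs.items():
    # Report the year of peak relative frequency for each term.
    print(term, "peaked in", 1880 + series.index(max(series)))
```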

[Three N-gram charts: book citations for "innovation", "social innovation" and "evaluation", 1880 onward]

Below that is social innovation, a newer concept (although not as new as many think), which showed a peak in citations in the 1960s and '70s, corresponding with the U.S. civil rights movements, the expansion of social service fields like social work and community mental health, anti-nuclear organizing, and the environmental movement. This rise over two decades was followed by a sharp decline until the early 2000s, when things began to increase again.

Evaluation, however, saw the most sustained increase of the three terms over the 20th century, yet has been in decline ever since 1982. Most notable is the even sharper decline as both innovation and social innovation spiked.

Keeping in mind that this is not causal or even linked data, it is still worth asking: What’s going on? 

The value of evaluation

Let’s look at what the heart of evaluation is all about: value. The Oxford English Dictionary defines value as:

value |ˈvalyo͞o|

noun

1 the regard that something is held to deserve; the importance, worth, or usefulness of something: your support is of great value.

• the material or monetary worth of something: prints seldom rise in value | equipment is included up to a total value of $500.

• the worth of something compared to the price paid or asked for it: at $12.50 the book is a good value.

2 (values) a person’s principles or standards of behavior; one’s judgment of what is important in life: they internalize their parents’ rules and values.

verb (values, valuing, valued) [ with obj. ]

1 estimate the monetary worth of (something): his estate was valued at $45,000.

2 consider (someone or something) to be important or beneficial; have a high opinion of: she had come to value her privacy and independence.

Innovation is a buzzword. It is hard to find many organizations that do not see themselves as innovative or use the term to describe themselves in some part of their mission, vision or strategic planning documents. A search on bookseller Amazon.com finds more than 63,000 titles organized under "innovation".

So it seems we like to talk about innovation a great deal, we just don’t like to talk about what it actually does for us (at least in the same measure). Perhaps, if we did this we might have to confront what designer Charles Eames said:

Innovate as a last resort. More horrors are done in the name of innovation than any other.

At the same time I would like to draw inspiration from another of Eames’ quotes:

Most people aren’t trained to want to face the process of re-understanding a subject they already know. One must obtain not just literacy, but deep involvement and re-understanding.

Valuing innovation

Innovation is easier to say than to do and, as Eames suggested, is a last resort when the conventional doesn’t work. For those working in social innovation the “conventional” might not even exist as it deals with the new, the unexpected, the emergent and the complex. It is perhaps not surprising that the book Getting to Maybe: How the World is Changed is co-authored by an evaluator: Michael Quinn Patton.

While Patton has been prolific in advancing the concept of developmental evaluation, the term hasn't caught on in widespread practice. A look through the social innovation literature finds little mention of developmental evaluation, or even evaluation at all, lending support to the extrapolation made above. In my recent post on Zaid Hassan's book on social laboratories, one of my critique points was that there was much discussion about how these social labs "work" with relatively little mention of the evidence to support and clarify that claim.

One hypothesis is that evaluation can be seen as a 'buzzkill' to the buzzword. It's much easier, and certainly more fun, to claim you're changing the world than to interrogate one's activities and find that the change isn't as big or profound as one expected. Documenting change isn't perceived to be as fun as making change, although I would argue that one is fuel for the other.

Another hypothesis is that there is much misunderstanding about what evaluation is, with (anecdotally) many social innovators thinking that it's all about numbers and math and that it misses the essence of the human connections that social innovation is all about.

A third hypothesis is that there isn’t the evaluative thinking embedded in our discourse on change, innovation, and social movements that is aligned with the nature of systems and thus, people are stuck with models of evaluation that simply don’t fit the context of what they’re doing and therefore add little of the value that evaluation is meant to reveal.

If we value something, we need to articulate what that means if we want others to follow and value the same thing. That means going beyond lofty, motherhood statements that feel good — community building, relationships, social impact, "making a difference" — and articulating what they really mean. In doing so, we are better positioned to do more of what works well, change what doesn't, and create the culture of inquiry and curiosity that links our aspirations to our outcomes.

It means valuing what we say we value.

(As a small plug: want to learn more about this? The Evaluation for Social Innovation workshop takes this idea further and gives you ways to value, evaluate and communicate value. March 20, 2014 in Toronto).


complexity, design thinking, emergence, evaluation, systems science

Developmental Evaluation and Design

Creation for Reproduction

Innovation is about channeling new ideas into useful products and services, which is really about design. Thus, if developmental evaluation is about innovation, then it is also fundamental that those engaging in such work — on both evaluator and program ends — understand design. In this final post in this first series of Developmental Evaluation and.., we look at how design and design thinking fits with developmental evaluation and what the implications are for programs seeking to innovate.  

Design is a field of practice that encompasses professional domains, design thinking, and critical design approaches. It is a big field, a creative one, but also a space with much richness in thinking, methods and tools that can aid program evaluators and program operators.

Defining design

In their excellent article on designing for emergence (PDF), OCAD University's Greg Van Alstyne and Bob Logan introduce a definition they set out to make the shortest, most concise one they could envision:

Design is creation for reproduction

It may also be the best (among many — see the Making CENSE blog for others) because it speaks to what design does, what it is intended to do, and where it came from, all at the same time. A quick historical look at design finds that the term didn't really exist until the industrial revolution. It was not until we could produce things and replicate them on a wide scale that design actually mattered; prior to that, what we had was simply referred to as craft. One did not supplant the other. However, as societies transformed through migration, technology development and adoption, and shifts in political and economic systems that increased collective action and participation, we saw things — products, services, and ideas — primed for replication and distribution and thus, designed.

The products, services and ideas that succeeded tended to be better designed for such replication in that they struck a chord with an audience who wanted to further share and distribute that object. (This is not to say that all things replicated are of high quality or ethical value, just that they found the right purchase with an audience and were better designed for provoking that.)

In a complex system, emergence is the force that provokes the kind of replication that we see in Van Alstyne and Logan’s definition of design. With emergence, new patterns emerge from activity that coalesces around attractors and this is what produces novelty and new information for innovation.

A developmental evaluator is someone who creates mechanisms to capture data and channel it to program staff and clients, who can then make sense of it and choose to take actions that stabilize that new pattern of activity, amplify it, or — if it is not helpful — make adjustments to dampen it.
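As a toy sketch of that channeling step (the pattern names, judgments and actions are placeholders, not a validated framework), the routing logic might look like this:

```python
# Route each emergent pattern, once judged by the group, to a developmental
# action. Judgments and actions here stand in for whatever the sensemaking
# process actually produces.
def route_pattern(pattern: str, judgment: str) -> str:
    actions = {
        "helpful": f"amplify '{pattern}': add resources, repeat it, extend it",
        "harmful": f"dampen '{pattern}': adjust the conditions feeding it",
        "promising": f"stabilize '{pattern}': protect it and keep watching",
    }
    # Anything not yet judged is held for the next sensemaking session.
    return actions.get(judgment, f"hold '{pattern}' for further sensemaking")

print(route_pattern("peer-led intake", "helpful"))
```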

But how do we do this if we are not designing?

Developmental evaluation as design

A quote from Nobel Laureate Herbert Simon is apt when considering why the term design is appropriate for developmental evaluation:

“Everyone designs who devises courses of action aimed at changing existing situations into preferred ones”.

Developmental evaluation is about modification, adaptation and evolution in innovation (poetically speaking) using data as a provocation and guide for programs. One of the key features that makes developmental evaluation (DE) different from other forms of evaluation is the heavy emphasis on use of evaluation findings. No use, no DE.

But further, what separates DE from utilization-focused evaluation (PDF) is that the use of evaluation data is intended to foster development of the program, not just use. I've written about what development looks like in other posts. No development, no DE.

Returning to Herb Simon's quote, we see that the goal of DE is to provoke some discussion of development and thus change, so it could be argued that, at least at some level, DE is about design. That is a tepid assertion. A bolder one is that design is actually integral to development, and thus developmental design is what we ought to be striving for through our DE work. Developmental design is not only about evaluative thinking, but design thinking as well. It brings together the spirit of experimentation within complexity and the feedback systems of evaluation with a design sensibility for how to make sense of, pay attention to, and transform that information into a new product evolution (innovation).

This sounds great, but if you don't think about design then you're not thinking about innovating, and that means you're not really developing your program.

Ways of thinking about design and innovation

There are numerous examples of design processes and steps. Full coverage of them is beyond the scope of a single post and will be expounded on in future posts here and on the Making CENSE blog for tools. However, one approach to design (thinking) is highlighted below and is part of the constellation of approaches that we use at CENSE Research + Design:

The design and innovation cycle

Much of this process has been examined in the previous posts in this series; however, it is worth looking at it again.

Herbert Simon wrote about design as a problem forming (finding), framing and solving activity (PDF). Other authors, like IDEO's Tim Brown and the Kelley brothers, have written about design further (for more references check out CENSEMaking's library section), but essentially the three domains proposed by Simon hold up as ways to think about design at a very basic level.

What design does is make the process of stabilizing, amplifying or dampening the emergence of new information intentional. Without a sense of purpose — and a mindful attention to process — plus a sensemaking process put in place by DE, it is difficult to know what is advantageous and what is not. Within the realm of complexity we run the risk of amplifying and dampening the wrong things, or ignoring them altogether. This has immense consequences, as even staying still in a complex system is moving: change happens whether we want it or not.

The above diagram places evaluation near the end of the corkscrew process; however, that is a bit misleading, as it implies that DE-related activities come at the end. What is being argued here is that if the stage isn't set at the beginning by asking the big questions — the problem finding, forming and framing — then the efforts to 'solve' them are unlikely to succeed.

Without the means to understand how new information feeds into the design of the program, we end up serving data to programs that know little about what to do with it. One of the dangers in complexity is having too much information that we cannot make sense of. In complex scenarios we want to find simplicity where we can, not add more complexity.

To do this and to foster change is to be a designer. We need to consider the program/product/service user, the purpose, the vision, the resources and the processes that are in place within the systems we are working to create and re-create the very thing we are evaluating while we are evaluating it. In that entire chain we see the reason why developmental evaluators might also want to put on their black turtlenecks and become designers as well.

No, designers don't all look like this.


Photo Blueprint by Will Scullen used under Creative Commons License

Design and Innovation Process model by CENSE Research + Design

Lower image used under license from iStockphoto.

complexity, education & learning, emergence, evaluation, systems thinking

Developmental Evaluation and Mindfulness

Mindfulness in Motion?

Developmental evaluation is focused on real-time decision making for programs operating in complex, changing conditions, which can tax the attentional capacity of program staff and evaluators. Organizational mindfulness is a means of paying attention to what matters and building the capacity across the organization to better filter signals from noise.

Mindfulness is a means of introducing quiet to noisy environments; the kind that are often the focus of developmental evaluations. Like the image above, mindfulness involves remaining calm and centered while everything else is growing, crumbling and (perhaps) disembodied from all that is around it.

Mindfulness in Organizations and Evaluation

Mindfulness is the disciplined practice of paying attention. Bishop and colleagues (2004 – PDF), working in the clinical context, developed a two-component definition of mindfulness that focuses on 1) self-regulation of attention that is maintained on the immediate experience to enable pattern recognition (enhanced metacognition) and 2) an orientation to experience that is committed to and maintains an attitude of curiosity and openness to the present moment.

Mindfulness does not exist independent of the past; rather, it takes account of present actions in light of the path to the current context. As simple as it may sound, mindfulness is anything but easy, especially in complex settings with many sources of information. What this means for developmental evaluation is that there needs to be a method of capturing data relevant to the present moment, a sensemaking capacity to understand how that data fits within the overall context and system of the program, and a strategy for provoking curiosity about the data to shape innovation. Without attention, sensemaking or interest in exploring the data to innovate, there is little likelihood of much change, which is what design (the next step in DE) is all about.

Organizational mindfulness is a quality of social innovation that situates the organization's activities within a larger strategic frame that developmental evaluation supports. A mindful organization is grounded in a set of beliefs that guide its actions as lived through practice. Without some guiding, grounded models for action, an organization can go anywhere, and the data collected from a developmental evaluation has little context, as nearly anything can develop from that data. Yet organizations don't want just anything; they want the solutions that are best optimized for the current context.

Mindfulness for Innovation in Systems

Karl Weick has observed that high-reliability organizations are the way they are because of a mindful orientation. Weick and Karen Sutcliffe explored the concept of organizational mindfulness in greater detail and made the connection to systems thinking by emphasizing how a mindful orientation opens up the perceptual capabilities of an organization to see its systems differently. They describe a mindful orientation as one that redirects attention from the expected to the unexpected, away from what is comfortable, consistent, desired and agreed upon, and toward the areas that challenge all of that.

Weick and Sutcliffe suggest that organizational mindfulness has five core dimensions:

  1. Reluctance to simplify
  2. Sensitivity to operations
  3. Commitment to resilience
  4. Deference to expertise
  5. Preoccupation with failure

Ray, Baker and Plowman (2011) looked at how these qualities were represented in U.S. business schools, finding some evidence for their existence. However, this mindful orientation is still something novel, and its overlap with innovation output remains unverified. (This is also true for developmental evaluation itself, with few published studies illustrating that the fundamentals of developmental evaluation are applied.) Vogus and Sutcliffe (2012) took this further and encouraged more research and development in this area, partly because of the lack of detailed study of how it works in practice and partly due to an absence of organizational commitment to discovery and change over existing modes of thinking.

Among the principal reasons for a lack of evidence is that organizational mindfulness requires a substantive re-orientation towards developmental processes that include both evaluation and design. For all of the talk about learning organizations in industry, health, education and social services we see relatively few concrete examples of it in action. A mistake that many evaluators and program planners make is the assumption that the foundations for learning, attention and strategy are all in place before launching a developmental evaluation, which is very often not the case. Just as we do evaluability assessments to see if a program is ready for an evaluation we may wish to consider organizational mindfulness assessments to explore how ready an organization is to engage in a true developmental evaluation. 

Cultivating curiosity

What Weick and Sutcliffe's five-factor model of organizational mindfulness misses is the second part of the definition of mindfulness introduced at the beginning of this post: the part about curiosity. And while Weick and Sutcliffe speak about the challenging of assumptions in organizational mindfulness, these challenges aren't well reflected in the model.

Curiosity is a fundamental quality of mindfulness that is often overlooked (not just in organizational contexts). Arthur Zajonc, a physicist, educator and President of the Mind and Life Institute, writes and speaks about contemplative inquiry as a process of employing mindfulness for discovery about the world around us. Zajonc is a scientist and is motivated partly by a love and curiosity of both the inner and outer worlds we inhabit. His mindset — reflective of contemplative inquiry itself — is about an attention that is open and focused simultaneously.

Openness to new information and experience is one part; the focus that comes from experience and the need to draw in information to clarify intention and action is the second. These are the same patterns of movement that we see in complex systems (see the stitch image below), captured in the sensing-divergent-convergent model of design evident in the CENSE Research + Design innovation arrow model below that.

Stitch of Complexity

CENSE Corkscrew Innovation Discovery Arrow

By being better attuned to the systems (big and small) around us and curiously asking questions about it, we may find that the assumptions we hold are untrue or incomplete. By contemplating fully the moment-by-moment experience of our systems, patterns emerge that are often too weak to notice, but that may drive behaviour in a complex system. This emergence of weak signals is often what shifts systems.

Sensemaking, which we discussed in a previous post in this series, is a means of taking this information and using it to understand the system and the implications of these signals.

For organizations and evaluators the next step is determining whether or not they are willing (and capable) of doing something with the findings from this discovery and learning from a developmental evaluation, which will be covered in the next post in this series that looks at design.

References and Further Reading: 

Bishop, S. R., Lau, M., Shapiro, S., & Carlson, L. (2004). Mindfulness: A proposed operational definition. Clinical Psychology: Science and Practice, 11(3), 230–241.

Ray, J. L., Baker, L. T., & Plowman, D. A. (2011). Organizational mindfulness in business schools. Academy of Management Learning & Education, 10(2), 188–203.

Vogus, T. J., & Sutcliffe, K. M. (2012). Organizational mindfulness and mindful organizing: A reconciliation and path forward. Academy of Management Learning & Education, 11(4), 722–735.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (1999). Organizing for high reliability: Processes of collective mindfulness. In R. I. Sutton & B. M. Staw (Eds.), Research in Organizational Behavior (Vol. 21, pp. 81–123). Stanford, CA: JAI Press.

Weick, K.E. & Sutcliffe, K.M. (2007). Managing the unexpected. San Francisco, CA: Jossey-Bass.

Zajonc, A. (2009). Meditation as contemplative inquiry: When knowing becomes love. Barrington, MA: Lindisfarne Books.