Category: evaluation

design thinking, evaluation, social innovation

The Ecology of Innovation: Part 1 – Ideas

Innovation Ecology

There is a tendency, when looking at innovation, to focus on the end result of a process of creation rather than on one node in a larger body of activity. Yet when we expand our frame of reference to see these connections, innovation starts to look much more like an ecosystem than a simple outcome. This first post in a series examines innovation ecology from the vantage point of ideas.

Ideas are the kindling that fuels innovation. Without good ideas, bold ideas, and ideas that have the potential to move our thinking, actions and products further, we are left with the status quo: what is, rather than what might be.

What is often missed in the discussion of ideas is the backstory and the connections between thoughts that lead to the ideas that may eventually become an innovation*. Inattention to (or unawareness of) this backstory may be one reason why many people think they are uncreative or believe they have low innovation potential. Drawing greater attention to these connections, and framing them as part of an ecosystem, has the potential not only to free people from the tyranny of having to create the best ideas, but also to expose the wealth of knowledge generated in pursuit of those ideas.

Drawing Connections

Connections is the title of a book by science historian James Burke, drawing on his successful British science documentary series that first aired in the 1970s and was revived in the mid-1990s. The premise of the book and series is to show how ideas link to and build on one another to yield the scientific insights we see. By viewing ideas in a collective realm, we see how they can and do connect, weaving together a tapestry of knowledge that is far more than the sum of its parts.

Too often the celebration of innovation focuses on the parts – the products: the iPhone, the One World Futbol, the waterless toilet, the intermittent windshield wiper, or a process like the Lean system for quality improvement or the use of checklists in medical care. These are the ideas that survive.

The challenge with this perspective on ideas is that it appears to be all-or-nothing: either the idea is good and works or it is not and doesn’t work.

This way of thinking imposes judgement on the end result, yet it is strangely at odds with innovation itself. It is akin to judging flour, salt, sugar or butter to be bad because a baker’s cake didn’t turn out. Ideas are the building blocks – the DNA, if you will – of innovations. But, like DNA (and RNA), it is only in their ability to connect, form and multiply that we really see innovation yield true benefit at a system level. Just like the baker’s ingredient list, ideas can serve different purposes to different effects in different contexts; the key is knowing (or uncovering) what that looks like and learning what effect it has.

From ideas to ecologies

An alternative to the idea-as-product perspective is to view ideas as part of a wider system. This takes James Burke’s connections to a new level and views ideas as part of a symbiotic, interactive, dynamic set of relations. Just like the above example of DNA, there is a lot of perceived ‘junk’ in the collection that may have no obvious benefit, yet by its existence it enables the non-junk to reveal and produce its value.

This biological analogy can extend further to the realm of systems. The term ecosystem embodies this thinking:

ecosystem |ˈekōˌsistəm, ˈēkō-| {noun}

Ecology

a biological community of interacting organisms and their physical environment.

• (in general use) a complex network or interconnected system: Silicon Valley’s entrepreneurial ecosystem | the entire ecosystem of movie and video production will eventually go digital.

Within this perspective on biological systems is the concept of ecology:

ecology |iˈkäləjē| {noun}

1 the branch of biology that deals with the relations of organisms to one another and to their physical surroundings.

2 (also Ecology) the political movement that seeks to protect the environment, especially from pollution.

What is interesting about the definitions above, drawn from the Oxford English Dictionary, is that they focus on biology, the discipline where the concept was first explored and studied. The definition of biology used in the Wikipedia entry on the topic states:

Biology is a natural science concerned with the study of life and living organisms, including their structure, function, growth, evolution, distribution, and taxonomy.[1]

Biologists do not look at ecosystems, decide which animals, plants or environments are good or bad, and proceed to discount them; rather, they look at what each brings to the whole, its role and its relationships. Biology is not without evaluative elements: judgement is still applied to these ‘parts’ of the system, since certain species, environments and contexts are more or less beneficial than others for particular goals or actors/agents in the system, but that judgement is always contextualized.

Designing for better idea ecologies

Contextual learning is part of sustainable innovation. Unlike natural systems, which function according to hidden rules (“the laws of nature”) that govern ecosystems, human systems are created and intentional – designed. Many of these systems are designed poorly or with little thought to their implications, but because they are designed we can re-design them. Our political systems, social systems, living environments and workplaces are all examples of human systems. Even families are designed systems, given the social roles, hierarchies, expectations and membership ‘rules’ that they each follow.

If humans create designed systems, we can do the same for the innovation systems we form. Viewing ideas as part of an innovation ecosystem offers an opportunity to do more with what we create. Rather than leading a social-Darwinian push towards the ‘best’ ideas, an idea ecosystem creates space for ideas to be repurposed, built upon and revised over time. Thus, our brainstorming doesn’t have to end with whatever we come up with in the session (and may hate anyway); rather, it is ongoing.

This commitment to ongoing ideation, sensemaking and innovation (and the knowledge translation, exchange and integration) is what distinguishes a true innovation ecosystem from a good idea done well. In future posts, we’ll look at this concept of the ecosystem in more detail.

Brainstorming Folly

Tips and Tricks:

Consider recording your ideas and revisiting them over time. Scheduling a brief, regular moment to revisit your notebooks and content keeps ideas alive. Treat the effort of brainstorming and bringing people together as an investment that can yield returns over time, not just in a single moment. Shared Evernote notebooks, Google Docs, building (searchable) libraries of artifacts, or regularly revisiting project memos can be a simple, low-cost and high-yield way to draw on your collective intellectual investment.
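To make the ‘searchable library’ idea concrete, here is a minimal sketch (in Python) of one way to keep idea notes reviewable over time. It assumes ideas are stored as plain-text files in a single folder; the folder name, file format and search keyword are hypothetical choices for illustration, not a prescribed tool or workflow.

```python
# A minimal sketch of a searchable idea library. Assumes ideas are kept as
# plain-text notes (one .txt file per idea) in a single folder; the folder
# name and the example keyword below are hypothetical.
from pathlib import Path
from datetime import datetime

NOTES_DIR = Path("idea-notes")  # hypothetical folder of .txt idea notes

def load_notes(notes_dir: Path):
    """Read every note along with its last-modified date."""
    notes = []
    for path in sorted(notes_dir.glob("*.txt")):
        notes.append({
            "title": path.stem,
            "text": path.read_text(encoding="utf-8"),
            "modified": datetime.fromtimestamp(path.stat().st_mtime),
        })
    return notes

def search(notes, keyword: str):
    """Return notes whose text mentions the keyword (case-insensitive)."""
    keyword = keyword.lower()
    return [n for n in notes if keyword in n["text"].lower()]

if __name__ == "__main__":
    notes = load_notes(NOTES_DIR) if NOTES_DIR.exists() else []
    # A periodic review: surface any note touching on 'ecosystem'
    for note in search(notes, "ecosystem"):
        print(f"{note['modified']:%Y-%m-%d}  {note['title']}")
```

Run on a schedule (or alongside a calendar reminder), a small script like this turns a pile of old brainstorming notes into something that can be queried and revisited rather than forgotten.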

* An innovation for this purpose is a new idea realized for benefit.

Image Credits: Top: Evolving ecology of the book (mindmap) by Artefatica used under Creative Commons License from Flickr.

Bottom: Brainstorming Ideas from Tom Fishburne (Marketoonist) used under commercial license.

complexity, education & learning, evaluation, systems thinking

Developmental Evaluation: Questions and Qualities

Same thing, different colour or different thing?

Developmental evaluation, a form of real-time evaluation focused on innovation and complexity, is gaining interest and attention with funders, program developers, and social innovators. Yet its popularity is revealing fundamental misunderstandings and misuse of the term that, if left unquestioned, may threaten the advancement of this important approach as a tool to support innovation and resilience.

If you are operating in the social service, health promotion or innovation space it is quite possible that you’ve been hearing about developmental evaluation, an emerging approach to evaluation that is suited for programs operating in highly complex, dynamic conditions.

Developmental evaluation (DE) is an exciting advancement in evaluative and program design thinking because it links those two activities together and creates an ongoing conversation about innovation in real time to facilitate strategic learning about what programs do and how they can evolve wisely. Because it is rooted in traditional program evaluation theory and methods as well as complexity science, it takes a realist approach to evaluation, making it fit the thorny, complex, real-world situations that many programs find themselves inhabiting.

I ought to be excited at seeing DE brought up so often, yet I am often not. Why?

Building a better brand for developmental evaluation?

Alas, with rare exception, when I hear someone speak about the developmental evaluation they are involved in I fail to hear any of the indicator terms one would expect from such an evaluation. These include terms like:

  • Program adaptation
  • Complexity concepts like emergence, attractors, self-organization, and boundaries
  • Strategic learning
  • Surprise!
  • Co-development and design
  • Dialogue
  • System dynamics
  • Flexibility

DE is following the well-worn path laid by terms like systems thinking, which gets less useful every day as it comes to refer to any mode of thought that focuses on the bigger context of a program (the ‘system’, whatever that is; it’s never elaborated on), even when there is none of the structure, discipline, method or focus one would expect from true systems thinking. In other words, it’s thinking about a system without the effort of real systems thinking. Still, people see themselves as systems thinkers as a result.

I hear the term DE used more and more frequently in this cavalier manner, which I suspect reflects aspiration rather than reality.

This aspiration is likely about organizations wanting to be seen (by themselves and others) as innovative, adaptive and participative, and as true learning organizations. DE has the potential to support all of this, but accomplishing these things requires an enormous amount of commitment. It is not for the faint of heart, the rigid and inflexible, the traditionalists, or those who have little tolerance for risk.

Doing DE requires that you set up a system for collecting, sharing, sensemaking, and designing-with data. It means being willing to — and competent enough to know how to — adapt your evaluation design and your programs themselves in measured, appropriate ways.

DE is about discipline, not precision. Too often I see quests to get a beautiful, elegant design to fit the ‘social messes’ that are the programs under evaluation, only to do what Russell Ackoff calls “the wrong things, righter”, because a standard, rigid method is applied to a slippery, complex problem.

Maybe we need to build a better brand for DE.

Much ado about something

Why does this fuss about the way people use the term DE matter? Is this not some academic rant based on a sense of ‘preciousness’ of a term? Who cares what we call it?

This matters because the programs that use and can benefit from DE matter. If it’s just gathering some loose data, slapping it together, calling it an evaluation and knowing that nothing will ever be done with it, then maybe that’s OK (actually, it’s not OK either, but let’s pretend for the sake of the point). When real program decisions are made, jobs are kept or lost, communities are strengthened or weakened, and the energy and creative talents of those involved are put to the test because of evaluation and its products, the details matter a great deal.

If DE promises a means to critically, mindfully and thoroughly support learning and innovation, then it needs to keep that promise. But that promise can only be kept if what we call DE is not something else.

That ‘something else’ is often a form of utilization-focused evaluation, or perhaps participatory evaluation, or it might simply be a traditional evaluation model dressed up with words like ‘complexity’ and ‘innovation’ that carry no real meaning. (When was the last time you heard someone openly question what was meant by those terms?)

We take such terms as given and for granted, and we make enormous assumptions about what they mean that are not always supported. There is nothing wrong with any of these methods if they are appropriate, but too often I see mismatches between the problem and the evaluative thinking and practice tools used to address it. DE is new, sexy and a sure sign of innovation to some, which is why it is often picked.

Yet it’s like saying “I need a 3-D printer” instead of reaching for a wrench when you’re looking to fix a pipe on your sink, because the printer is the latest tool innovation and wrenches are “last year’s” tool. It makes no sense. Yet it’s done all the time.

Qualities and qualifications

There is something alluring about the mysterious. Innovation, design and systems thinking all have elements of mystery to them, which allows for obfuscation, confusion and well-intentioned errors in judgement depending on who and what is being discussed in relation to those terms.

I’ve started seeing recent university graduates claiming to be developmental evaluators who have almost no grounding in complexity or service design and have completed just a single course in program evaluation. I’m seeing traditional organizations recruit and hire for developmental evaluation without making any adjustments to their expectations, modes of operating, or timelines from the status quo, and still expecting results that could only come from DE. It’s as I’ve written before, and as Winston Churchill once said:

I am always ready to learn, but I don’t always like being taught

Many programs are not even primed to learn, let alone be taught.

So what should someone look for in DE and those who practice it? What are some questions that those seeking DE support should ask of themselves?

Of evaluators

  • What familiarity and experience do you have with complexity theory and science? What is your understanding of these domains?
  • What experience do you have with service design and design thinking?
  • What kind of evaluation methods and approaches have you used in the past? Are you comfortable with mixed-methods?
  • What is your understanding of the concepts of knowledge integration and sensemaking? And how have you supported others in using these concepts in your career?
  • What is your education, experience and professional qualifications in evaluation?
  • Do you have skills in group facilitation?
  • How open and willing are you to support learning, adapt, and change your own practice and evaluation designs to suit emerging patterns from the DE?

Of programs

  • Are you (we) prepared to alter our normal course of operations in support of the learning process that might emerge from a DE?
  • How comfortable are we with uncertainty? Unpredictability? Risk?
  • Are the timelines and boundaries we place on the DE flexible and negotiable?
  • What kind of experience do we have with truly learning, and are we prepared to create a culture around the evaluation that is open to learning? (This means tolerance of ambiguity, failure, surprise, and new perspectives.)
  • Do we have practices in place that allow us to be mindful and aware of what is going on regularly (as opposed to every six months to a year)?
  • How willing are we to work with the developmental evaluator to learn, adapt and design our programs?
  • Are our funders/partners/sponsors/stakeholders willing to come with us on our journey?

Of both evaluators and program stakeholders

  • Are we willing to be open about our fears, concerns, ideas and aspirations with ourselves and each other?
  • Are we willing to work through data that is potentially ambiguous, contradictory, confusing, time-sensitive, context-sensitive and incomplete in capturing the entire system?
  • Are we willing/able to bring others into the journey as we go?

DE is not a magic bullet, but it can be a very powerful ally to programs that are operating in domains of high complexity and require innovation to adapt, thrive and build resilience. It is an important job and a very formidable challenge, with great potential benefits to those willing to dive into it competently. It is for these reasons that it is worth doing and doing well.

Getting there means taking seriously both DE and the demands it puts on us: the requirements for all involved, and the need to be clear in our language, lest we let the not-good-enough be the enemy of the great.

 

Photo credit: Highline Chairs by the author

behaviour change, evaluation, innovation

Beyond the Big and New: Innovating on Quality

The newest, biggest, shiny thing

Innovation is a term commonly associated with ‘new’ and sparkly products and things, but that quest for the bigger and shinier in what we do often obscures the true innovative potential within systems. Rethinking what we mean by innovation and considering the role that quality plays might help us determine whether bigger and glossier is just that, rather than necessarily better.

Einstein’s oft-paraphrased line about new thinking and problems goes something like this:

“Problems cannot be solved with the same mind set that created them.”

In complex conditions, this quest for novel thinking is not just ideal, it’s necessary. However genuine, this quest for the new idea and the new thing draws heavily upon widely shared human fears of the unknown, and it is also framed within a context of Western values. Not all cultures revere the new over what came before it, but in the Western world the ‘new’ has become celebrated, and nowhere more so than through the word innovation.

Innovation: What’s in a word?

Innovation web

A look at some of the terms associated with innovation (above) finds an emphasis on discovery and design, which can imply a positive sense of wonder and control to those with Westernized sentiments. Indeed, a survey of the landscape of actors, services and products seeking to make positive change in the world finds innovation everywhere and an almost obsessive quest for ideas. What is less attended to is providing a space for these ideas to take flight and answer meaningful, not trivial, questions in an impactful way.

Going Digital Strategy by Tom Fishburne

I recently attended an event with Zaid Hassan speaking on Social Labs and his new book on the subject. While there was much interest in the way a social lab engages citizens in generating new ideas, I was pleased to hear Hassan emphasize that the energy of a successful lab must be directed at putting ideas into practice rather than just generating them.

Another key point of discussion was the overall challenge of going deep into something and the costs of doing that. This last point got me thinking about the way we frame innovation and what is privileged in that discussion.

Innovating beyond the new

Sometimes innovation takes place not only in building new products and services, but in thinking new thoughts, and seeing new possibilities.

Thinking new thoughts requires asking new or better questions of what is happening. As for seeing new possibilities, that might mean looking at things long forgotten and at past practices to inform new practice, not just coming up with something novel. Ideas are sexy and fun and generate excitement, yet it is the realization of these ideas that matters more than anything.

The ‘new’ idea might actually be an old one, rethought and re-purposed. The reality for politicians and funders is often confined to equating ‘new’ things with action and work. Yet re-purposing knowledge and products, re-thinking, or simply developing ideas in an evolutionary manner are harder to see and less sexy to sell to donors and voters.

When new means better, not necessarily bigger

Much of the social innovation sector is consumed by, if not obsessed with, scale. The Stanford Social Innovation Review, the key journal for the burgeoning field, is filled with articles, events and blog posts that emphasize the need for scaling social innovations. Scaling, in nearly all of these contexts, means taking an idea to more places to serve more people. The goal of taking a constructive idea that, when realized, benefits as many people as possible is hard to argue against; however, it is predicated on a number of assumptions about the intervention, population of focus, context, resource allocations, and political and social acceptability of what is proposed, and these assumptions are often not aligned.

What is bothersome is that there is nowhere near the same concern for quality in these discussions. In public health we often speak of intervention fidelity, intensity, duration, reach, fit and outcome, particularly with those initiatives that have a social component. In this context, low-quality information poses a real threat in some circumstances, because it can lead someone to make a poorly informed or misled choice. We don’t seem to see that same care and attention in other areas of social innovation. Sometimes that is because there is no absolute level of quality to judge against, or because the benefits of greater quality are imperceptibly small.

But I suspect that this is a case of not asking the question about quality in the first place. Apple under Steve Jobs was famous for creating “insanely great” products and using a specific language to back that up. We don’t talk like that in social innovation and I wonder what would happen if we did.

Would we pay more attention to showing impact than just talking about it?

Would we design more with people than for them?

Would we be bolder in our experiments?

Would we be less quick to use knee-jerk dictums around scale and speak of depth of experience and real change?

Would we put resources into evaluation, sensemaking and knowledge translation so we could adequately share our learning with others?

Would we be less hyperbolic and sexy?

Might we be more relevant to more people, more often and (ironically, perhaps) scale social innovation beyond measure?

 

 

Marketoonist Cartoon used under license.

 

 

 

complexity, emergence, evaluation, innovation

Do you value (social) innovation?

Do You Value the Box or What’s In It?

The term evaluation has at its root the term value, and to evaluate innovation means to assess the value that it brings in its product or its process of development. It’s remarkable how much discourse there is on the topic of innovation that is devoid of discussion of evaluation, which raises the question: do we value innovation in the first place?

The question posed above is not a cheeky one. The question about whether or not we value innovation gets at the heart of our insatiable quest for all things innovative.

Historical trends

A look at Google N-gram data for book citations provides a historical picture of how commonly a particular word shows up in books published since 1880. Running the terms innovation, social innovation and evaluation through the N-gram software finds some curious trends. A look at the graphs below finds that the term innovation spiked after the Second World War. A closer look reveals a second major spike from the mid-1990s onward, which is likely due to the rise of the Internet.

In both cases, technology played a big role in shaping the interest in innovation and its discussion. The rise of the Cold War in the 1950s and the rise of the Internet both presented new problems to find and a need for those problems to be addressed.

[Google Books N-gram charts for “innovation”, “social innovation” and “evaluation”]

Below that is social innovation, a newer concept (although not as new as many think), which showed a peak in citations in the 1960s and ’70s, corresponding with the U.S. civil rights movements, the expansion of social service fields like social work and community mental health, anti-nuclear organizing, and the environmental movement. This two-decade rise was followed by a sharp decline until the early 2000s, when things began to increase again.

Evaluation, however, saw the most sustained increase of the three terms over the 20th century, yet it has been in decline ever since 1982. Most notable is the even sharper decline when both innovation and social innovation spiked.
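For readers who want to explore these trends themselves, here is a minimal sketch of how the three series could be re-plotted for comparison. It assumes the yearly relative frequencies have already been exported from the Google Books Ngram Viewer into a local CSV; the file name and column names below are hypothetical placeholders rather than a documented export format.

```python
# A minimal sketch for re-plotting word-frequency trends, assuming a local CSV
# with columns: year, innovation, social innovation, evaluation (relative
# frequencies). The file name and column layout are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ngram_frequencies.csv")  # hypothetical export

fig, ax = plt.subplots(figsize=(8, 4))
for term in ["innovation", "social innovation", "evaluation"]:
    # Each series is the term's share of all words published that year
    ax.plot(df["year"], df[term], label=term)

ax.set_xlabel("Year of publication")
ax.set_ylabel("Relative frequency in books")
ax.set_title("Word frequency trends, 1880 onward")
ax.legend()
plt.tight_layout()
plt.show()
```

Putting the three series on a single axis makes the divergence described above easier to see than three separate charts.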

Keeping in mind that this is not causal or even linked data, it is still worth asking: What’s going on? 

The value of evaluation

Let’s look at what the heart of evaluation is all about: value. The Oxford English Dictionary defines value as:

value |ˈvalyo͞o|

noun

1 the regard that something is held to deserve; the importance, worth, or usefulness of something: your support is of great value.

• the material or monetary worth of something: prints seldom rise in value | equipment is included up to a total value of $500.

• the worth of something compared to the price paid or asked for it: at $12.50 the book is a good value.

2 (values) a person’s principles or standards of behavior; one’s judgment of what is important in life: they internalize their parents’ rules and values.

verb (values, valuing, valued) [ with obj. ]

1 estimate the monetary worth of (something): his estate was valued at $45,000.

2 consider (someone or something) to be important or beneficial; have a high opinion of: she had come to value her privacy and independence.

Innovation is a buzzword. It is hard to find many organizations that do not see themselves as innovative or use the term to describe themselves in some part of their mission, vision or strategic planning documents. A search on bookseller Amazon.com finds more than 63,000 titles organized under “innovation”.

So it seems we like to talk about innovation a great deal; we just don’t like to talk about what it actually does for us (at least in the same measure). Perhaps if we did, we might have to confront what designer Charles Eames said:

Innovate as a last resort. More horrors are done in the name of innovation than any other.

At the same time I would like to draw inspiration from another of Eames’ quotes:

Most people aren’t trained to want to face the process of re-understanding a subject they already know. One must obtain not just literacy, but deep involvement and re-understanding.

Valuing innovation

Innovation is easier to say than to do and, as Eames suggested, is a last resort when the conventional doesn’t work. For those working in social innovation the “conventional” might not even exist as it deals with the new, the unexpected, the emergent and the complex. It is perhaps not surprising that the book Getting to Maybe: How the World is Changed is co-authored by an evaluator: Michael Quinn Patton.

While Patton has been prolific in advancing the concept of developmental evaluation, the term hasn’t caught on in widespread practice. A look through the social innovation literature finds little mention of developmental evaluation, or even evaluation at all, lending support to the extrapolation made above. In my recent post on Zaid Hassan’s book on social laboratories, one of my critique points was that there was much discussion about how these social labs “work” with relatively little mention of the evidence to support and clarify that claim.

One hypothesis is that evaluation can be seen as a ‘buzzkill’ to the buzzword. It’s much easier, and certainly more fun, to claim you’re changing the world than to interrogate one’s activities and find that the change isn’t as big or profound as one expected. Documenting change isn’t perceived to be as fun as making change, although I would argue that one is fuel for the other.

Another hypothesis is that there is much misunderstanding about what evaluation is, with (anecdotally) many social innovators thinking that it’s all about numbers and math and that it misses the essence of the human connections that social innovation is all about.

A third hypothesis is that our discourse on change, innovation, and social movements lacks evaluative thinking that is aligned with the nature of systems; people are thus stuck with models of evaluation that simply don’t fit the context of what they’re doing and therefore add little of the value that evaluation is meant to reveal.

If we value something, we need to articulate what that means if we want others to follow and value the same thing. That means going beyond lofty, motherhood statements that feel good – community building, relationships, social impact, “making a difference” – and articulating what they really mean. In doing so, we are better positioned to do more of what works well, change what doesn’t, and create the culture of inquiry and curiosity that links our aspirations to our outcomes.

It means valuing what we say we value.

(As a small plug: want to learn more about this? The Evaluation for Social Innovation workshop takes this idea further and gives you ways to value, evaluate and communicate value. March 20, 2014 in Toronto).

 

 

complexity, design thinking, emergence, evaluation, systems science

Developmental Evaluation and Design

Creation for Reproduction

 

Innovation is about channeling new ideas into useful products and services, which is really about design. Thus, if developmental evaluation is about innovation, then it is also fundamental that those engaging in such work – on both the evaluator and program ends – understand design. In this final post of the first Developmental Evaluation and… series, we look at how design and design thinking fit with developmental evaluation and what the implications are for programs seeking to innovate.

Design is a field of practice that encompasses professional domains, design thinking, and critical design approaches. It is a big field, and a creative one, but also a space rich in thinking, methods and tools that can aid program evaluators and program operators.

Defining design

In their excellent article on designing for emergence (PDF), OCAD University’s Greg Van Alstyne and Bob Logan introduce a definition that they set out to make the shortest, most concise one they could envision:

Design is creation for reproduction

It may also be the best (among many – see the Making CENSE blog for others) because it speaks to what design does, what it is intended to do, and where it came from, all at the same time. A quick historical look at design finds that the term didn’t really exist until the industrial revolution. It was not until we could produce things and replicate them on a wide scale that design actually mattered; prior to that, what we had was simply referred to as craft. One did not supplant the other. However, as societies transformed through migration, technology development and adoption, and shifting political and economic systems that increased collective action and participation, we saw things – products, services, and ideas – primed for replication and distribution and thus, designed.

The products, services and ideas that succeeded tended to be better designed for such replication in that they struck a chord with an audience who wanted to further share and distribute them. (This is not to say that all things replicated are of high quality or ethical value, just that they found the right purchase with an audience and were better designed to provoke that.)

In a complex system, emergence is the force that provokes the kind of replication that we see in Van Alstyne and Logan’s definition of design. With emergence, new patterns emerge from activity that coalesces around attractors and this is what produces novelty and new information for innovation.

A developmental evaluator is someone who creates mechanisms to capture data and channel it to program staff and clients, who can then make sense of it and choose to take actions that stabilize that new pattern of activity in whatever manner possible, amplify it, or – if it is not helpful – make adjustments to dampen it.

But how do we do this if we are not designing?

Developmental evaluation as design

A quote from Nobel Laureate Herbert Simon is apt when considering why the term design is appropriate for developmental evaluation:

“Everyone designs who devises courses of action aimed at changing existing situations into preferred ones”.

Developmental evaluation is about modification, adaptation and evolution in innovation (poetically speaking) using data as a provocation and guide for programs. One of the key features that makes developmental evaluation (DE) different from other forms of evaluation is the heavy emphasis on use of evaluation findings. No use, no DE.

But further, what separates DE from utilization-focused evaluation (PDF) is that the use of evaluation data is intended to foster development of the program, not just use. I’ve written about this in explaining what development looks like in other posts. No development, no DE.

Returning to Herb Simon’s quote, we see that the goal of DE is to provoke some discussion of development and thus change, so it could be argued that, at least at some level, DE is about design. That is a tepid assertion. A bolder one is that design is actually integral to development and thus developmental design is what we ought to be striving for through our DE work. Developmental design is not only about evaluative thinking, but design thinking as well. It brings together the spirit of experimentation within complexity and the feedback systems of evaluation with a design sensibility around how to make sense of, pay attention to, and transform that information into a new evolution of the product (innovation).

This sounds great, but if you don’t think about design then you’re not thinking about innovating, and that means you’re not really developing your program.

Ways of thinking about design and innovation

There are numerous examples of design processes and steps. Full coverage of them is beyond the scope of a single post and will be expounded on in future posts here and on the Making CENSE blog for tools. However, one approach to design (thinking) is highlighted below and is part of the constellation of approaches that we use at CENSE Research + Design:

The design and innovation cycle

Much of this process has been examined in the previous posts in this series; however, it is worth looking at it again.

Herbert Simon wrote about design as a problem forming (finding), framing and solving activity (PDF). Other authors, like IDEO’s Tim Brown and the Kelley brothers, have written about design further (for more references check out CENSEMaking’s library section), but essentially the three domains proposed by Simon hold up as ways to think about design at a very basic level.

What design does is make the process of stabilizing, amplifying or dampening the emergence of new information an intentional one. Without a sense of purpose (and mindful attention to process), and without a sensemaking process put in place by DE, it is difficult to know what is advantageous and what is not. Within the realm of complexity we run the risk of amplifying and dampening the wrong things… or ignoring them altogether. This has immense consequences, as even staying still in a complex system is moving: change happens whether we want it or not.

The above diagram places evaluation near the end of the corkscrew process; however, that is a bit misleading, as it implies that DE-related activities come at the end. What is being argued here is that if the stage isn’t set at the beginning by asking the big questions – the problem finding, forming and framing – then the efforts to ‘solve’ them are unlikely to succeed.

Without the means to understand how new information feeds into the design of the program, we end up serving data to programs that know little about what to do with it; one of the dangers in complexity is having too much information that we cannot make sense of. In complex scenarios we want to find simplicity where we can, not add more complexity.

To do this and to foster change is to be a designer. We need to consider the program/product/service user, the purpose, the vision, the resources and the processes that are in place within the systems we are working in, to create and re-create the very thing we are evaluating while we are evaluating it. In that entire chain we see the reason why developmental evaluators might also want to put on their black turtlenecks and become designers as well.

No, designers don’t all look like this.

 

Photo Blueprint by Will Scullen used under Creative Commons License

Design and Innovation Process model by CENSE Research + Design

Lower image used under license from iStockphoto.

complexity, education & learning, emergence, evaluation, systems thinking

Developmental Evaluation and Mindfulness

Mindfulness in Motion?

Developmental evaluation is focused on real-time decision making for programs operating in complex, changing conditions, which can tax the attentional capacity of program staff and evaluators. Organizational mindfulness is a means of paying attention to what matters and building the capacity across the organization to better filter signals from noise.

Mindfulness is a means of introducing quiet into noisy environments, the kind that are often the focus of developmental evaluations. Like the image above, mindfulness involves remaining calm and centered while everything else is growing, crumbling and (perhaps) disembodied from all that is around it.

Mindfulness in Organizations and Evaluation

Mindfulness is the disciplined practice of paying attention. Bishop and colleagues (2004 – PDF), working in the clinical context, developed a two-component definition of mindfulness that focuses on 1) self-regulation of attention so that it is maintained on the immediate experience, enabling pattern recognition (enhanced metacognition), and 2) an orientation to experience that is committed to, and maintains, an attitude of curiosity and openness to the present moment.

Mindfulness does not exist independent of the past; rather, it takes account of present actions in light of the path to the current context. As simple as it may sound, mindfulness is anything but easy, especially in complex settings with many sources of information. What this means for developmental evaluation is that there needs to be a method of capturing data relevant to the present moment, a sensemaking capacity to understand how that data fits within the overall context and system of the program, and a strategy for provoking curiosity about the data to shape innovation. Without attention, sensemaking or interest in exploring the data to innovate, there is little likelihood that there will be much change, which is what design (the next step in DE) is all about.

Organizational mindfulness is a quality of social innovation that situates the organization’s activities within a larger strategic frame that developmental evaluation supports. A mindful organization is grounded in a set of beliefs that guide its actions as lived through practice. Without some guiding, grounded models for action, an organization can go anywhere, and the data collected from a developmental evaluation has little context because nearly anything can develop from that data. Yet organizations don’t want just anything; they want the solutions that are best optimized for the current context.

Mindfulness for Innovation in Systems

Karl Weick has observed that high-reliability organizations are the way they are because of a mindful orientation. Weick and Karen Sutcliffe explored the concept of organizational mindfulness in greater detail and made the connection to systems thinking by emphasizing how a mindful orientation opens up the perceptual capabilities of an organization to see its systems differently. They describe a mindful orientation as one that redirects attention from the expected to the unexpected, and from what is comfortable, consistent, desired and agreed upon to the areas that challenge all of that.

Weick and Sutcliffe suggest that organizational mindfulness has five core dimensions:

  1. Reluctance to simplify
  2. Sensitivity to operations
  3. Commitment to resilience
  4. Deference to expertise
  5. Preoccupation with failure

Ray, Baker and Plowman (2011) looked at how these qualities were represented in U.S. business schools, finding some evidence for their existence. However, this mindful orientation is still something novel, and its overlap with innovation output remains unverified. (This is also true for developmental evaluation itself, with few published studies illustrating that the fundamentals of developmental evaluation are applied.) Vogus and Sutcliffe (2012) took this further and encouraged more research and development in this area, in part because of the lack of detailed study of how it works in practice, which is partly due to an absence of organizational commitment to discovery and change rather than just existing modes of thinking.

Among the principal reasons for a lack of evidence is that organizational mindfulness requires a substantive re-orientation towards developmental processes that include both evaluation and design. For all of the talk about learning organizations in industry, health, education and social services, we see relatively few concrete examples of them in action. A mistake that many evaluators and program planners make is assuming that the foundations for learning, attention and strategy are all in place before launching a developmental evaluation, which is very often not the case. Just as we do evaluability assessments to see if a program is ready for an evaluation, we may wish to consider organizational mindfulness assessments to explore how ready an organization is to engage in a true developmental evaluation.

Cultivating curiosity

What Weick and Sutcliffe’s five-factor model of organizational mindfulness misses is the second part of the definition of mindfulness introduced at the beginning of this post: the part about curiosity. And while Weick and Sutcliffe speak about the challenging of assumptions in organizational mindfulness, those challenges aren’t well reflected in the model.

Curiosity is a fundamental quality of mindfulness that is often overlooked (not just in organizational contexts). Arthur Zajonc, a physicist, educator and President of the Mind and Life Institute, writes and speaks about contemplative inquiry as a process of employing mindfulness for discovery about the world around us. Zajonc is a scientist and is motivated partly by a love of, and curiosity about, both the inner and outer worlds we inhabit. His mindset, reflective of contemplative inquiry itself, is about attention that is open and focused at the same time.

Openness to new information and experience is one part; focus, which comes from experience and the need to draw in information to clarify intention and action, is the second. These are the same kinds of patterns of movement that we see in complex systems (see the stitch image below), and they are captured in the sensing-divergent-convergent model of design evident in the CENSE Research + Design innovation arrow model below that.

Stitch of Complexity

CENSE Corkscrew Innovation Discovery Arrow

By being better attuned to the systems (big and small) around us and curiously asking questions about them, we may find that the assumptions we hold are untrue or incomplete. By contemplating fully the moment-by-moment experience of our systems, patterns emerge that are often too weak to notice, but that may drive behaviour in a complex system. This emergence of weak signals is often what shifts systems.

Sensemaking, which we discussed in a previous post in this series, is a means of taking this information and using it to understand the system and the implications of these signals.

For organizations and evaluators, the next step is determining whether or not they are willing and able to do something with the findings from this discovery and learning from a developmental evaluation, which will be covered in the next post in this series, on design.

 

References and Further Reading: 

Bishop, S. R., Lau, M., Shapiro, S., & Carlson, L. (2004). Mindfulness: A proposed operational definition. Clinical Psychology: Science and Practice, 11(3), 230–241.

Ray, J. L., Baker, L. T., & Plowman, D. A. (2011). Organizational mindfulness in business schools. Academy of Management Learning & Education, 10(2), 188–203.

Vogus, T. J., & Sutcliffe, K. M. (2012). Organizational mindfulness and mindful organizing: A reconciliation and path forward. Academy of Management Learning & Education, 11(4), 722–735.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (1999). Organizing for high reliability: Processes of collective mindfulness. In R. S. Sutton & B. M. Staw (Eds.), Research in Organizational Behavior (Vol. 21, pp. 81–123). Stanford, CA: JAI Press.

Weick, K.E. & Sutcliffe, K.M. (2007). Managing the unexpected. San Francisco, CA: Jossey-Bass.

Zajonc, A. (2009). Meditation as contemplative inquiry: When knowing becomes love. Barrington, MA: Lindisfarne Books.

complexity, design thinking, emergence, evaluation, systems thinking

Developmental Evaluation and Sensemaking

Sensing Patterns, Seeing Pathways

Developmental evaluation is only as good as the sense that can be made from the data that is received. To assume that program staff and evaluators know how to do this might be one of the reasons developmental evaluations end up as something less than they promise. 

Developmental Evaluation (DE) is becoming a popular subject in the evaluation world. As we see greater recognition of complexity as a factor in program planning and operations, and of what it means for evaluations, it is safe to assume that developmental evaluation will continue to attract interest from program staff and evaluation professionals alike.

Yet developmental evaluation is as much a mindset as it is a toolset and skillset, all of which are needed to do it well. In this third in a series of posts on developmental evaluation, we look at the concept of sensemaking and its role in understanding program data in a DE context.

The architecture of signals and sense

Sensemaking and developmental evaluation involve creating an architecture for knowledge, framing the space for emergence and learning (boundary specification), extracting the shapes and patterns of what lies within that space, and then working to understand the meaning behind those patterns and their significance for the program under investigation. A developmental evaluation with a sensemaking component creates a plan for how to look at a program and learn from the data that is generated, in light of what has been done and what is to be done next.

Patterns may be knowledge, behaviour, attitudes, policies, physical structures, organizational structures, networks, financial incentives or regulations. These are the kinds of activities that are likely to create or serve as attractors within a complex system.

To illustrate, architecture can be both a literal and a figurative term. In a five-year evaluation and study of scientific collaboration at the University of British Columbia’s Life Sciences Institute, my colleagues Tim Huerta, Alison Buchan and Sharon Mortimer and I explored many of these multidimensional aspects of the program*/institution and published our findings in the American Journal of Evaluation and Research Evaluation. We looked at spatial configurations by doing proximity measurements that connected where people work to whom they work with and what they generated. Research has indicated that physical proximity makes a difference to collaboration (e.g., Kraut et al., 2002). There is relatively little concrete evaluation of the role of space in collaboration, mostly just inferences from network studies (which we also conducted); few have actually gone into the physical areas and measured distances and people’s locations.

Why mention this? Because from a sense-making perspective, the signals provided by the building itself had an enormous impact on the psychology of the collaborations, even if they were only a minor influence on productivity. The architecture of the networks themselves was also a key variable that went beyond simple exchanges of information, and without seeing collaborations as networks it is possible that we would never have understood why certain activities produced outcomes and others did not.
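To make the kind of proximity measurement described above more tangible, here is a deliberately simple, hypothetical sketch. The office coordinates and collaboration ties are invented for illustration; this is not the instrument or analysis used in the Life Sciences Institute study, only a toy comparison of physical distance between collaborating and non-collaborating pairs.

```python
# A toy sketch comparing physical distance between collaborating and
# non-collaborating pairs of researchers. All names, coordinates and ties
# below are hypothetical.
from itertools import combinations
from math import dist

# Hypothetical office locations (x, y in metres on a floor plan)
offices = {
    "researcher_a": (0.0, 0.0),
    "researcher_b": (4.0, 3.0),
    "researcher_c": (40.0, 25.0),
    "researcher_d": (42.0, 28.0),
}

# Hypothetical collaboration ties (e.g., co-authored papers or shared grants)
collaborations = {("researcher_a", "researcher_b"), ("researcher_c", "researcher_d")}

def mean(values):
    values = list(values)
    return sum(values) / len(values) if values else float("nan")

# Distance between every pair of offices, keyed by a sorted name pair
pair_distances = {
    tuple(sorted(pair)): dist(offices[pair[0]], offices[pair[1]])
    for pair in combinations(offices, 2)
}

collab_dist = mean(d for pair, d in pair_distances.items() if pair in collaborations)
other_dist = mean(d for pair, d in pair_distances.items() if pair not in collaborations)

print(f"Mean distance, collaborating pairs:     {collab_dist:.1f} m")
print(f"Mean distance, non-collaborating pairs: {other_dist:.1f} m")
```

Even a crude comparison like this makes the point that spatial signals can be recorded and examined alongside network data, rather than inferred after the fact.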

The same thing exists with cognitive architecture: it is the spatial organization of thoughts, ideas, and social constructions. Organizational charts, culture, policies, and regulations all share in the creation of the cognitive architecture of a program.

Signals and noise

The key is to determine what kinds of signals to pay attention to at the beginning. As mentioned in a previous post, design and design thinking are a good precursor and adjunct to an evaluation process (and, as I’ve argued before and will elaborate on, are integral to effective developmental evaluation). Patterns could be in almost anything and made up of physical, psychological, social and ‘atmospheric’ (organizational and societal environment) data.

This might sound a bit esoteric, but by viewing these different domains through an eye of curiosity, we can see patterns that evaluators can measure, monitor, observe and otherwise record to use as substance for programs to base decisions on. This can be qualitative, quantitative, mixed-methods, archival and document-based, or some combination. Complex programs are highly context-sensitive, so the sense-making process must include diverse stakeholders that reflect the very conditions in which the data is collected. Thus, if we are drawing on front-line workers’ data, then front-line workers need to be involved.

The manner in which this is done can be more or less participatory and involved depending on resources, constraints, values and so forth, but there needs to be some perspective taking from these diverse agents to truly know what to pay attention to and determine what is a signal and what is noise. Indeed, it is through this exchange of diverse perspectives that this can be ascertained. For example, a front line worker with a systems perspective may see a pattern in data that is unintelligible to a high-level manager if given the opportunity to look at it. That is what sensemaking can look like in the context of developmental evaluation.

“What does that even mean?” 

Sensemaking is essentially the meaning that people give to an experience. Evidence is a part of the sensemaking process, although the manner in which it is used is consistent with a realist approach to science, not a positivist one. Context is critical in the making of sense and in the decisions used to act on information gathered from the evaluation. The specific details of the sensemaking process and its key methods are beyond the depth of this post; some key sources and scholars on this topic are listed below. Like developmental evaluation itself, sensemaking is an organic process that brings an element of design, design thinking, strategy and data analytics together in one space. It brings together analysis and synthesis.

From a DE perspective, sensemaking is about understanding what signals and patterns mean within the context of the program and its goals. Even if a program’s goals are broad, there must be some sense of what the program’s purpose is and thus, strategy is a key ingredient to the process of making sense of data. If there is no clearly articulated purpose for the program or a sense of its direction then sensemaking is not going to be a fruitful exercise. Thus, it is nearly impossible to disentangle sensemaking from strategy.

Understanding the system in which the strategy and ideas are to take place — framing — is also critical. An appropriate frame for the program means setting bounds for the system, connecting that to values, goals, desires and hypotheses about outcomes, and the current program context and resources.

Practical sensemaking takes place on a time scale that is appropriate to the complexity of the information that sits before the participants in the process. If a sensemaking initiative is done with a complex program that has a rich history and many players involved in that history, it is likely that multiple interactions and engagements with participants will be needed. This is in part because the sensemaking process is about surfacing assumptions, revisiting the stated objectives of the program, exploring data in light of those assumptions and goals, and then synthesizing it all to create some means of guiding future action. In some ways, this is about using hindsight and present sight to generate foresight.

Sensemaking is not just about meaning-making; it is also a key step in change-making for future activities. Sensemaking realizes one of the key aspects of complex systems: that meaning is made in the interactions between things more than in the things themselves.

Building the plane while flying it

In some cases the sense made from data and experience can only be made in the moment. Developmental evaluation has been called “real time” evaluation by some to reflect the notion that evaluation data is made sense of as the program unfolds. To draw on a metaphor illustrated in the video below, sensemaking in developmental evaluation is somewhat like building the plane while flying it.

Like developmental evaluation as a whole, sensemaking isn’t a “one-off” event; rather, it is an ongoing process that requires attention throughout the life-cycle of the evaluation. As the evaluator and evaluation team build capacity for sensemaking, the process gets easier and less involved each time it’s done, as the program builds its connection to both its past and present context. However, such connections are tenuous without a larger focus on building mindfulness into the program – whether an organization or a network – to ensure that reflection and attention are paid to activities on an ongoing basis, consistent with strategy, complexity and the evaluation itself.

We will look at the role of mindfulness in an upcoming post. Stay tuned.

* The Life Sciences Institute represented a highly complicated program evaluation because it was simultaneously bounded as a physical building, a corporate institution within a larger institution, and a set of collaborative structures that were further complicated by having investigator-led initiatives combined with institutional-level ones, where individual investigators were both independent and collaborative. Taken together, this is what was considered the ‘program’.
References & Further Reading:

Dervin, B. (1983). An overview of sense-making research: Concepts, methods and results to date. International Communication Association Meeting, 1–13.

Klein, G., & Moon, B. (2006). Making sense of sensemaking 1: Alternative perspectives. Intelligent Systems. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1667957

Klein, G., Moon, B., & Hoffman, R. R. (2006). Making sense of sensemaking 2: A macrocognitive model. IEEE Intelligent Systems, 21(4),

Kolko, J. (2010). Sensemaking and framing: A theoretical reflection on perspective in design synthesis. In Proceedings of the 2010 Design Research Society (DRS) International Conference: Design & Complexity. Montreal, QC.

Kraut, R., Fussell, S., Brennan, S., & Siegel, J. (2002). Understanding effects of proximity on collaboration: Implications for technologies to support remote collaborative work. In P. Hinds & S. Kiesler (Eds.), Distributed Work (pp. 137–162). Cambridge, MA: MIT Press.

Mills, J. H., Thurlow, A., & Mills, A. J. (2010). Making sense of sensemaking: The critical sensemaking approach. Qualitative Research in Organizations and Management: An International Journal, 5(2), 182–195.

Rowe, A., & Hogarth, A. (2005). Use of complex adaptive systems metaphor to achieve professional and organizational change. Journal of advanced nursing, 51(4), 396–405.

Norman, C. D., Huerta, T. R., Mortimer, S., Best, A., & Buchan, A. (2011). Evaluating discovery in complex systems. American Journal of Evaluation, 32(1), 70–84.

Weick, K. E. (1995). The Nature of Sensemaking. In Sensemaking in Organizations (pp. 1–62). Sage Publications.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the Process of Sensemaking. Organization Science, 16(4), 409–421.

Photo by the author of Happy Space: NORA, An interactive Study Model at the Museum of Finnish Architecture, Helsinki, FI.