Tag: Michael Quinn Patton

Tags: complexity, evaluation, social innovation

Developmental Evaluation’s Traps


Developmental evaluation holds promise for product and service designers looking to understand the process, outcomes, and strategies of innovation and link them to effects. The great promise of DE is also the reason to be most wary of it and to watch for the traps set for the unaware.

Developmental evaluation (DE), when used to support innovation, is about weaving design with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making process that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

This evidence might include the kind of information that helps an organization know when to scale up effort, change direction (“pivot”), or abandon a strategy altogether.

Powerful stuff.

Except, it can also be a trap.

It’s a Trap!

Star Wars fans will recognize the phrase “It’s a Trap!” as one of special — and much parodied — significance. Much like the Rebel fleet’s jeopardized quest to destroy the Death Star in Return of the Jedi, embarking on a DE is no easy or simple task.

DE was developed by Michael Quinn Patton and others working in the social innovation sector in response to the needs of programs operating in areas of high volatility, uncertainty, complexity, and ambiguity, to help them function better in such environments through evaluation. This meant providing useful data that recognized the context and supported strategic decision-making with rigorous evaluation, rather than using tools ill-suited to complexity and simply doing the ‘wrong thing righter‘.

The following are some of the ‘traps’ that I’ve seen organizations fall into when approaching DE. A parallel set of posts exploring the practicalities of these traps is going up on the Cense site, along with tips and tools to avoid and navigate them.

A trap is something that is usually camouflaged and employs some type of lure to draw people into it. It is, by its nature, deceptive and intended to ensnare those that come into it. By knowing what the traps are and what to look for, you might just avoid falling into them.

A different approach, same resourcing

A major trap in going into a DE is thinking that it is just another type of evaluation and thus requires the same resources as one might put toward a standard evaluation. Wrong.

DE most often requires more resources to design and manage than a standard program evaluation, for many reasons. One of the most important is that DE is about evaluation + strategy + design (the emphasis is on the ‘+’s). In a DE budget, one needs to account for the fact that three activities that were normally treated separately are now coming together. The costs may not necessarily be higher (though they often are), but the work required will span multiple budget lines.

This also means that operationally one cannot simply have an evaluator, a strategist, and a program designer work separately. There must be some collaboration and time spent interacting for DE to be useful. That requires coordination costs.

Another big issue is that DE data can be ‘fuzzy’ or ambiguous — even if collected with a strong design and method — because the innovation activity usually has to be contextualized. Further complicating things is that the DE datastream is bidirectional. DE data comes from the program products and process as well as the strategic decision-making and design choices. This mutually influencing process generates more data, but also requires sensemaking to sort through and understand what the data means in the context of its use.
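To make that bidirectional stream concrete, here is a minimal sketch (in Python; the field names and example entries are invented for illustration, not part of DE itself) of how observations from both the program side and the strategy side might be logged against shared strategic questions so they can be brought together at sensemaking sessions.

```python
# A minimal, hypothetical log for DE observations from both streams.
# Field names and example entries are invented for illustration, not part of DE.
from dataclasses import dataclass
from datetime import date

@dataclass
class DEObservation:
    when: date
    stream: str              # "program" (products/process) or "strategy" (decisions/design)
    note: str                # what was observed
    strategic_question: str  # the question this evidence speaks to

log = [
    DEObservation(date(2017, 3, 1), "program",
                  "Prototype onboarding cut drop-off by half in the pilot group",
                  "Is the new onboarding worth scaling?"),
    DEObservation(date(2017, 3, 8), "strategy",
                  "Team chose to delay launch and test a second user segment",
                  "Is the new onboarding worth scaling?"),
]

def sensemaking_agenda(observations):
    """Group both streams under each strategic question for a review session."""
    agenda = {}
    for obs in observations:
        agenda.setdefault(obs.strategic_question, []).append(obs)
    return agenda

for question, items in sensemaking_agenda(log).items():
    print(question)
    for obs in items:
        print(f"  [{obs.stream}] {obs.when}: {obs.note}")
```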

The biggest resource that gets missed? Time. Organizations rarely give enough time to the conversations needed to make sense of what the data means. Setting aside regular time, at intervals appropriate to the problem context, is a must, and too often organizations don’t budget it in.

The second? Focus. While a DE approach can capture an enormous wealth of data about the process, outcomes, strategic choices, and design innovations, there is a need to temper the amount collected. More is not always better. More can be a sign of a lack of focus and lead organizations to collect data for data’s sake, not for a strategic purpose. If you don’t have a strategic intent, more data isn’t going to help.

The pivot problem

The term pivot comes from the Lean Startup approach and is found in Agile and other product development systems that rely on short-burst, iterative cycles with accompanying feedback. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. Sounds good, right?

It is, except when the results aren’t well-grounded in data. DE has given organizations cover for making arbitrary decisions in the name of pivoting when they really haven’t executed well or given things enough time to determine whether a change of direction is warranted. I once heard an educator explain how good his team was at pivoting the strategy for training their clients and students. They were taking a developmental approach to the course (because it was on complexity and social innovation). Yet I knew that the team — a group of highly skilled educators — hadn’t spent nearly enough time coordinating and planning the course.

There is a difference between a presenter who adds something at the last minute to capitalize on what has emerged from the situation and improve the presentation, and one who hasn’t put the time and thought into what they are doing and is rushing at the last minute. One is a pivot in service of excellence; the other is poor execution. The trap is confusing the two.
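One way to see the difference is to make the decision rule explicit. The sketch below (Python; the metric, target, and minimum-cycle rule are all invented for illustration, not a prescribed method) shows a pivot/persevere check that refuses to render a verdict until enough iteration cycles have actually been executed — the kind of guard that separates a grounded pivot from an arbitrary one.

```python
# A hypothetical pivot/persevere check; the metric, target, and
# minimum-cycle rule are invented for illustration.

def pivot_decision(results, min_cycles=3, target=0.6):
    """results: one outcome score (0..1) per completed iteration cycle."""
    if len(results) < min_cycles:
        return "persevere: too few completed cycles to judge the strategy"
    recent = results[-min_cycles:]
    avg = sum(recent) / len(recent)
    if avg >= target:
        return f"persevere: recent average {avg:.2f} meets target {target}"
    return f"pivot: recent average {avg:.2f} falls short of target {target}"

print(pivot_decision([0.4]))                 # too early: a 'pivot' here would be arbitrary
print(pivot_decision([0.4, 0.5, 0.7, 0.8]))  # grounded persevere
print(pivot_decision([0.3, 0.2, 0.4, 0.3]))  # grounded pivot
```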

Fearing success

“If you can’t get over your fear of the stuff that’s working, then I think you need to give up and do something else” – Seth Godin

A truly successful innovation changes things — mindsets, workflows, systems, and outcomes. Innovation affects the things it touches in ways that might not be foreseen. It also means recognizing that things will have to change in order to accommodate the success of whatever innovation you develop. But change can be hard to adjust to even when it is what you wanted.

It’s a strange truth that many non-profits are designed to put themselves out of business. If there were no more political injustices or human rights violations around the world there would be no Amnesty International. The World Wildlife Fund or Greenpeace wouldn’t exist if the natural world were deemed safe and protected. Conversely, there are no longer prominent NGOs devoted to eradicating polio because we’ve pretty much done it… or have we?

Self-sabotage exists for many reasons, including discomfort with change (staying the same is easier than changing), preservation of status, and a variety of interpersonal, relational reasons, as psychologist Ellen Hendriksen explains.

Seth Godin suggests you need to find something else if you’re afraid of success, and that might work. I’d prefer that organizations do a kind of innovation therapy with themselves, engage in organizational mindfulness, and do the emotional, strategic, and reflective work to ensure they are prepared for success — as well as failure, which is a big part of the innovation journey.

DE is a strong tool for capturing success (in whatever form that takes) within the complexity of a situation; the trap is when the focus is on too many parts, or on ones that aren’t providing useful information. It’s not always possible to know this at the start, but there are things that can be done to hone the focus over time. As the saying goes: when everything is in focus, nothing is in focus.

Keeping the parking brake on

And you may win this war that’s coming
But would you tolerate the peace? – “This War” by Sting

You can’t drive far or well with your parking brake on. And if innovation is meant to change systems, you can’t keep the same thinking and structures in place and still expect to move forward. Developmental evaluation is not just for understanding your product or service; it’s also meant to inform the ways the entire process influences your organization. They are symbiotic: one affects the other.

Just as we might fear success, we may also fail to prepare for (or tolerate) it when it comes. Success with one goal means having to set new goals. It moves the goal posts. It also means that one needs to reframe what success means going forward. Sports teams face this problem in reframing their mission after winning a championship. The same is true for organizations.

This is why building a culture of innovation — with DE embedded within that culture — is so important. Innovation can’t be considered a ‘one-off’; rather, it needs to be part of the fabric of the organization. If you set yourself up for change, real change, as a developmental organization, you’re more likely to be ready for the peace after the war is over, as the lyric above asks.

Sealing the trap door

Learning — which is at the heart of DE — fails in bad systems. Preventing the traps discussed above requires building a developmental mindset within an organization along with doing a DE. Without that mindset, it’s unlikely anyone will avoid falling into the traps described above. Change your mind, and you can change the world.

It’s a reminder of the need to put in the work to make change real, and that DE is not plug-and-play. To quote Martin Luther King Jr.:

“Change does not roll in on the wheels of inevitability, but comes through continuous struggle. And so we must straighten our backs and work for our freedom. A man can’t ride you unless your back is bent.”

 

For more on how Developmental Evaluation can help you to innovate, contact Cense Ltd and let them show you what’s possible.  

Image credit: Author

Tags: environment, systems science, systems thinking

Systems thinking and the simple plan


Building Castles in the Sky, But Not Wheels on the Ground

 

Planning is something that is done all the time, but the shape in which these plans unfold is often complex in hidden ways. Without the resources to evaluate those plans (and make different ones should conditions change), many organizations are left with great expectations that don’t match the reality of what they do (and can do).

In my neighbourhood in Toronto there are no fewer than 10 building projects underway within a 5-block radius of my home, each involving the development of a high-rise apartment, university residence, or condominium of more than 20 stories. Most are expected to be about 40 stories in height.

As a resident and citizen I was thinking one day: How does one even engage with this? I could attend a building planning meeting, but that would be looking at a single development on a single site, not a neighbourhood. There is a patchwork of plans for neighbourhoods, but they are guidelines, not embedded in specific codes. I was (and am) stuck with how to have a conversation of influence that might help shape decisions about how this was all going to unfold.

At the risk of being pegged as a NIMBY, let me state that I fully accept that downtown living in a fast-growing, large urban centre means that empty lots and parking pads are targets for development and that buildings will go up. I get to live here and so should others, so I can’t complain about a development here and there. But development of that magnitude, that quickly, becomes problematic for things like sidewalks, transit, parking, and traffic — and even for getting a seat at my favourite cafe — all of which are going to change in a matter of months, not years. There’s no evolution here, just revolution.

Adding a few hundred people to the neighbourhood in a year is one thing. Adding many thousands in that same time is something quite different. The problem is that city planning is done on a block-by-block basis when we live in an interconnected space. An example of this is transit. Anyone who takes a bus, streetcar or subway knows that the likelihood of getting a seat depends greatly on when you travel and where you get on. Your experience will radically change depending on whether you’re at the beginning of the line or near the end of it. Residents of one neighbourhood in Toronto were so tired of never being able to get on packed streetcars — because they were in the middle of the line — that they crowdfunded a private bus service, which was ultimately shut down a few months later.

Planning for scale: bounding systems using foresight

On a piece-by-piece basis, planning impact is easier to assess. Buildings go through proposals for their lots — a boundary — and have to meet specific codes, which act as constraints on a system. Yet next to these boundaries are the boundaries of other systems: other lots and developments. They, too, are given the same treatment, and usually that produces a plan perfectly suited to the individual development but something that might falter when matched with what’s next to it. Building plans are approved and weighed largely on their own merits, independent of the context, and certainly not as a collective set of proposals. Why? Because there are different stakeholders with separate needs, timelines, investments and desires.

One of the keys is to have a vision for what the city will look like as a system. Does your city have one? I’m not talking about something esoteric like “Be the greatest city in the world”, but generating some evidence-supported vision of what the city will look like in 5, 10, or 25 years. This requires foresight: a structured, methodical means of drawing evidence-informed speculations about the future that combines design, data, and some imagination. In fact, my colleague Peg Lahn and I did this for the city of Toronto, envisioning what the future ‘neighbourscapes’ of the city might look like using foresight methods. We forecast out to 2030, drawing on trends and drivers of social activity and looking at current patterns of migration, development, policy and political activity.

That report focused on the city itself and its neighbourhoods in general, but didn’t look at specific neighbourhoods. Yet strategic foresight can help create a bounded set of conditions where one can start to imagine the potential impact of decisions in advance and develop scenarios to amplify or mitigate certain challenges or uncertainties. Foresight allows for better assessment of the landscape of knowns and unknowns within a complex system.

From cities to organizations

The same principles of civic planning through foresight can be applied to organizations. If you are assessing operations and plans for programs independently of one another rather than as a whole, yet operating the organization as a system with all its interdependencies, then without strategic foresight your plans may be just arbitrary statements of intent. Consider the “5-year plan“. Why is it five years? What is special about 5 years that makes us do that? How about four years? Ten? 18?

As former US President and general Dwight D. Eisenhower once said:

Plans are worthless, but planning is everything.

The planning process, no matter the time scale, works best when it allows for engagement with ideas about what the future might look like, how to create it, and how to tell when you’ve been successful. This is part of what developmental evaluation does when blended with strategic foresight and design. It creates conversations about what future we want, what we see coming, and how we might get to shape it. The plan itself is secondary; the planning — informed by data and design — is the most powerful part of the process.

To draw on another US President, Abraham Lincoln:

The best way to predict your future is to create it.

By focusing on the here and now, independent of what is to come and might be, organizations risk designing perfectly suited programs, policies and strategies that are ideal for the current context, but jeopardize the larger system that is the organization itself.

Do you have a plan? Do you know where you’re going? Can you envision where things are going to be? How will you know when you get there or when to change course?

Resources

For resources on these topics, check out the Censemaking library tab on this blog, which has many references to tools and products that can help advance your thinking on strategic foresight, evaluation, design and systems thinking. For those interested in how developmental evaluation can contribute to program development, check out Michael Quinn Patton’s latest book (with Kate McKegg and Nan Wehipeihana) on Developmental Evaluation Exemplars.

Lastly, if you need strategic help in this work, contact Cense Research + Design as this is what they (we) do.


Tags: innovation, social innovation

The Finger Pointing to the Moon


SuperLuna

In social innovation we are at risk of mistaking our stories of success for real, genuine impact. Without theories, implementation science or evaluation, we risk aspiring to travel to the moon yet leaving our rockets stuck on the launchpad.

There is a Buddhist expression that goes like this:

Be careful not to confuse the finger pointing to the moon for the moon itself. *

It’s a wonderful phrase that is playful and yet rich in many meanings. Among the most poignant of these meanings is related to the confusion between representation and reality, something we are starting to see exemplified in the world of social innovation and its related fields like design and systems thinking.

On July 13, 2014 the earth experienced a “supermoon” (captured in the above photograph), so named because of its close passage to earth. While it may have seemed almost close enough to touch, it was still at a distance unfathomable to all but a handful of people on this planet. There were a lot of fingers pointed to the moon that night.

While the moon has held fascination for humans for millennia, it’s also worth drawing our attention to the pointing fingers, too.

Pointing fingers

How often do you hear “we are doing amazing stuff“ when leaders describe their social innovations in the community, universities, government, business, or partnerships between them? Thankfully, it’s probably more than ever, because the world needs good, quality innovative thinking and action. Indeed, judging from the rhetoric at conferences and events, and from the academic literature and popular press, it seems we are becoming more innovative all the time.

We are changing the world.

…Except, that is a largely useless statement on its own, even if well meaning.

Without documentation of what this “amazing stuff” looks like, a theory or logic explaining how those activities are connected to an outcome, and an observed link between it all (i.e., evaluation), there really is no evidence that the world is changed — or at least changed in a manner better than had we done something else, or nothing at all. That is the tricky part about working with complex systems, particularly large ones. How the World Is Changed is the subtitle of the book by Brenda Zimmerman, Frances Westley and Michael Quinn Patton on complexity and evaluation in social change, Getting to Maybe. It is because change requires theory, strategic implementation and evaluation that these three leaders came together to discuss what can be called social innovation. They introduce theory, strategy and evaluation ideas in the book, and — while it has remained a popular text — I rarely see those ideas referred to in serious conversations about social innovation.

Unfortunately, concrete discussion of these three areas — theory, strategic implementation, and evaluation — is largely absent from the dialogue on social innovation. Nowhere was this more evident than in the social innovation week events held across Canada in May and June of this year, a series of gatherings of practitioners, researchers and policy makers from many different sectors and disciplines. The events brought together some of the leading thinkers, funders, institutes and social labs from around the world and were as close to the “social innovation olympics” as one could get. The stories told were inspirational, the diversity in the programming was wide, and the ideas shared were creative and interesting.

And yet, many of those I spoke to (myself included) were left with the question: What do I do with any of this? Without something specific to anchor to, that question remained unanswered.

Lots of love, not enough (research) power

As often happens, these gatherings serve more as a rallying cry for those working in a sector — something quite important on its own as a critical support mechanism — than as a way of challenging ourselves. As Geoff Mulgan from Nesta noted in the closing keynote to the Social Frontiers event in Vancouver (riffing off Adam Kahane’s notion of power and love as vehicles for social transformation), the week featured a lot of love and not so much expression of power (as in critique).

Reflecting on the social innovation events I’ve attended, the books and articles I’ve read, and the conversations I’ve had in the first six months of 2014, it seems evident that the love is being felt by many, but that the field is woefully under-powered (pun intended). The social innovation week events just clustered a lot of this conversation into one week, but it’s a sign of a larger trend that emphasizes storytelling independent of the kind of detail one might find at an academic event. Stories can inspire (love), but they rarely guide (power). Adam Kahane is right: we need both to be successful.

The good news is that we are doing love very well and that’s a great start. However, we need to start thinking about the power part of that equation.

There is a dearth of quality research in the field of social innovation and relatively little in the way of concrete theory or documented practice to guide anyone new to this area of work. Yes, there are many stories, but these offer little beyond inspiration to follow. It’s time to add some guidance and a space for critique to the larger narrative in which these stories are told.

Repeating patterns

What often comes from the Q & A sessions following a presentation of a social innovation initiative are the same answers as ‘lessons learned’:

  • Partnerships and trust are key
  • This is very hard work and it’s all very complex
  • Relationships are important
  • Get buy-in from stakeholders and bring people together to discuss the issues
  • It always takes longer than you think to do things
  • It’s hard to get and maintain resources

I can’t think of a single presentation over the past six months where these weren’t presented as ‘take-home messages’.

Yet none of these answers explains what was done in tangible terms, how well it was done, what alternatives exist (if any), what the rationale for the program was and what research/evidence/theory underpins that logic, what unintended consequences have emerged from these initiatives, or what evaluated outcomes they had beyond the number of participants/events/dollars moved.

We cannot move forward beyond love if we don’t find some way to power-up our work.

Theories of change: The fingers and the moons

Perhaps the best place to start to remedy this problem of detail is developing a theory of change for social innovation**.

Indeed, the emergence of discourse on theory of change in the worlds of social enterprise, innovation and services in recent years has been refreshing. A theory of change is pretty much what it sounds like: a set of interconnected propositions that link ideas to outcomes and the processes that exist between them. A theory of change answers the question: why should this idea/program/policy produce (specific) changes?
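For those who like to see the idea in concrete form, a theory of change can be thought of as a small directed graph of propositions. The Python sketch below is a minimal illustration — the program, its propositions, and the evidence set are all invented for the example — showing how explicit cause-effect links make it easy to spot which propositions still lack evidence and are candidates for evaluation.

```python
# A minimal sketch of a theory of change as linked propositions.
# The program, propositions, and evidence set are invented examples.
from dataclasses import dataclass

@dataclass
class Proposition:
    cause: str
    effect: str
    rationale: str  # why should the cause produce the effect?

theory_of_change = [
    Proposition("peer-support training", "increased worker confidence",
                "skill mastery builds self-efficacy"),
    Proposition("increased worker confidence", "more client outreach",
                "confident workers initiate more contacts"),
    Proposition("more client outreach", "improved client wellbeing",
                "access to support is linked to better outcomes"),
]

# Evidence collected so far, as (cause, effect) links we can support
evidence = {("peer-support training", "increased worker confidence")}

def unsupported_links(theory, evidence):
    """Propositions with no evidence behind them: candidates for evaluation."""
    return [p for p in theory if (p.cause, p.effect) not in evidence]

for p in unsupported_links(theory_of_change, evidence):
    print(f"Needs evidence: {p.cause} -> {p.effect} ({p.rationale})")
```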

The strength of the theory of change movement (as one might call it) is that it is inspiring social innovators to think critically about the logic in their programs at a human scale. More flexible than a program logic model and more detailed than a simple hypothesis, a theory of change can guide strategy and evaluation simultaneously and works well with other social innovation-friendly concepts like developmental evaluation and design.

The weakness in the movement is that many theories of change fail to consider what has already been developed. There is an enormous amount of conceptual and empirical work on behaviour change theories at the individual, organizational, community and systems levels that can inform a theory of change. Disciplines such as psychology, sociology, political theory, geography and planning, business and organizational behaviour, evolutionary biology and others all have well-researched and developed theories to explain changes in activity. Too often, I see theories developed without knowledge or consideration of such established theories. This is not to say that one must rely on past work (particularly in the innovation space, where examples might be few in number), but if a theory is solid and has evidence behind it then it is worth considering. Not all theories are created equal.

It is time for social innovation to start raising the bar for itself and the world it seeks to change. It is time to start advancing theories, strategic implementation and evaluation practice and research so that the social innovation events of the future foster real power for change and not just inspiration and love.

 

* one of the more cited translated versions of this phrase has been attributed to Thich Nhat Hanh who suggests the Buddha remarked: “just as a finger pointing at the moon is not the moon itself. A thinking person makes use of the finger to see the moon. A person who only looks at the finger and mistakes it for the moon will never see the real moon.”

** This actually means many theories of change. A theory of change is program-specific; it might be nearly identical to another program’s and built upon the same foundations as others, but just as a program logic model is unique to each program, so too is a theory of change.

Photo credit: SuperLuna with different filters by Paolo Francolini used under Creative Commons License via Flickr

Tags: complexity, emergence, evaluation, innovation

Do you value (social) innovation?


Do You Value the Box or What’s In It?

The term evaluation has at its root the term value, and to evaluate innovation means to assess the value that it brings in its product or process of development. It’s remarkable how much discourse on the topic of innovation is devoid of any discussion of evaluation, which raises the question: do we value innovation in the first place?

The question posed above is not a cheeky one. The question about whether or not we value innovation gets at the heart of our insatiable quest for all things innovative.

Historical trends

A look at Google N-gram data for book citations provides a historical picture of how commonly a particular word shows up in books published since 1880. Running the terms innovation, social innovation and evaluation through the N-gram software finds some curious trends. A look at the graphs below finds that the term innovation spiked after the Second World War. A closer look reveals a second major spike from the mid-1990s onward, which is likely due to the rise of the Internet.
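(For the programmatically inclined, the same comparison can be pulled without the web interface. The sketch below queries the unofficial JSON endpoint behind the Ngram Viewer — not a documented API, so the URL, parameters, and response format are assumptions that may change.)

```python
# A sketch of querying the unofficial JSON endpoint behind the Ngram Viewer.
# This is not a documented API; the URL, parameters, and response format
# are assumptions based on the viewer's behaviour and may change.
import requests

def ngram_frequencies(terms, year_start=1880, year_end=2008,
                      corpus="en-2019", smoothing=3):
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": ",".join(terms),
            "year_start": year_start,
            "year_end": year_end,
            "corpus": corpus,
            "smoothing": smoothing,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Each entry carries an 'ngram' label and a 'timeseries' of yearly frequencies
    return {d["ngram"]: d["timeseries"] for d in resp.json()}

data = ngram_frequencies(["innovation", "social innovation", "evaluation"])
for term, series in data.items():
    peak_year = 1880 + series.index(max(series))
    print(f"{term}: peak relative frequency around {peak_year}")
```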

In both cases, technology played a big role in shaping the interest in innovation and its discussion. The rise of the Cold War in the 1950s and the Internet both presented new problems and the need for those problems to be addressed.

[Google N-gram charts for ‘innovation’, ‘social innovation’ and ‘evaluation’]

Below that is social innovation, a newer concept (although not as new as many think), which showed a peak in citations in the 1960s and 70s, corresponding with the U.S. civil rights movement, the expansion of social service fields like social work and community mental health, anti-nuclear organizing, and the environmental movement. This rise over two decades was followed by a sharp decline until the early 2000s, when things began to increase again.

Evaluation, however, saw the most sustained increase over the 20th century of the three terms, yet has been in decline since 1982. Most notable is the even sharper decline as both innovation and social innovation spiked.

Keeping in mind that this is not causal or even linked data, it is still worth asking: What’s going on? 

The value of evaluation

Let’s look at what the heart of evaluation is all about: value. The Oxford English Dictionary defines value as:

value |ˈvalyo͞o|

noun

1 the regard that something is held to deserve; the importance, worth, or usefulness of something: your support is of great value.

• the material or monetary worth of something: prints seldom rise in value | equipment is included up to a total value of $500.

• the worth of something compared to the price paid or asked for it: at $12.50 the book is a good value.

2 (values) a person’s principles or standards of behavior; one’s judgment of what is important in life: they internalize their parents’ rules and values.

verb (values, valuing, valued) [ with obj. ]

1 estimate the monetary worth of (something): his estate was valued at $45,000.

2 consider (someone or something) to be important or beneficial; have a high opinion of: she had come to value her privacy and independence.

Innovation is a buzzword. It is hard to find many organizations who do not see themselves as innovative or use the term to describe themselves in some part of their mission, vision or strategic planning documents. A search on bookseller Amazon.com finds more than 63,000 titles organized under “innovation”.

So it seems we like to talk about innovation a great deal; we just don’t like to talk about what it actually does for us (at least in the same measure). Perhaps if we did, we might have to confront what designer Charles Eames said:

Innovate as a last resort. More horrors are done in the name of innovation than any other.

At the same time I would like to draw inspiration from another of Eames’ quotes:

Most people aren’t trained to want to face the process of re-understanding a subject they already know. One must obtain not just literacy, but deep involvement and re-understanding.

Valuing innovation

Innovation is easier to say than to do and, as Eames suggested, is a last resort when the conventional doesn’t work. For those working in social innovation the “conventional” might not even exist as it deals with the new, the unexpected, the emergent and the complex. It is perhaps not surprising that the book Getting to Maybe: How the World is Changed is co-authored by an evaluator: Michael Quinn Patton.

While Patton has been prolific in advancing the concept of developmental evaluation, the term hasn’t caught on in widespread practice. A look through the social innovation literature finds little mention of developmental evaluation, or even evaluation at all, lending support to the extrapolation made above. In my recent post on Zaid Hassan’s book on social laboratories, one of my critiques was that there was much discussion of how these social labs “work” with relatively little mention of the evidence to support and clarify that statement.

One hypothesis is that evaluation can be seen as a ‘buzzkill’ to the buzzword. It’s much easier, and certainly more fun, to claim you’re changing the world than to interrogate one’s activities and find that the change isn’t as big or profound as expected. Documenting change isn’t perceived to be as fun as making change, although I would argue that one is fuel for the other.

Another hypothesis is that there is much misunderstanding about what evaluation is, with (anecdotally) many social innovators thinking that it’s all about numbers and math and that it misses the essence of the human connections that social innovation is all about.

A third hypothesis is that the evaluative thinking embedded in our discourse on change, innovation, and social movements isn’t aligned with the nature of systems, and thus people are stuck with models of evaluation that simply don’t fit the context of what they’re doing and therefore add little of the value that evaluation is meant to reveal.

If we value something, we need to articulate what that means if we want others to follow and value the same thing. That means going beyond lofty motherhood statements that feel good — community building, relationships, social impact, “making a difference” — and articulating what they really mean. In doing so, we are better positioned to do more of what works well, change what doesn’t, and create the culture of inquiry and curiosity that links our aspirations to our outcomes.

It means valuing what we say we value.

(As a small plug: want to learn more about this? The Evaluation for Social Innovation workshop takes this idea further and gives you ways to value, evaluate and communicate value. March 20, 2014 in Toronto).

 

 

Tags: behaviour change, complexity, emergence, evaluation, knowledge translation

Bringing Design into Developmental Evaluation


Designing Evaluation

Developmental evaluation is an approach to understanding and shaping programs in service of those who wish to grow and evolve what they do in congruence with complexity rather than ignoring it. This requires not only feedback (evaluation), but skills in using that feedback to shape the program (design), for without both we may end up doing neither.

A program operating in an innovation space — one that requires adaptation, foresight and feedback to make adjustments on the fly — is one that needs developmental design. Developmental design is part of an innovator’s mindset that combines developmental evaluation with design theory, methods and practice. Indeed, I would argue that exceptional developmental evaluations are, by definition, examples of developmental design.

Connecting design with developmental evaluation

The idea of developmental design emerged from work I’ve done exploring developmental evaluation in practice in health and social innovation. For years I led a social innovation research unit at the University of Toronto that integrated developmental evaluation with social innovation for health promotion and constantly wrestled with ways to use evidence to inform action. Traditional evidence models are based on positivist social and basic science that aim to hold constant as many variables as possible while manipulating others to enable researchers or evaluators to make cause-and-effect connections. This is a reasonable model when operating in simple systems with few interacting components. However, health promotion and social systems are rarely simple. Indeed, not only are they not simple, they are most often complex (many interactions happening at multiple levels on different timescales simultaneously). Thus, models of evaluation are required that account for complexity.

Doing so requires attention to the larger macro-level patterns of activity within a program to assess system-level changes, and a focus on the small, emergent properties that are generated from contextual interactions. Developmental evaluation was first proposed by Michael Quinn Patton, who brought together complexity theory with utilization-focused evaluation (PDF) and helped program planners and operators develop their programs with complexity in mind while supporting innovation. Developmental evaluation provided a means of linking innovation to process and outcomes in a systematic way without creating the rigid, inflexible boundaries that are generally incompatible with complex systems.

Developmental evaluation is challenging enough on its own because it requires an appreciation of complexity and flexibility in understanding evaluation, as well as a strong sense of multiple methods of evaluation to accommodate the diversity of inputs and processes that complex systems introduce. However, a further complication is the need to understand how to take that information and apply it meaningfully to the development of the program. This is where design comes in.

Design for better implementation

Design is a field that emerged in the 18th century when mass production first became possible and the creative act was no longer confined to making unique objects, but expanded to creating mass-market ones. Ideas were among the things that were mass-produced, as the printing press, telegraph and radio, combined with the means of creating and distributing these technologies, made intellectual products easier to produce as well. Design is what OCADU’s Greg Van Alstyne and Bob Logan refer to as “creation for reproduction” (PDF).

Developmental design links this intention of creation for reproduction, and the design for emergence that Van Alstyne and Logan describe, with the foundations of developmental evaluation. It brings together the feedback mechanisms of evaluation and the solution generation that comes from design.

The field of implementation science emerged from within the health and medical science community after a realization that simple idea sharing and knowledge generation were insufficient to produce change without understanding how such ideas and knowledge were implemented. It came from an acknowledgement that there is a science (or an art) to implementing programs, and that by learning how these programs are run and assessed we can do a better job of translating and mobilizing knowledge. Design is the membrane of sorts that holds all of this together and guides the use of knowledge in the construction and reconstruction of programs. It is the means of holding evaluation data and shaping the program development and implementation questions at the outset.

Without an understanding of how ideas are made manifest in a program, we risk creating more knowledge and less wisdom, more data and less impact. Just as we made the incorrect assumption that having knowledge was the same as knowing what to do with it or how to share it (which is why fields like knowledge translation and mobilization were born), so too have we assumed that program professionals know how to design their programs developmentally. Creating a program from a blank slate is one thing; doing a live transformation and re-development is something else.

Developmental design is akin to building a plane while flying it. There are construction skills unique to this situation that are different from, but build on, many conventional theories and methods of program planning and evaluation and, like developmental evaluation, extend beyond them to create a novel approach for a particular class of conditions. In future posts I’ll outline some of the concepts of design that are relevant to this enterprise; in the meantime, I encourage you to visit the Censemaking Library section on design thinking for some initial resources.

The question remains: are we building dry docks for ships at sea, or platforms for constructing flexible aerial craft that can navigate the changing headwinds and currents?

 

Image used under license.

Tags: complexity, design thinking, emergence, evaluation, innovation

Evaluation and Design For Changing Conditions

Growth and Development

The days of creating programs, products and services and setting them loose on the world are coming to a close, posing challenges to the models we use for design and evaluation. Adding the term ‘developmental’ to both of these concepts, with an accompanying shift in mindset, can provide options for moving forward in these times of great complexity.

We’re at the tail end of a revolution in product and service design that has generated some remarkable benefits for society (and its share of problems), creating the very objects that often define our work (e.g., computers). However, we are in an age of interconnectedness and ever-expanding complexity. Our disciplinary structures are modifying themselves, and “wicked problems” are less rare.

Developmental Thinking

At the root of the problem is the concept of developmental thought. A critical mistake in comparative analysis — whether through data or rhetoric — is viewing static things and moving things through the same lens. Take, for example, a tree and a table. Both are made of wood (maybe the same type of wood), yet their developmental trajectories are enormously different.

Wood > Tree

Wood > Table

Tables are relatively static. They may get scratched, painted, re-finished, or modified slightly, but their inherent form, structure and content is likely to remain constant over time. The tree is also made of wood, but will grow larger, may lose branches and gain others; it will interact with the environment providing homes for animals, hiding spaces or swings for small children; bear fruit (or pollen); change leaves; grow around things, yet also maintain some structural integrity that would allow a person to come back after 10 years and recognize that the tree looks similar.

It changes and it interacts with its environment. If it is a banyan tree or an oak, this interaction might take place very slowly, however if it is bamboo that same interaction might take place over a shorter time frame.

If you were to take the antique table shown above, take its measurements, record its qualities, and come back 20 years later, you would likely see an object that looks remarkably similar to the one you left. The time of the initial observation was minimally relevant to when the second observation was made. The manner in which the table was used will have some effect on these observations, but only as a matter of degree; the fundamental look and structure are likely to remain consistent.

However, if we were to do the same with the tree, things could look wildly different. If the tree was a sapling, coming back 20 years later might find an object 2, 3, or 4 times larger in size. If the tree was 120 years old, the differences might be minimal. Its species, growing conditions and context matter a great deal.

Design for Development / Developmental Design

In social systems, and particularly ones operating with great complexity, models of creating programs, policies and products that are simply released into the world like a table are becoming anachronistic. Tables work for simple tasks and sometimes complicated ones, but not for complex ones (at least, not consistently). It is in those areas that we need to consider the tree as a more appropriate model. However, in human systems these “trees” are designed — we create the social world, the policies, the programs and the products — thus design thinking is relevant and appropriate for those seeking to influence our world.

Yet, we need to go even further. Designing tables means creating a product and setting it loose. Designing for trees means constantly adapting and changing along the way. It is what I call developmental design. Tim Brown, the CEO of IDEO and one of the leading proponents of design thinking, has started to consider the role of design and complexity as well. Writing in the current issue of Rotman Magazine, Brown argues that designers should consider adapting their practice towards complexity. He poses six challenges:

  1. We should give up on the idea of designing objects and think instead about designing behaviours;
  2. We need to think more about how information flows;
  3. We must recognize that faster evolution is based on faster iteration;
  4. We must embrace selective emergence;
  5. We need to focus on fitness;
  6. We must accept the fact that design is never done.
That last point is what I argue is the critical feature of developmental design. To draw on another analogy, it is about tending gardens rather than building tables.

Developmental Evaluation

Brown also mentions information flows and emergence. Complex adaptive systems are the way they are because of the diversity and interaction of information. They are dynamic and evolving and thrive on feedback. Feedback can be random or structured, and it is the opportunity and challenge of evaluators to provide the means of collecting and organizing this feedback, channeling it to support strategic learning about the benefits, challenges, and unexpected consequences of our designs. Developmental evaluation is a method by which we do this.

Developmental evaluators work with their program teams to advise, co-create, and sense-make around the data generated from program activities. Ideally, a developmental evaluator is engaged with program implementation teams throughout the process. This is a different form of evaluation that builds on Michael Quinn Patton’s Utilization-Focused Evaluation (PDF) methods and can incorporate much of the work of action research and participatory evaluation and research models, depending on the circumstance.

Bringing Design and Evaluation Together

To design developmentally and with complexity in mind, we need feedback systems in place. This is where developmental design and evaluation come together. If you are working in social innovation, attention to changing conditions, adaptation, building resilience and (most likely) the need to show impact is familiar to you. Developmental design + developmental evaluation, which I argue are two sides of the same coin, are ways to conceive of the creation, implementation, evaluation, adaptation and evolution of initiatives working in complex environments.

This is not without challenge. Designers are not trained much in evaluation. Few evaluators have experience in design. Both areas are familiarizing themselves with complexity, but the level and depth of the knowledge base is still shallow (though growing). Efforts like those put forth by the Social Innovation Generation initiative and the Tamarack Institute for Community Engagement in Canada are good examples of places to start. Books like Getting to Maybe, M.Q. Patton’s Developmental Evaluation, and Tim Brown’s Change by Design are also primers for moving along.

However, these are starting points, and if we are serious about addressing the social, political, health and environmental challenges posed to us in this age of global complexity, we need to launch from them into something more sophisticated that brings these areas further together. The cross-training of designers, evaluators and innovators of all stripes is a next step. So, too, is building the scholarship and research base for this emergent field of inquiry and practice. Better theories, evidence and examples will make it easier for all of us to lift the many boats needed to traverse these seas.

It is my hope to contribute to some of that further movement, and I welcome your thoughts on ways to build developmental thinking in social innovation and social and health service work.

Image (Header) Growth by Rougeux

Image (Tree) Arbre en fleur by zigazou76

Image (Table) Table à ouvrage art nouveau (Musée des Beaux-Arts de Lyon) by dalbera

All used under licence.

Tags: evaluation, knowledge translation

A Call to Evaluation Bloggers: Building A Better KT System

Time To Get Online...

Are you an evaluator and do you blog? If so, the American Evaluation Association wants to hear from you. This CENSEMaking post features an appeal to those who evaluate, blog and want to share their tips and tricks for helping create a better, stronger KT system. 

Build a better mousetrap and the world will beat a path to your door — attributed to Ralph Waldo Emerson

Knowledge translation in 2011 is a lot different than it was before we had social media, the Internet and direct-to-consumer publishing tools. We now have the opportunity to communicate directly to an audience and share our insights in ways that go beyond just technical reports and peer-reviewed publications, but closer to sharing our tacit knowledge. Blogs have become a powerful medium for doing this.

I’ve been blogging for a couple of years and quite enjoy it. As an evaluator, designer, researcher and health promoter I find it allows me to take different ideas and explore them in ways that more established media do not. I don’t need to have the idea perfect, or fully formed, or relevant to a narrow audience. I don’t need to worry about what my peers think or my editor, because I serve as the peer review, editor and publisher all at the same time.

I originally started blogging to share ideas with students and colleagues — just small things about the strange blend of topics I engage in that many don’t know about or understand or wanted to know more of. Concepts like complexity, design thinking, developmental evaluation, and health promotion can get kind of fuzzy or opaque for those outside of those various fields.

Blogs enable us to reach an audience directly and provide a means of adaptive feedback on novel ideas. Using the comments, visit statistics, and direct messages sent to me by readers, I can gain some sense of which ideas are being taken up and which ones resonate. That enables me to tailor my messages and amplify the parts that are of greater utility to a reader, thus increasing the likelihood that a message will be taken up. For CENSEMaking, the purpose is more self-motivated writing rather than trying to assess the “best” messages for the audience; however, I have a series of other blogs that I use as KT tools for projects. These are, in many cases, secured and by invitation only for the project team and stakeholders, but they still look and feel like any normal blog.

WordPress (this site) and Posterous are my favorite blogging platforms.

As a KT tool, blogs are becoming more widely used. Sites like Research Blogging are large aggregations of blogs on research topics. Others, like this one, are designed for certain audiences and topics — even KT itself, like the KTExchange from the Research Into Action initiative at the University of Texas and MobilizeThis! from the Research Impact Knowledge Mobilization group at York University.

The American Evaluation Association has an interesting blog initiative led by AEA’s Executive Director Susan Kistler called AEA365, which is a tip-a-day blog for evaluators looking to learn more about who and what is happening in their field. A couple of years ago I contributed a post on using information technology and evaluation and was delighted at the response it received. So it reaches people. It’s for this reason that AEA is calling out to evaluation bloggers to contribute to the AEA365 blog with recommendations and examples for how blogging can be used for communications and KT. AEA365 aims to create small-bite pockets of information that are easily digestible by its audience.

If you are interested in contributing, the template for the blog is below, with my upcoming contribution to the AEA365 blog posted below that.

By embracing social media and the power to share ideas directly (and done so responsibly), we have a chance to come closer to realizing the KT dream of putting more effective, useful knowledge into the hands of those that can use it faster and engage those who are most interested and able to use that information more efficiently and humanely.

Interested in submitting a post to the AEA365 blog? Contact the AEA365 curators at aea365@eval.org.

Template for aea365 Blogger Posts (see below for an example)

[Introduce yourself by name, where you work, and the name of your blog]

Rad Resource – [your blog name here]: [describe your blog, explain its focus including the extent to which it is related to evaluation, and tell about how often new content is posted]

Hot Tips – favorite posts: [identify 3-5 posts that you believe highlight your blogging, giving a direct link and a bit of detail for each (see example)]

  • [post 1]
  • [post 2]
  • Etc.

Lessons Learned – why I blog: [explain why you blog – what you find useful about it and the purpose for your blog and blogging. In particular, are you trying to inform stakeholders or clients? Get new clients? Provide a public service? Help students?]

Lessons Learned: [share at least one thing you have learned about blogging since you started]

Remember – stay under 450 words total please!

My potential contribution (with a title I just made up): Cameron Norman on Making Sense of Complexity, Design, Systems and Evaluation: CENSEMaking

Rad Resource – [CENSEMaking]: CENSEMaking is a play on the name of my research and design studio consultancy and on the concept of sensemaking, something evaluators help with all the time. CENSEMaking focuses on the interplay of systems and design thinking, health promotion and evaluation, and weaves together ideas I find in current social issues, reflections on my practice, and the evidence used to inform it. I aspire to post on CENSEMaking 2-3 times per week, although because it is done in a short-essay format, finding the time can be a challenge.

Hot Tips – favorite posts:

  • What is Developmental Evaluation? This post came from a meeting of a working group with Michael Quinn Patton and was fun to write because the original exercise that led to the content (described in the post) was so fun to do. It also provided an answer to a question I get asked all the time.
  • Visualizing Evaluation and Feedback. I believe that the better we can visualize complexity and the more feedback we provide, the greater the opportunities we have for engaging others, and the more evaluations will be utilized. This post was designed to provoke thinking about visualization and illustrate how it’s been creatively used to present complex data in interesting and accessible ways. My colleague and CENSE partner Andrea Yip has tried to do this with a visually oriented blog on health-promoting design, which provides some other creative examples of ways to make ideas more appealing and data feel simpler.
  • Developmental Design and Human Services. Creating this post has sparked an entire line of inquiry for me on bridging DE and design that has since become a major focus for my work. This post became the first step in a larger journey.

Lessons Learned – why I blog: CENSEMaking originally served as an informal means of sharing my practice reflections with students and colleagues, but has since grown to serve as a tool for knowledge translation to a broader professional and lay audience. I aim to bridge the sometimes foggy world that things like evaluation — particularly developmental evaluation — inhabit and the lived world of the people whom evaluation serves.

Lessons Learned: Blogging is a fun way to explore your own thinking about evaluation and make friends along the way. I never expected to meet so many interesting people who reached out after reading a blog post of mine or linked to something I wrote. It has also led me to learn about so many other great bloggers. Give a little, get a lot in return, and don’t try to make it perfect. Make it fun and authentic and that will do.

___

Photo by digitalrob70 used under Creative Commons License from Flickr