Everyone's Talking About Developmental Evaluation

“When it rains, it pours,” so says the aphorism about how things tend to cluster. Albert-László Barabási has found that pattern to be indicative of a larger complex phenomenon he calls ‘bursts’, something worth discussing in another post.

This week, that ‘thing’ seems to be developmental evaluation. I’ve had more conversations, emails and information nuggets placed in my consciousness this week than I have in a long time. It must be worth a post.

Developmental evaluation is a concept widely attributed to Michael Quinn Patton, a true leader in the field of evaluation and its influence on program development and planning. Patton first wrote about the concept in the early 1990s, although it didn’t really take off until recently, in parallel with the growing popularity of complexity science and systems thinking approaches to understanding health and human services.

At its root, Developmental Evaluation (DE) is about evaluating a program in ‘real time’ by looking at programs as evolving, complex adaptive systems operating in ecologies that share this same set of organizing principles. This means that there is no definitive manner to assess program impact in concrete terms, nor is any process that is documented through evaluation likely to reveal absolute truths about the manner in which a program will operate in the future or in another context. To traditional evaluators or scientists, this is pure folly, madness or both. When your business is coming up with the answer to a problem, any method that fails to give you ‘the’ answer is problematic.

But as American journalist and critic H.L. Mencken noted:

“There is always a well-known solution to every human problem — neat, plausible, and wrong.”

Traditional evaluation methods work when problems are simple or even complicated, but rarely do they provide the insight necessary for programs with complex interactions. Most community-based social services fall into this realm, as does much of the work done in public health, eHealth, and education. The reason is that there are few ways to standardize programs that are designed to adapt to changing contexts, or that operate in an environment with no stable benchmark to compare against.

Public health illustrates the former situation. Disaster management, disease outbreaks, or wide-scale shifts in lifestyle patterns all produce contexts that shift — sometimes radically — so that the practice that works best today might not be the one that works best tomorrow. We can see this problem demonstrated in the difficulty with ‘best practice’ models of public health and health promotion, which don’t really look like ‘best’ practices, but rather provide some examples of things that worked well in a complex environment. (It is for this reason that I don’t favour or use the term ‘best practice’ in public health: I simply view too much of the field as operating in the realm of the complex, for which the term is not suited.)

eHealth provides an example of the latter. The idea that we can develop, test and implement successful eHealth interventions and tools in a manner that fits the normal research and evaluation cycle is impractical at best and dangerous at worst. Three years ago Twitter existed only in the minds of a few thousand people; it now has a user population bigger than a large chunk of Europe. Geo-location services like Foursquare, Gowalla and Google Latitude are becoming popular and morphing so quickly that it is impossible to develop a clear standard to follow.

And that is OK, because that is the way things are, not the way evaluators want them to be.

DE seeks to bring some rigour, method and understanding to these problems by creating opportunities to learn from this constant change, using the science of systems to help make sense of what has happened, what is going on now, and what possible futures a program might anticipate. While it is impossible to fully predict what will happen in a complex system, due to the myriad interacting variables, we can develop an understanding of a program in a manner that accounts for this complexity and creates useful means of identifying opportunities. This only works if you embrace complexity rather than try to pretend that things are simple.

For example, evaluation in a complex system considers the program ecology as interactive, relationship-based (and often networked) and dynamic. Many traditional evaluation methods seek to understand programs as if they were static — that is, as if the lessons of the past could predict the future. What isn’t mentioned is that we evaluators can ‘game the system’ by developing strategies that generate data that fit well into a model; but if the questions are not suited to a dynamic context, the least important parts of the program will be highlighted, and the true impact of a program might be missed in the service of producing an acceptable evaluation. It is what Russell Ackoff called doing the wrong things righter.

DE also takes evaluation one step further by fitting it with Patton’s Utilization-focused evaluation approach, which frames evaluation in a manner that focuses on actionable results. This approach integrates the processes of problem framing, data collection, analysis, interpretation and use, akin to the concept of knowledge integration. Knowledge integration is the process by which knowledge is generated and applied together, rather than independently, and reflects a systems-oriented approach for knowledge-to-action activities in health and other sciences, with an emphasis on communication.

So hopefully these conversations will continue, and DE will no longer be something that peaks in certain weeks but rather infuses my colleagues’ conversations about evaluation and knowledge translation on a regular basis.

Posted by Cameron D. Norman

I am a designer, psychologist, educator, and strategist focused on innovation in human systems. I'm curious about the world around me and use my role as Principal and President of Cense Ltd. as a means of channeling that curiosity into ideas, questions, and projects that contribute to a better world.

