Everyone's Talking About Developmental Evaluation
“When it rains, it pours,” so says the aphorism about how things tend to cluster. Albert-László Barabási has found that pattern to be indicative of a larger complex phenomenon that he calls ‘bursts’, something worth discussing in another post.
This week, that ‘thing’ seems to be developmental evaluation. I’ve had more conversations, emails and information nuggets placed in my consciousness this week than I have in a long time. It must be worth a post.
Developmental evaluation is a concept widely attributed to Michael Quinn Patton, a true leader in the field of evaluation and its influence on program development and planning. Patton first wrote about the concept in the early 1990s, although it didn’t really take off until recently, in parallel with the growing popularity of complexity science and systems thinking approaches to understanding health and human services.
At its root, Developmental Evaluation (DE) is about evaluating a program in ‘real time’ by looking at programs as evolving, complex adaptive systems operating in ecologies that share this same set of organizing principles. This means that there is no definitive manner to assess program impact in concrete terms, nor is any process that is documented through evaluation likely to reveal absolute truths about the manner in which a program will operate in the future or in another context. To traditional evaluators or scientists, this is pure folly, madness or both. When your business is coming up with the answer to a problem, any method that fails to give you ‘the’ answer is problematic.
But as American literary critic H.L. Mencken noted:
“There is always a well-known solution to every human problem — neat, plausible and wrong.”
Traditional evaluation methods work when problems are simple or even complicated, but rarely do they provide the insight necessary for programs with complex interactions. Most community-based social services fall into this realm, as does much of the work done in public health, eHealth, and education. The reason is that there are few ways to standardize programs that are designed to adapt to changing contexts, or that operate in an environment with no stable benchmark against which to compare.
Public health illustrates the former situation well. Disaster management, disease outbreaks, and wide-scale shifts in lifestyle patterns all produce contexts that shift — sometimes radically — so that the practice that works best today might not be the one that works best tomorrow. We can see this problem in the difficulty with ‘best practice’ models of public health and health promotion, which don’t really look like ‘best’ practices, but rather provide some examples of things that worked well in a complex environment. (It is for this reason that I don’t favour or use the term ‘best practice’ in public health: I simply view too much of the field as operating in the realm of the complex, for which the term is not suited.)
eHealth provides an example of the latter. The idea that we can develop, test and implement successful eHealth interventions and tools on the normal research and evaluation cycle is impractical at best and dangerous at worst. Three years ago Twitter existed only in the minds of a few thousand people; now it has a user population bigger than a large chunk of Europe. Geo-location services like Foursquare, Gowalla and Google Latitude are becoming popular and morphing so quickly that it is impossible to develop a clear standard to follow.
And that is OK, because that is the way things are, not the way evaluators want them to be.
DE seeks to bring some rigour, method and understanding to these problems by creating opportunities to learn from this constant change, using the science of systems to help make sense of what has happened and what is going on now, and to anticipate possible futures for a program. While it is impossible to fully predict what will happen in a complex system, given the myriad interacting variables, we can develop an understanding of a program that accounts for this complexity and creates useful ways of recognizing opportunities. This only really works if you embrace complexity rather than pretend that things are simple.
For example, evaluation in a complex system considers the program ecology as interactive, relationship-based (and often networked) and dynamic. Many traditional evaluation methods seek to understand programs as if they were static; that is, as if the lessons of the past can predict the future. What isn’t mentioned is that we evaluators can ‘game the system’ by developing strategies that generate data that fit well into a model. But if the questions are not suited to a dynamic context, the least important parts of the program will be highlighted, and the true impact of a program might be missed in the service of producing an acceptable evaluation. It is what Russell Ackoff called doing the wrong things righter.
DE also takes evaluation one step further and pairs it with Patton’s utilization-focused evaluation approach, which frames evaluation in a manner that focuses on actionable results. This approach integrates the processes of problem framing, data collection, analysis, interpretation and use, akin to the concept of knowledge integration. Knowledge integration is the process by which knowledge is generated and applied together, rather than independently, and reflects a systems-oriented approach to knowledge-to-action activities in health and other sciences, with an emphasis on communication.
So hopefully these conversations will continue, and DE will no longer be something that peaks in certain weeks, but rather will infuse my colleagues’ conversations about evaluation and knowledge translation on a regular basis.