Tag: developmental evaluation

evaluation, systems thinking

American Evaluation Association Conference

Over the next few days I’ll be attending the American Evaluation Association conference in San Antonio, Texas, the biggest gathering of evaluators in the world. Depending on the Internet connection, I will try to do some live tweeting from my account, @cdnorman, and post some blog reflections along the way, so do follow along if you’re interested. In addition to presenting some of the work on team science that I’ve been engaged in with my colleagues at the University of British Columbia and Texas Tech University, I will be looking to connect with the groups and individuals doing work on systems evaluation and developmental evaluation, with an eye to spotting the trends and developments (no pun intended) in those fields.

Evaluation is an interesting area to be a part of. It has no disciplinary home; it has a set of common practices, yet much diversity as well, and it brings together a fascinating blend of people from all walks of professional life.

Stay tuned.

complexity, education & learning, evaluation, social systems

Developmental Evaluation And Accountability

Today I’ll be wrapping up a two-day kick-off to an initiative aimed at building a community of practice around Developmental Evaluation (PDF), working closely with DE’s leader and chief proponent, Michael Quinn Patton. The initiative, founded by the Social Innovation Generation group, is designed in part to bring a cohort of learners (or fellows? we don’t have a name for ourselves yet) together to explore the challenges and opportunities inherent in Developmental Evaluation as it is practiced in the world.

In our introductions yesterday I was struck by how much DE clashes with accountability in the minds of many funders and evaluation consumers. That clash strikes me as strange, given that DE is ideal for providing the kind of close, narrative study of programs as they evolve and innovate that clearly demonstrates what a program is doing (although, due to the complex nature of the phenomenon, it may not be able to fully explain it). But as we each shared our experiences and programs, it became clear that tied to accountability is an absence of understanding of complexity and the ways it manifests itself in social programs and problems.

Our challenge over the next year together will be how to address these and other issues in our practice.

What surprises me is that, while some see DE as lacking rigour, there is strong adherence to other methods that may be rigorous but are completely inappropriate for the problem, and that is considered OK. It is as if doing the wrong thing well is better than doing something a little different.

This is strange stuff. But that’s why we keep learning it and telling others about it so that they might learn too.

complexity, evaluation, research, social systems, systems science

Developmental Evaluation: Problems and Opportunities with a Complex Concept

Everyone's Talking About Developmental Evaluation

“When it rains, it pours,” so says the aphorism about how things tend to cluster. Albert-László Barabási has found that pattern to be indicative of a larger complex phenomenon that he calls ‘bursts‘, something worth discussing in another post.

This week, that ‘thing’ seems to be developmental evaluation. I’ve had more conversations, emails and information nuggets placed in my consciousness this week than I have in a long time. It must be worth a post.

Developmental evaluation is a concept widely attributed to Michael Quinn Patton, a true leader in the field of evaluation and its influence on program development and planning. Patton first wrote about the concept in the early 1990s, although it didn’t really take off until recently, in parallel with the growing popularity of complexity science and systems thinking approaches to understanding health and human services.

At its root, Developmental Evaluation (DE) is about evaluating a program in ‘real time’ by looking at programs as evolving, complex adaptive systems operating in ecologies that share this same set of organizing principles. This means that there is no definitive manner to assess program impact in concrete terms, nor is any process that is documented through evaluation likely to reveal absolute truths about the manner in which a program will operate in the future or in another context. To traditional evaluators or scientists, this is pure folly, madness or both. When your business is coming up with the answer to a problem, any method that fails to give you ‘the’ answer is problematic.

But as American literary critic H.L. Mencken noted:

“There is always an easy solution to every human problem — neat, plausible and wrong”

Traditional evaluation methods work when problems are simple or even complicated, but rarely do they provide the insight necessary for programs with complex interactions. Most community-based social services fall into this realm, as does much of the work done in public health, eHealth, and education. The reason is that there are few ways to standardize programs that are designed to adapt to changing contexts, or that operate in an environment where there is no stable benchmark to compare against.

Public health operates well within the former situation. Disaster management, disease outbreaks, or wide-scale shifts in lifestyle patterns all produce contexts that shift — sometimes radically — so that the practice that works best today might not be the one that works best tomorrow. We can see this problem demonstrated in the difficulty with ‘best practice’ models of public health and health promotion, which don’t really look like ‘best’ practices, but rather provide some examples of things that worked well in a complex environment. (It is for this reason that I don’t favour or use the term ‘best practice’ in public health: I simply view too much of the field as operating in the realm of the complex, for which the term is not suited.)

eHealth provides an example of the latter. The idea that we can develop, test, and implement successful eHealth interventions and tools in a manner that fits the normal research and evaluation cycle is impractical at best and dangerous at worst. Three years ago Twitter existed only in the minds of a few thousand people, and now it has a user population bigger than a large chunk of Europe. Geo-location services like Foursquare, Gowalla and Google Latitude are becoming popular and morphing so quickly that it is impossible to develop a clear standard to follow.

And that is OK, because that is the way things are, not the way evaluators want them to be.

DE seeks to bring some rigour, method and understanding to these problems by creating opportunities to learn from this constant change and by using the science of systems to help make sense of what has happened, what is going on now, and what possible futures a program might anticipate. While it is impossible to fully predict what will happen in a complex system, owing to the myriad interacting variables, we can develop an understanding of a program in a manner that accounts for this complexity and creates useful means of recognizing opportunities. This only really works if you embrace complexity rather than trying to pretend that things are simple.

For example, evaluation in a complex system considers the program ecology as interactive, relationship-based (and often networked), and dynamic. Many traditional evaluation methods seek to understand programs as if they were static; that is, as if the lessons of the past can predict the future. What isn’t mentioned is that we evaluators can ‘game the system’ by developing strategies that generate data that fit neatly into a model. But if the questions are not suited to a dynamic context, the least important parts of the program will be highlighted, and the true impact of a program might be missed in the service of producing an acceptable evaluation. It is what Russell Ackoff called doing the wrong things righter.

DE also takes evaluation one step further and fits it with Patton’s Utilization-Focused Evaluation approach, which frames evaluation in a manner that focuses on actionable results. This approach integrates the processes of problem framing, data collection, analysis, interpretation, and use together, akin to the concept of knowledge integration. Knowledge integration is the process by which knowledge is generated and applied together, rather than independently, and reflects a systems-oriented approach to knowledge-to-action activities in health and other sciences, with an emphasis on communication.

So hopefully these conversations will continue, and DE will no longer be something that peaks in certain weeks, but rather something that infuses my colleagues’ conversations about evaluation and knowledge translation on a regular basis.