Today I’ll be wrapping up a two-day kickoff to an initiative aimed at building a community of practice around Developmental Evaluation (PDF), working closely with DE leader and chief proponent, Michael Quinn Patton. The initiative, founded by the Social Innovation Generation group, is designed in part to bring a cohort of learners (or fellows? we don’t have a name for ourselves yet) together to explore the challenges and opportunities inherent in Developmental Evaluation as it is practiced in the world.
In our introductions yesterday I was struck by how much DE clashes with accountability in the minds of many funders and evaluation consumers. This clash strikes me as strange, given that DE is ideal for providing a close narrative study of programs as they evolve and innovate, one that clearly demonstrates what a program is doing (although, due to the complex nature of the phenomenon, it may not be able to fully explain it). But as we each shared our experiences and programs, it became clear that tied to this concern about accountability is an absence of understanding of complexity and the ways it manifests in social programs and problems.
Our challenge over the next year together will be how to address these and other issues in our practice.
What surprises me is that, while some see DE as insufficiently rigorous, there is strong adherence to other methods that may be rigorous but are completely inappropriate for the problem, and that is somehow considered OK. It is as if doing the wrong thing well is better than doing something a little different.
This is strange stuff. But that’s why we keep learning it and telling others about it, so that they might learn too.