Developmental projects evolve at a pace that suits them, but what happens when the speed and pattern of this process collide with the other projects in life?
The concepts of developmental evaluation and developmental design resonate with a lot of people working in social innovation, public health, and international programming. The reason is that, despite the wealth of planning frameworks available and the logic embedded within them, the world doesn’t really work according to plan.
As Colin Powell once said (paraphrasing another famous military leader):
“No battle plan survives contact with the enemy.”
While we may accept this as common and expected among our programs, it doesn’t make adapting to these circumstances any easier. And, true to complexity, the more elements added to this mix, the more unpredictable and non-linear things get.
For developmental evaluators, this non-linearity and complexity is part of the job, but when you’re working on multiple projects, that job becomes more challenging to do. For a program manager responsible for budgets and for ensuring the right staff are in place, accounting for the delays and system dynamics associated with program delivery is an enormous undertaking, and can by itself shape the program that is actually delivered. One can’t justify keeping a staff member on simply to wait for things to happen; most of the time that person is found other things to do in the interim. However, those “other things” lead to fragmented attention on what is going on with the program.
Multiply this by manyfold, and you have a truly complex problem affecting a complex program.
What does this mean for developmental design and evaluation? My motivation for writing this post is to solicit ideas and stories about this problem set and to explore some potential solutions. While I personally struggle to maintain focus and momentum on projects that have extended lags or unpredictable, spontaneous patterns of activity, I know that many of those lags are partly caused by those running the programs having other things on their plate. It’s a compounding problem: one person experiences a delay, fills the time with other things that take them away from the project, which creates further delays in other elements of the program, and so on.
From a design standpoint, this is less problematic. These delays can spur creative reflection and action towards generating a product if the time away from action is used for such mindful attending to ideas.
For developmental evaluation, this is more problematic, as the event-process-effect links that we seek to connect become harder to disentangle. Non-linearity doesn’t mean that there is no such thing as cause and effect. It is just that there are consequences arising from events that are nearly impossible to trace back to a single “cause” (which may not exist); nonetheless, something does happen that sparks other things. The more one can attend to such things, the better the quality of the evaluation.
Yet, I argue that the very complexity of these programs requires more, not less, attention when doing evaluation, lest we become simple storytellers. We offer more than that. But to do that well requires sustained attention to the dynamics, and what we might call paying attention to the silences: gleaning lessons from non-action that might have significant impact on our programs. This also requires not “filling the time” when things are quiet, but remaining active. Anyone who practices mindfulness meditation knows that non-doing requires a lot of work!
This sounds nice, but how practical is it? And how do we set benchmarks of a sort to evaluate the silences and justify such active work in times of quiet? Or do we simply ride momentum like others and hope that we can pick things up when the momentum is high?
Photo Speed of Sound by Ana Patricia Almeida used under Creative Commons License from Flickr