Bringing Design into Developmental Evaluation

Designing Evaluation

Developmental evaluation is an approach to understanding and shaping programs in service of those who wish to grow and evolve what they do in congruence with complexity rather than in spite of it. This requires not only feedback (evaluation), but also the skill to use that feedback to shape the program (design); without both, we may end up doing neither.

A program operating in an innovation space, one that requires adaptation, foresight and feedback to make adjustments on the fly, is one that needs developmental design. Developmental design is part of an innovator’s mindset that combines developmental evaluation with design theory, methods and practice. Indeed, I would argue that exceptional developmental evaluations are, by definition, examples of developmental design.

Connecting design with developmental evaluation

The idea of developmental design emerged from work I’ve done exploring developmental evaluation in practice in health and social innovation. For years I led a social innovation research unit at the University of Toronto that integrated developmental evaluation with social innovation for health promotion, and we constantly wrestled with ways to use evidence to inform action. Traditional evidence models are based on positivist social and basic science that aims to hold constant as many variables as possible while manipulating others, enabling researchers or evaluators to make cause-and-effect connections. This is a reasonable model when operating in simple systems with few interacting components. However, health promotion and social systems are rarely simple. Indeed, not only are they not simple, they are most often complex, with many interactions happening at multiple levels and on different timescales simultaneously. Thus, models of evaluation are required that account for complexity.
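To make that concern concrete, here is a toy sketch in Python (my own illustration, not drawn from any particular study): when an intervention interacts with its context, the “effect” measured under controlled conditions can differ in size, and even in direction, from the effect in the field, so a single cause-and-effect estimate can mislead.

```python
# A toy illustration of why holding variables constant can mislead in
# complex systems. The response function and its coefficients are
# hypothetical, chosen only to show an intervention-by-context interaction.

def outcome(intervention: float, context: float) -> float:
    """Hypothetical program response with an interaction term."""
    return 2.0 * intervention + 3.0 * context - 4.0 * intervention * context

# A controlled study holds context at one value and estimates one effect:
effect_lab = outcome(1.0, context=0.2) - outcome(0.0, context=0.2)    # +1.2

# In the field, context varies; the same intervention can even backfire:
effect_field = outcome(1.0, context=0.9) - outcome(0.0, context=0.9)  # -1.6

print(effect_lab, effect_field)
```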

Doing so requires attention both to larger, macro-level patterns of activity within a program, to assess system-level changes, and to the small, emergent properties that are generated from contextual interactions. Developmental evaluation was first proposed by Michael Quinn Patton, who brought complexity theory together with utilization-focused evaluation (PDF) to help program planners and operators develop their programs with complexity in mind while supporting innovation. Developmental evaluation provides a means of linking innovation to process and outcomes in a systematic way without creating rigid, inflexible boundaries that are generally incompatible with complex systems.

Developmental evaluation is challenging enough on its own: it requires an appreciation of complexity, flexibility in how evaluation is understood, and a strong command of multiple evaluation methods to accommodate the diversity of inputs and processes that complex systems introduce. A further complication is knowing how to take that information and apply it meaningfully to the development of the program. This is where design comes in.
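For readers who like to see a process sketched as code, here is a minimal, purely illustrative loop in Python. The names (Signal, Program, sense_make, adapt) are my own hypothetical shorthand, not an implementation of Patton’s method: feedback is gathered, surprising (emergent) signals are privileged, and design translates them into changed practice.

```python
# A minimal sketch of the sense-and-respond loop described above.
# All structures and functions here are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Signal:
    source: str       # where the feedback came from (staff, participants, data)
    observation: str  # what was noticed
    surprising: bool  # emergent, unexpected patterns get design attention first

@dataclass
class Program:
    practices: list[str] = field(default_factory=list)

def sense_make(signals: list[Signal]) -> list[Signal]:
    """The evaluation half: privilege emergent signals over confirmatory ones."""
    return [s for s in signals if s.surprising]

def adapt(program: Program, signals: list[Signal]) -> Program:
    """The design half: translate feedback into changed practice."""
    for s in signals:
        program.practices.append(f"adjusted practice in response to: {s.observation}")
    return program

# One turn of the loop; in practice this repeats for the life of the program.
program = Program(practices=["weekly outreach sessions"])
feedback = [
    Signal("participants", "attendance spikes after peer-led sessions", surprising=True),
    Signal("staff", "intake forms completed on time", surprising=False),
]
program = adapt(program, sense_make(feedback))
print(program.practices)
```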

Design for better implementation

Design is a field that emerged in the 18th century, when mass production first became possible and the creative act was no longer confined to making unique objects but expanded to creating mass-market ones. Ideas were among the things that were mass-produced: the printing press, telegraph and radio, combined with the means of creating and distributing these technologies, made intellectual products easier to produce as well. Design is what OCADU’s Greg Van Alstyne and Bob Logan refer to as “creation for reproduction” (PDF).

Developmental design links this intention of creation for reproduction, and the design for emergence that Van Alstyne and Logan describe, with the foundations of developmental evaluation. It joins the feedback mechanisms of evaluation with the solution generation that comes from design.

The field of implementation science emerged from within the health and medical science community after a realization that simply sharing ideas and generating knowledge was insufficient to produce change without understanding how such ideas and knowledge were implemented. It came from an acknowledgement that there is a science (or an art) to implementing programs, and that by learning how these programs are run and assessed we can do a better job of translating and mobilizing knowledge. Design is the membrane of sorts that holds all of this together, guiding the use of knowledge into the construction and reconstruction of programs. It is the means of holding evaluation data together with the program development and implementation questions posed at the outset.

Without an understanding of how ideas are made manifest in a program, we risk creating more knowledge and less wisdom, more data and less impact. Just as we once made the incorrect assumption that having knowledge was the same as knowing what to do with it or how to share it (which is why fields like knowledge translation and mobilization were born), so too have we assumed that program professionals know how to design their programs developmentally. Creating a program from scratch is one thing; transforming and re-developing a live program is something else.

Developmental design is akin to building a plane while flying it. There are construction skills unique to this situation that differ from, but build on, many conventional theories and methods of program planning and evaluation and, like developmental evaluation, extend beyond them to create a novel approach for a particular class of conditions. In future posts I’ll outline some of the concepts of design that are relevant to this enterprise; in the meantime, I encourage you to visit the Censemaking Library section on design thinking for some initial resources.

The question remains: are we building dry docks for ships at sea, or platforms for constructing aerial, flexible craft that can navigate changing headwinds and currents?

 

Image used under license.

5 thoughts on “Bringing Design into Developmental Evaluation”

  1. Cameron – those are good thoughts to start the new year with. Evaluation is a key to successful design, and is not used often enough to its full potential. I agree with you that designers of programs, processes, products, and buildings do well to evaluate (and get feedback) from a broad spectrum of stakeholders – before, during, and after the design effort. Particularly where the design of physical space (a workplace or health care environment, for example) is done with evaluation and feedback from the organization that will work in it, greater alignment and synchrony can be achieved, leading to more successful implementation and a greater ability to respond to change in future. I’m curious whether in your experience you have found that programs designed for complex adaptive organizations sometimes fail or are constrained by “legacy structures” of their physical space, its hierarchies, flow, and relationships?

    1. Ron, thanks for your comment. I agree very much that legacy systems and structures are a big part of the issue with adaptation and innovation. Often there is a failure to question the status quo because people are so used to it that they forget the status quo was once something new and maybe innovative. In complex systems these are generally considered path dependencies. I wrote about this a couple of years ago, referring to Jaron Lanier’s work on technology systems and how they lock in people’s perceptions of what is possible. Another issue is that many of the organizations I’ve dealt with don’t know how to tackle the issue of design, which is one of the reasons I’ve spent some time looking at it and shifted some of my energy to training teams and program staff on design. Simply contemplating and acknowledging the role of physical space, organizational structures, and the flows that are already in place is a start. It’s all hidden in plain sight, and making it visible helps us acknowledge the role that they play. A real challenge.

  2. Interesting. My first thoughts are about the design science research approach and developing design propositions expressed as CIMO configurations. See, e.g., the work of Tranfield, Denyer and Van Aken. Regards, Steven de Groot

    1. Steven, this is fantastic. Thanks for sharing. I just pulled the article and it has a lot of similarities to what I’ve been writing about. I appreciate you reaching out!
