Developmental evaluation is an approach to understanding and shaping programs in service of those who wish to grow and evolve what they do in congruence with complexity rather than in spite of it. This requires not only feedback (evaluation), but also the skill to use that feedback to shape the program (design), for without both, we may end up doing neither.
A program operating in an innovation space, one that requires adaptation, foresight and feedback to make adjustments on the fly, is one that needs developmental design. Developmental design is part of an innovator's mindset that combines developmental evaluation with design theory, methods and practice. Indeed, I would argue that exceptional developmental evaluations are, by definition, examples of developmental design.
Connecting design with developmental evaluation
The idea of developmental design emerged from work I've done exploring developmental evaluation in practice in health and social innovation. For years I led a social innovation research unit at the University of Toronto that integrated developmental evaluation with social innovation for health promotion, and we constantly wrestled with ways to use evidence to inform action. Traditional evidence models are rooted in positivist social and basic science, which aims to hold constant as many variables as possible while manipulating others so that researchers or evaluators can make cause-and-effect connections. This is a reasonable model when operating in simple systems with few interacting components. However, health promotion and social systems are rarely simple. Indeed, they are most often complex, with many interactions happening at multiple levels and on different timescales simultaneously. Thus, models of evaluation are required that account for complexity.
Doing so requires attention to the larger macro-level patterns of activity within a program, to assess system-level changes, as well as to the small, emergent properties generated from contextual interactions. Developmental evaluation was first proposed by Michael Quinn Patton, who brought complexity theory together with utilization-focused evaluation (PDF) and helped program planners and operators develop their programs with complexity in mind while supporting innovation. Developmental evaluation provided a means of linking innovation to process and outcomes in a systematic way without creating the rigid, inflexible boundaries that are generally incompatible with complex systems.
Developmental evaluation is challenging enough on its own: it requires an appreciation of complexity, flexibility in how evaluation is understood, and a strong command of multiple evaluation methods to accommodate the diversity of inputs and processes that complex systems introduce. A further complication, however, is the need to understand how to take that information and apply it meaningfully to the development of the program. This is where design comes in.
Design for better implementation
Design is a field that emerged in the 18th century, when mass production first became possible and the creative act was no longer confined to making unique objects but expanded to creating mass-market ones. Ideas themselves were among the things mass-produced: the printing press, telegraph and radio, combined with the means of creating and distributing these technologies, made intellectual products easier to produce as well. Design is what OCADU's Greg Van Alstyne and Bob Logan refer to as "creation for reproduction" (PDF).
Developmental design links this intention of creation for reproduction, and the design for emergence that Van Alstyne and Logan describe, with the foundations of developmental evaluation. It joins the feedback mechanisms of evaluation with the solution generation that comes from design.
The field of implementation science emerged from within the health and medical science community after a realization that simple idea sharing and knowledge generation were insufficient to produce change without an understanding of how those ideas and that knowledge were implemented. It came from an acknowledgement that there is a science (or an art) to implementing programs, and that by learning how programs are run and assessed we can do a better job of translating and mobilizing knowledge. Design is a membrane of sorts that holds all of this together and guides the use of knowledge in the construction and reconstruction of programs. It is the means of holding evaluation data and of shaping the program development and implementation questions at the outset.
Without an understanding of how ideas are made manifest in a program, we risk creating more knowledge and less wisdom, more data and less impact. Just as we made the incorrect assumption that having knowledge was the same as knowing what to do with it or how to share it (which is why fields like knowledge translation and mobilization were born), so too have we assumed that program professionals know how to design their programs developmentally. Creating a program from a blank slate is one thing; transforming and re-developing it live is something else.
Developmental design is akin to building a plane while flying it. The construction skills unique to this situation differ from, but build on, many conventional theories and methods of program planning and evaluation and, like developmental evaluation, extend beyond them to create a novel approach for a particular class of conditions. In future posts I'll outline some of the design concepts relevant to this enterprise; in the meantime, I encourage you to visit the Censemaking Library section on design thinking for some initial resources.
The question remains: are we building dry docks for ships at sea, or platforms for constructing flexible aerial craft that can navigate the changing headwinds and currents?