Developmental evaluation is only as good as the sense that can be made of the data it generates. Assuming that program staff and evaluators already know how to make that sense may be one reason developmental evaluations end up delivering less than they promise.
Developmental Evaluation (DE) is becoming a popular subject in the evaluation world. As complexity gains recognition as a factor in program planning and operations, and as its implications for evaluation become clearer, it is safe to assume that developmental evaluation will continue to attract interest from program staff and evaluation professionals alike.
Yet developmental evaluation is as much a mindset as it is a toolset and skillset; all three are needed to do it well. In this third post in a series on developmental evaluation, we look at the concept of sensemaking and its role in understanding program data in a DE context.
The architecture of signals and sense
Sensemaking in developmental evaluation involves creating an architecture for knowledge: framing the space for emergence and learning (boundary specification), extracting the shapes and patterns of what lies within that space, and then working to understand the meaning behind those patterns and their significance for the program under investigation. A developmental evaluation with a sensemaking component creates a plan for how to look at a program and learn from the data it generates, in light of what has been done and what is to be done next.
Patterns may appear in knowledge, behaviour, attitudes, policies, physical structures, organizational structures, networks, financial incentives or regulations. These are the kinds of features that are likely to create or serve as attractors within a complex system.
To illustrate, architecture can be both a literal and figurative term. In a five-year evaluation and study of scientific collaboration at the University of British Columbia’s Life Sciences Institute, my colleagues Tim Huerta, Alison Buchan and Sharon Mortimer and I explored many of these multidimensional aspects of the program* / institution and published our findings in the American Journal of Evaluation and Research Evaluation journals. We looked at spatial configurations by taking proximity measurements that connected where people work to whom they work with and what they generated. Research indicates that physical proximity makes a difference to collaboration (e.g., Kraut et al., 2002). Yet there is relatively little concrete evaluation of the role of space in collaboration, mostly inferences from network studies (which we also conducted). Few have actually gone into the physical areas and measured distances and people’s locations.
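To make the proximity idea concrete, here is a minimal sketch of the kind of pair-level calculation such a measurement involves. All names, coordinates and ties below are hypothetical illustrations, not data from the study: the point is simply that spatial data (where people sit) and relational data (who collaborates) can be joined pair by pair and compared.

```python
from itertools import combinations
from math import dist

# Hypothetical office coordinates, in metres, for five investigators.
# These names and positions are illustrative only.
locations = {
    "A": (0, 0), "B": (5, 0), "C": (40, 10), "D": (42, 12), "E": (90, 30),
}

# Hypothetical collaboration ties (e.g., drawn from co-authorship records).
ties = {frozenset(("A", "B")), frozenset(("C", "D"))}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Sort each pair's physical distance into collaborating vs other bins.
linked, unlinked = [], []
for a, b in combinations(locations, 2):
    d = dist(locations[a], locations[b])
    (linked if frozenset((a, b)) in ties else unlinked).append(d)

print(f"mean distance, collaborating pairs: {mean(linked):.1f} m")
print(f"mean distance, other pairs: {mean(unlinked):.1f} m")
```

In practice the distances would come from floor plans or walking routes and the ties from network data, but the comparison has this same basic shape.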
Why mention this? Because from a sensemaking perspective the signals provided by the building itself had an enormous impact on the psychology of the collaborations, even if they had only a minor influence on productivity. The architecture of the networks themselves was also a key variable, going beyond simple exchanges of information; without seeing collaborations as networks, we might never have understood why certain activities produced outcomes and others did not.
The same holds for cognitive architecture: the spatial organization of thoughts, ideas, and social constructions. Organizational charts, culture, policies, and regulations all share in creating the cognitive architecture of a program.
Signals and noise
The key is to determine at the outset what kinds of signals to pay attention to. As mentioned in a previous post, design and design thinking are a good precursor and adjunct to an evaluation process (and, as I’ve argued before and will elaborate on, are integral to effective developmental evaluation). Patterns can appear in almost anything and may be made up of physical, psychological, social and ‘atmospheric’ (organizational and societal environmental) data.
This might sound a bit esoteric, but by viewing these different domains with an eye of curiosity, we can see patterns that evaluators can measure, monitor, observe and otherwise record as substance for program decision-making. The data can be qualitative, quantitative, mixed-methods, archival and document-based, or some combination. Complex programs are highly context-sensitive, so the sensemaking process must include diverse stakeholders who reflect the very conditions in which the data are collected. Thus, if we are drawing on front-line worker data, then front-line workers need to be involved.
How this is done can be more or less participatory depending on resources, constraints, values and so forth, but there needs to be some perspective-taking among these diverse agents to truly know what to pay attention to and to determine what is signal and what is noise. Indeed, it is through this exchange of diverse perspectives that the distinction can be ascertained. For example, a front-line worker with a systems perspective, given the opportunity to look at the data, may see a pattern that is unintelligible to a high-level manager. That is what sensemaking can look like in the context of developmental evaluation.
“What does that even mean?”
Sensemaking is essentially the meaning that people give to an experience. Evidence is part of the sensemaking process, although the manner in which it is used is consistent with a realist approach to science, not a positivist one. Context is critical to the making of sense and to the decisions used to act on information gathered from the evaluation. The specific details of the sensemaking process and its key methods are beyond the scope of this post; some key sources and scholars on the topic are listed below. Like developmental evaluation itself, sensemaking is an organic process that brings design, design thinking, strategy and data analytics together in one space. It brings together analysis and synthesis.
From a DE perspective, sensemaking is about understanding what signals and patterns mean within the context of the program and its goals. Even if a program’s goals are broad, there must be some sense of what the program’s purpose is and thus, strategy is a key ingredient to the process of making sense of data. If there is no clearly articulated purpose for the program or a sense of its direction then sensemaking is not going to be a fruitful exercise. Thus, it is nearly impossible to disentangle sensemaking from strategy.
Understanding the system in which the strategy and ideas are to take place — framing — is also critical. An appropriate frame for the program means setting bounds for the system, connecting that to values, goals, desires and hypotheses about outcomes, and the current program context and resources.
Practical sensemaking takes place on a time scale appropriate to the complexity of the information before the participants. If sensemaking is done with a complex program that has a rich history and many players involved in that history, multiple interactions and engagements with participants will likely be needed. This is partly because the sensemaking process is about surfacing assumptions, revisiting the stated objectives of the program, exploring data in light of those assumptions and goals, and then synthesizing it all into some means of guiding future action. In some ways, this is about using hindsight and present sight to generate foresight.
Sensemaking is not just meaning-making; it is also a key step in making change for future activities. Sensemaking reflects one of the key aspects of complex systems: meaning is made in the interactions between things, less in the things themselves.
Building the plane while flying it
In some cases the sense made from data and experience can only be made in the moment. Developmental evaluation has been called “real-time” evaluation by some to reflect the notion that evaluation data are made sense of as the program unfolds. To draw on a familiar metaphor, sensemaking in developmental evaluation is somewhat like building the plane while flying it.
Like developmental evaluation as a whole, sensemaking isn’t a “one-off” event; it is an ongoing process that requires attention throughout the life-cycle of the evaluation. As the evaluator and evaluation team build capacity for sensemaking, the process gets easier and less involved each time it’s done, as the program builds its connection to both its past and present context. However, such connections are tenuous without a larger focus on building mindfulness into the program (whether organization or network) to ensure that reflection and attention are paid to activities on an ongoing basis, consistent with strategy, complexity and the evaluation itself.
We will look at the role of mindfulness in an upcoming post. Stay tuned.
* The Life Sciences Institute represented a highly complicated program evaluation because it was simultaneously bounded as a physical building, a corporate body within a larger institution, and a set of collaborative structures, further complicated by investigator-led initiatives combined with institutional-level ones in which individual investigators were both independent and collaborative. Taken together, this was what was considered the ‘program’.
References & Further Reading:
Dervin, B. (1983). An overview of sense-making research: Concepts, methods and results to date. International Communication Association Meeting, 1–13.
Klein, G., & Moon, B. (2006). Making sense of sensemaking 1: Alternative perspectives. IEEE Intelligent Systems, 21(4), 70–73.
Klein, G., Moon, B., & Hoffman, R. R. (2006). Making sense of sensemaking 2: A macrocognitive model. IEEE Intelligent Systems, 21(5), 88–92.
Kolko, J. (2010). Sensemaking and framing: A theoretical reflection on perspective in design synthesis. In Proceedings of the 2010 Design Research Society (DRS) International Conference on Design & Complexity (pp. 6–11). Montreal, QC.
Kraut, R., Fussell, S., Brennan, S., & Siegel, J. (2002). Understanding effects of proximity on collaboration: Implications for technologies to support remote collaborative work. In Distributed Work (pp. 137–162). MIT Press.
Mills, J. H., Thurlow, A., & Mills, A. J. (2010). Making sense of sensemaking: The critical sensemaking approach. Qualitative Research in Organizations and Management: An International Journal, 5(2), 182–195.
Norman, C. D., Huerta, T. R., Mortimer, S., Best, A., & Buchan, A. (2011). Evaluating discovery in complex systems. American Journal of Evaluation, 32(1), 70–84.
Rowe, A., & Hogarth, A. (2005). Use of complex adaptive systems metaphor to achieve professional and organizational change. Journal of Advanced Nursing, 51(4), 396–405.
Weick, K. E. (1995). The Nature of Sensemaking. In Sensemaking in Organizations (pp. 1–62). Sage Publications.
Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the Process of Sensemaking. Organization Science, 16(4), 409–421.
Developmental evaluation is an approach to understanding and shaping programs in service of those who wish to grow and evolve what they do in congruence with complexity rather than in spite of it. This requires not only feedback (evaluation), but also skills in using that feedback to shape the program (design); without both, we may end up doing neither.
A program operating in an innovation space, one that requires adaptation, foresight and feedback to make adjustments on the fly, needs developmental design. Developmental design is part of an innovator’s mindset that combines developmental evaluation with design theory, methods and practice. Indeed, I would argue that exceptional developmental evaluations are, by definition, examples of developmental design.
Connecting design with developmental evaluation
The idea of developmental design emerged from work I’ve done exploring developmental evaluation in practice in health and social innovation. For years I led a social innovation research unit at the University of Toronto that integrated developmental evaluation with social innovation for health promotion and constantly wrestled with ways to use evidence to inform action. Traditional evidence models are based on positivist social and basic science that aim to hold constant as many variables as possible while manipulating others to enable researchers or evaluators to make cause-and-effect connections. This is a reasonable model when operating in simple systems with few interacting components. However, health promotion and social systems are rarely simple. Indeed, not only are they not simple, they are most often complex (many interactions happening at multiple levels on different timescales simultaneously). Thus, models of evaluation are required that account for complexity.
Doing so requires attention to larger macro-level patterns of activity within a program to assess system-level changes, along with a focus on the small, emergent properties that are generated from contextual interactions. Developmental evaluation was first proposed by Michael Quinn Patton, who brought complexity theory together with utilization-focused evaluation (PDF) to help program planners and operators develop their programs with complexity in mind while supporting innovation. Developmental evaluation provides a means of linking innovation to process and outcomes in a systematic way without creating the rigid, inflexible boundaries that are generally incompatible with complex systems.
Developmental evaluation is challenging enough on its own because it requires appreciation of complexity and a flexibility in understanding evaluation, yet also a strong sense of multiple methods of evaluation to accommodate the diversity of inputs and processes that complex systems introduce. However, a further complication is the need to understand how to take that information and apply it meaningfully to the development of the program. This is where design comes in.
Design for better implementation
Design is a field that emerged in the 18th century, when mass production first became possible and the creative act was no longer confined to making unique objects but expanded to creating mass-market ones. Ideas, too, were mass-produced: the printing press, telegraph and radio, combined with the means of creating and distributing these technologies, made intellectual products easier to produce as well. Design is what OCADU’s Greg Van Alstyne and Bob Logan refer to as “creation for reproduction” (PDF).
Developmental design links this intention of creation for reproduction, and the design for emergence that Van Alstyne and Logan describe, with the foundations of developmental evaluation. It joins the feedback mechanisms of evaluation with the solution generation that comes from design.
The field of implementation science emerged from within the health and medical science community after a realization that simple idea sharing and knowledge generation were insufficient to produce change without an understanding of how such ideas and knowledge were implemented. It came from an acknowledgement that there is a science (or an art) to implementing programs, and that by learning how these programs are run and assessed we could do a better job of translating and mobilizing knowledge. Design is the membrane of sorts that holds all of this together and guides the use of knowledge in the construction and reconstruction of programs. It is the means of holding evaluation data and of shaping the program development and implementation questions at the outset.
Without an understanding of how ideas are manifested in a program, we risk creating more knowledge and less wisdom, more data and less impact. Just as we made the incorrect assumption that having knowledge was the same as knowing what to do with it or how to share it (which is why fields like knowledge translation and mobilization were born), so too have we assumed that program professionals know how to design their programs developmentally. Creating a program from scratch is one thing; transforming and re-developing a live program is something else.
Developmental design is akin to building a plane while flying it. There are construction skills unique to this situation that differ from, but build on, many conventional theories and methods of program planning and evaluation and, like developmental evaluation, extend beyond them to create a novel approach for a particular class of conditions. In future posts I’ll outline some of the concepts of design that are relevant to this enterprise, but in the meantime I encourage you to visit the Censemaking Library section on design thinking for some initial resources.
The question remains: are we building dry docks for ships already at sea, or platforms for constructing flexible aerial craft that can navigate the changing headwinds and currents?