Category: design thinking

Tags: art & design, design thinking, food systems, social systems, systems thinking

“If You Build It…”: A Reflection on a Social Innovation Story

If You Build It is a documentary about a social innovation project aimed at cultivating design skills among youth to tackle educational and social issues in an economically challenged community in North Carolina. The well-intentioned, well-developed story is not unfamiliar to those interested in social innovation, but while inspiring to some, these stories mask bigger questions about the viability, opportunity and underlying systems issues that factor into the true impact of such initiatives. (Note: spoiler alert: this essay discusses the film and its plotlines, but hopefully won’t dissuade you from seeing a good film.)

Last week I had the opportunity to see Patrick Creadon’s terrific new documentary “If You Build It” at the Bloor Hot Docs Cinema in Toronto as part of the monthly Doc Soup screening series. It was a great night of film, discussion and popcorn that inspired not just commentary about the film, but also reflection on the larger story of social innovation to which the film contributes.

If You Build It is the story of the Project H studio that was developed in Bertie County (click here for a film outline) and run for two years by Emily Pilloton and her partner, Matthew Miller. To learn more about the start of the story and the philosophy behind Project H, Emily’s TED talk is worth the watch:

It’s largely a good-news kind of story of how design can make a difference in the lives of young people and potentially do good for a community at the same time. While it made for a great doc and some inspiring moments, the film prompted thoughts about what goes on beyond the narrative posed by the characters, the community and those seeking to innovate through design and education.

Going beyond the story

Stories are often so enjoyable because they can be told in multiple ways, with emphasis placed on different aspects of the plot, the characters and the outcomes. For this reason, they are both engaging and limiting as tools for making decisions and assessing the impact of social interventions. It’s why ‘success stories’ are problematic when left on their own.

One of the notable points raised in the film is that the cost of the program was $150,000 (US), down from the original budget of $230,000 because Emily and Matthew (and, in the final few months, a close friend who helped out) didn’t take salaries. The program was funded through grants. Three trained designers and builders worked to teach students, build a farmers’ market and administer the program at no cost to the district.

The film mentions that the main characters, Matthew and Emily, lived off credit, savings and (presumably additional) grants. While this level of commitment to the Bertie County project is admirable, it’s also not a model that many can follow. Without knowing anything about their family support, savings or debt levels, the idea of coming out of school and working for free for two years is out of reach for most young, qualified designers of any discipline. It is also, as we see in the film, not helpful to the larger cause: it allows Bertie County to abdicate responsibility for the project and lessens the community’s sense of ownership over the outcomes.

One segment of If You Build It looks back on Matthew’s earlier effort to apply what he learned at school by providing a home for a family in Detroit, free of charge, in 2007. Matthew built the house himself and gave it to a family with the sole condition that they pay the utility and electricity bills, which amounted to less than the family had been paying in rent alone at the time. That part of the story ends when Matthew returns to the home a few years later to find the entire inside gutted and deserted, long after having had to evict the original family nine months after they took possession, when they failed to pay even a single bill as agreed.

From Bertie County to Detroit and back

The Detroit housing experience is a sad story, and there is a lot of context we don’t get in the film, but two lessons from that experience are repeated in the Bertie County story. In both cases, we see something offered that wasn’t necessarily asked for, with no up-front commitment or investment, and we see the influence that the larger system has on the outcomes.

In Detroit, a family was offered a house, yet they were transplanted into a neighbourhood that is (like many in Detroit) sparsely populated, depressed and without much infrastructure to enable a family to easily make the house a home. Detroit is still largely a city devoted to the automobile, and there are wide swaths of the city with one usable home on every three or four lots. It’s hard to conceive of that as a neighbourhood. Images like the one below are still common in many parts of the city, even though it is going through a notable re-energizing shift.

Rebirth of Detroit

In the case of Bertie County, the same pattern repeats in a different form. The school district gets an entire program for free, even to the point of refusing to pay salaries for the staff (Emily and Matthew) over two years, after the initial year ended with the building of a brand-new farmers’ market pavilion fully funded by Project H and its grants.

The hypothesis ventured by Patrick Creadon when he spoke to the Doc Soup audience in Toronto was that there was some resentment of the project (which had been initiated by a change-pushing school superintendent who brought Emily and Matthew to the community and was let go at the film’s start) and that entrenched beliefs about education and the way things were done in that community played a role.

Systems + Change = Systems Change (?)

There is a remarkably romantic view of how change happens in social systems. Bertie County received a great deal without providing much in the way of support. While the Studio H project had some community cheerleaders, like the mayor and a few citizens, it appeared from the film that the community and school board were largely disengaged from the activities at Studio H. This invokes memories of Hart’s Ladder of Participation (PDF), which was developed for youth but works for communities, too. When there is a failure to truly collaborate, ownership of the problem and solution is not shared.

At no time in the film do you get a sense of shared ownership of the problem and solution between Studio H, the school board and the community. While the ideas were rooted in design research, the community wasn’t invested, literally, in solving its problems (through design, at least). It exposes a falsehood of design research: the idea that you can understand a community’s needs and address them successfully through simple observation, interviews and data gathering.

Real, deep research is hard. It requires understanding not just the manifestations of the system, but the system itself.

Systems Iceberg

Very often that kind of analysis is missing from these kinds of stories, which makes for great films and books, but not for long-term success.

In a complex system, meaning and knowledge are gained through interactions, so we need stories and data that reflect what kinds of interactions take place and under what conditions. Looking at the systems iceberg model above, the tendency is to focus on the events (the Studio Hs), but often we need to look at the structures beneath.

To be sure, there is a lot to learn from Studio H and from the story presented in If You Build It. The lesson is in the prototyping: Emily and Matthew provide a prototype that shows us it can be done and what kinds of things we can learn from it. The mistake is trying to replicate Studio H as it is represented in the film, rather than seeing it as a prototype.

In the post-screening Q&A with the audience, a well-intentioned gentleman working on school-building in Afghanistan asked Patrick Creadon how or whether he could get Emily and Matthew to come there and help (with pay), and Creadon rightly answered that there are Emilys and Matthews all over the place and that they are worth connecting with.

Creadon is half right. There are talented, enthusiastic people out there who can learn from the experience of the Studio H team, but probably far fewer who have the means to assume the risk that Emily and Matthew did. Those are the small details that separate a good story from a sustainable, systemic intervention that innovates in a way that changes the system. But it’s a start.

If You Build It is in theatres across North America.

Tags: complexity, design thinking, emergence, evaluation, systems science

Developmental Evaluation and Design

Creation for Reproduction

 

Innovation is about channeling new ideas into useful products and services, which is really about design. Thus, if developmental evaluation is about innovation, then it is also fundamental that those engaging in such work, on both the evaluator and program ends, understand design. In this final post in the first series of Developmental Evaluation and…, we look at how design and design thinking fit with developmental evaluation and what the implications are for programs seeking to innovate.

Design is a field of practice that encompasses professional domains, design thinking and critical design approaches. It is a big, creative field, but also a space with much richness in thinking, methods and tools that can aid program evaluators and program operators.

Defining design

In their excellent article on designing for emergence (PDF), OCAD University’s Greg Van Alstyne and Bob Logan introduce a definition they set out to be the shortest, most concise one they could envision:

Design is creation for reproduction

It may also be the best (among many; see the Making CENSE blog for others) because it speaks to what design does, what it is intended to do and where it came from, all at the same time. A quick historical look finds that the term ‘design’ didn’t really exist until the industrial revolution. It was not until we could produce things and replicate them on a wide scale that design actually mattered; prior to that, what we had was simply referred to as craft. One did not supplant the other. However, as societies transformed through migration, technology development and adoption, and shifting political and economic systems that increased collective action and participation, we saw things (products, services and ideas) primed for replication and distribution, and thus designed.

The products, services and ideas that succeeded tended to be better designed for such replication, in that they struck a chord with an audience that wanted to further share and distribute the object. (This is not to say that all things replicated are of high quality or ethical value, just that they found the right purchase with an audience and were better designed for provoking that.)

In a complex system, emergence is the force that provokes the kind of replication we see in Van Alstyne and Logan’s definition of design. With emergence, new patterns arise from activity that coalesces around attractors, and this is what produces novelty and new information for innovation.

A developmental evaluator is someone who creates mechanisms to capture data and channel it to program staff and clients, who can then make sense of it and choose to take actions that stabilize that new pattern of activity in whatever manner possible, amplify it or, if it is not helpful, make adjustments to dampen it.

But how do we do this if we are not designing?

Developmental evaluation as design

A quote from Nobel Laureate Herbert Simon is apt when considering why the term design is appropriate for developmental evaluation:

“Everyone designs who devises courses of action aimed at changing existing situations into preferred ones”.

Developmental evaluation is about modification, adaptation and evolution in innovation (poetically speaking), using data as a provocation and guide for programs. One of the key features that makes developmental evaluation (DE) different from other forms of evaluation is its heavy emphasis on the use of evaluation findings. No use, no DE.

But further, what separates DE from utilization-focused evaluation (PDF) is that the use of evaluation data is intended to foster development of the program, not just use. I’ve written about what development looks like in other posts. No development, no DE.

Returning to Herb Simon’s quote, we see that the goal of DE is to provoke some discussion of development and thus change, so it could be argued that, at least at some level, DE is about design. That is a tepid assertion. A bolder one is that design is actually integral to development, and thus developmental design is what we ought to be striving for through our DE work. Developmental design is not only about evaluative thinking, but design thinking as well. It brings together the spirit of experimentation within complexity and the feedback systems of evaluation with a design sensibility around how to make sense of, pay attention to, and transform that information into a new product evolution (innovation).

This sounds great, but if you don’t think about design then you’re not thinking about innovating, and that means you’re not really developing your program.

Ways of thinking about design and innovation

There are numerous examples of design processes and steps. Full coverage is beyond the scope of a single post and will be expounded on in future posts here and, for tools, on the Making CENSE blog. However, one approach to design (thinking) is highlighted below and is part of the constellation of approaches we use at CENSE Research + Design:

The design and innovation cycle

Much of this process has been examined in the previous posts in this series; however, it is worth looking at again.

Herbert Simon wrote about design as a problem forming (finding), framing and solving activity (PDF). Other authors, like IDEO’s Tim Brown and the Kelley brothers, have written about design further (for more references, check out CENSEMaking’s library section), but essentially the three domains proposed by Simon hold up as ways to think about design at a very basic level.

What design does is make the process of stabilizing, amplifying or dampening the emergence of new information intentional. Without a sense of purpose (and mindful attention to process) and a sensemaking process put in place by DE, it is difficult to know what is advantageous and what is not. Within the realm of complexity we run the risk of amplifying or dampening the wrong things, or ignoring them altogether. This has immense consequences, as even staying still in a complex system is moving: change happens whether we want it or not.

The diagram above places evaluation near the end of the corkscrew process; however, that is a bit misleading, as it implies that DE-related activities come at the end. What is being argued here is that if the stage isn’t set at the beginning by asking the big questions (the problem finding, forming and framing), then efforts to ‘solve’ them are unlikely to succeed.

Without the means to understand how new information feeds into the design of the program, we end up serving data to programs that know little about what to do with it, and one of the dangers in complexity is having too much information that we cannot make sense of. In complex scenarios we want to find simplicity where we can, not add more complexity.

To do this and to foster change is to be a designer. We need to consider the program/product/service user, the purpose, the vision, the resources and the processes in place within the systems we are working in, to create and re-create the very thing we are evaluating while we are evaluating it. In that entire chain we see why developmental evaluators might also want to put on their black turtlenecks and become designers as well.

No, designers don’t all look like this.

 

Photo Blueprint by Will Scullen used under Creative Commons License

Design and Innovation Process model by CENSE Research + Design

Lower image used under license from iStockphoto.

Tags: complexity, design thinking, emergence, evaluation, systems thinking

Developmental Evaluation and Sensemaking

Sensing Patterns, Seeing Pathways

Developmental evaluation is only as good as the sense that can be made from the data received. Assuming that program staff and evaluators know how to do this might be one reason developmental evaluations end up as something less than they promise.

Developmental evaluation (DE) is becoming a popular subject in the evaluation world. As we see greater recognition of complexity as a factor in program planning and operations, and of what it means for evaluations, it is safe to assume that developmental evaluation will continue to attract interest from program staff and evaluation professionals alike.

Yet developmental evaluation is as much a mindset as it is a toolset and skillset, all of which are needed to do it well. In this third in a series of posts on developmental evaluation, we look at the concept of sensemaking and its role in understanding program data in a DE context.

The architecture of signals and sense

Sensemaking and developmental evaluation involve creating an architecture for knowledge, framing the space for emergence and learning (boundary specification), extracting the shapes and patterns of what lies within that space, and then working to understand the meaning behind those patterns and their significance for the program under investigation. A developmental evaluation with a sensemaking component creates a plan for how to look at a program and learn from the kind of data it generates, in light of what has been done and what is to be done next.

Patterns may be knowledge, behaviour, attitudes, policies, physical structures, organizational structures, networks, financial incentives or regulations. These are the kinds of activities that are likely to create or serve as attractors within a complex system.

To illustrate, architecture can be both a literal and a figurative term. In a five-year evaluation and study of scientific collaboration at the University of British Columbia’s Life Sciences Institute, my colleagues Tim Huerta, Alison Buchan, Sharon Mortimer and I explored many of these multidimensional aspects of the program*/institution and published our findings in the American Journal of Evaluation and Research Evaluation journals. We looked at spatial configurations by doing proximity measurements that connected where people work to whom they work with and what they generated. Research has indicated that physical proximity makes a difference to collaboration (e.g., Kraut et al., 2002). There is relatively little concrete evaluation of the role of space in collaboration, mostly just inferences from network studies (which we also conducted); few have actually gone into the physical areas and measured distances and people’s locations.

Why mention this? Because from a sensemaking perspective, the signals provided by the building itself had an enormous impact on the psychology of the collaborations, even if they were only a minor influence on productivity. The architecture of the networks themselves was also a key variable that went beyond simple exchanges of information; without seeing collaborations as networks, it is possible that we would never have understood why certain activities produced outcomes and others did not.

The same thing exists with cognitive architecture: the spatial organization of thoughts, ideas and social constructions. Organizational charts, culture, policies and regulations all share in the creation of the cognitive architecture of a program.

Signals and noise

The key is to determine what kinds of signals to pay attention to at the beginning. As mentioned in a previous post, design and design thinking are a good precursor and adjunct to an evaluation process (and, as I’ve argued before and will elaborate on, integral to effective developmental evaluation). Patterns could be in almost anything, made up of physical, psychological, social and ‘atmospheric’ (organizational and societal environmental) data.

This might sound a bit esoteric, but by viewing these different domains with an eye of curiosity, we can see patterns that evaluators can measure, monitor, observe and otherwise record, and use as substance for programs to make decisions with. This can be qualitative, quantitative, mixed-methods, archival and document-based, or some combination. Complex programs are highly context-sensitive, so the sensemaking process must include diverse stakeholders who reflect the very conditions in which the data is collected. Thus, if we are involving front-line worker data, then front-line workers need to be involved.

The manner in which this is done can be more or less participatory depending on resources, constraints, values and so forth, but there needs to be some perspective-taking from these diverse agents to truly know what to pay attention to and to determine what is signal and what is noise. Indeed, it is through this exchange of diverse perspectives that this can be ascertained. For example, a front-line worker with a systems perspective may, given the opportunity to look at it, see a pattern in data that is unintelligible to a high-level manager. That is what sensemaking can look like in the context of developmental evaluation.

“What does that even mean?” 

Sensemaking is essentially the meaning that people give to an experience. Evidence is part of the sensemaking process, although the manner in which it is used is consistent with a realist approach to science, not a positivist one. Context is critical in the making of sense and in the decisions used to act on information gathered from the evaluation. The specific details of the sensemaking process and its key methods are beyond the depth of this post; some key sources and scholars on the topic are listed below. Like developmental evaluation itself, sensemaking is an organic process that brings an element of design, design thinking, strategy and data analytics together in one space. It brings together analysis and synthesis.

From a DE perspective, sensemaking is about understanding what signals and patterns mean within the context of the program and its goals. Even if a program’s goals are broad, there must be some sense of what the program’s purpose is, and so strategy is a key ingredient in the process of making sense of data. If there is no clearly articulated purpose for the program or sense of its direction, then sensemaking is not going to be a fruitful exercise. Thus, it is nearly impossible to disentangle sensemaking from strategy.

Understanding the system in which the strategy and ideas are to take place — framing — is also critical. An appropriate frame for the program means setting bounds for the system, connecting that to values, goals, desires and hypotheses about outcomes, and the current program context and resources.

Practical sensemaking takes place on a time scale appropriate to the complexity of the information that sits before the participants in the process. If a sensemaking initiative is done with a complex program that has a rich history and many players involved in that history, multiple interactions and engagements with participants will likely be needed. This is in part because the sensemaking process is about surfacing assumptions, revisiting the stated objectives of the program, exploring data in light of those assumptions and goals, and then synthesizing it all to create some means of guiding future action. In some ways, this is about using hindsight and present sight to generate foresight.

Sensemaking is not just about meaning-making; it is also a key step in change-making for future activities. Sensemaking realizes one of the key aspects of complex systems: that meaning is made in the interactions between things, and less in the things themselves.

Building the plane while flying it

In some cases the sense made from data and experience can only be made in the moment. Developmental evaluation has been called “real time” evaluation by some to reflect the notion that evaluation data is made sense of as the program unfolds. To draw on a metaphor illustrated in the video below, sensemaking in developmental evaluation is somewhat like building the plane while flying it.

Like developmental evaluation as a whole, sensemaking isn’t a one-off event; rather, it is an ongoing process that requires attention throughout the life cycle of the evaluation. As the evaluator and evaluation team build capacity for sensemaking, the process gets easier and less involved each time it’s done, as the program builds its connection to both its past and present context. However, such connections are tenuous without a larger focus on building mindfulness into the program, whether organization or network, to ensure that reflection and attention are paid to activities on an ongoing basis, consistent with strategy, complexity and the evaluation itself.

We will look at the role of mindfulness in an upcoming post. Stay tuned.

* The Life Sciences Institute represented a highly complicated program evaluation because it was simultaneously bounded as a physical building, a corporate institution within a larger institution, and a set of collaborative structures further complicated by having investigator-led initiatives combined with institutional-level ones, where individual investigators were both independent and collaborative. Taken together, this was what was considered to be the ‘program’.
References & Further Reading:

Dervin, B. (1983). An overview of sense-making research: Concepts, methods and results to date. International Communication Association Meeting, 1–13.

Klein, G., & Moon, B. (2006). Making sense of sensemaking 1: Alternative perspectives. IEEE Intelligent Systems. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1667957

Klein, G., Moon, B., & Hoffman, R. R. (2006). Making sense of sensemaking 2: A macrocognitive model. IEEE Intelligent Systems, 21(4).

Kolko, J. (2010). Sensemaking and framing: A theoretical reflection on perspective in design synthesis. In Proceedings of the 2010 Design Research Society (DRS) International Conference on Design & Complexity. Montreal, QC.

Kraut, R., Fussell, S., Brennan, S., & Siegel, J. (2002). Understanding effects of proximity on collaboration: Implications for technologies to support remote collaborative work. In Distributed Work (pp. 137–162).

Mills, J. H., Thurlow, A., & Mills, A. J. (2010). Making sense of sensemaking: The critical sensemaking approach. Qualitative Research in Organizations and Management: An International Journal, 5(2), 182–195.

Rowe, A., & Hogarth, A. (2005). Use of complex adaptive systems metaphor to achieve professional and organizational change. Journal of advanced nursing, 51(4), 396–405.

Norman, C. D., Huerta, T. R., Mortimer, S., Best, A., & Buchan, A. (2011). Evaluating discovery in complex systems. American Journal of Evaluation, 32(1), 70–84.

Weick, K. E. (1995). The Nature of Sensemaking. In Sensemaking in Organizations (pp. 1–62). Sage Publications.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the Process of Sensemaking. Organization Science, 16(4), 409–421.

Photo by the author of Happy Space: NORA, An interactive Study Model at the Museum of Finnish Architecture, Helsinki, FI.

Tags: complexity, design thinking, evaluation, innovation, systems thinking

Developmental Evaluation and Complexity

Stitch of Complexity

Developmental evaluation is an approach (much like design thinking) to program assessment and valuation in domains of high complexity, change and innovation. These three terms are used often but are poorly understood in terms concrete enough for evaluators to make much use of. This first post in a series looks at the term complexity and what it means in the context of developmental evaluation.

Science writer and professor Neil Johnson is quoted as saying: “even among scientists, there is no unique definition of complexity – and the scientific notion has traditionally been conveyed using particular examples…” His definition of a science of complexity (PDF) is: “the study of the phenomena which emerge from a collection of interacting objects.” The title of his book, Two’s Company, Three is Complexity, hints at what complexity can mean to anyone who’s tried to make plans with more than one other person.

The Oxford English Dictionary defines complexity as:

complexity |kəmˈpleksitē|

noun (pl. complexities)

the state or quality of being intricate or complicated: an issue of great complexity.

• (usu. complexities) a factor involved in a complicated process or situation: the complexities of family life.

For social programs, complexity involves multiple overlapping sources of inputs and outputs that interact with systems dynamically, at multiple time scales and organizational levels, in ways that are highly context-dependent. That’s a mouthful.

Developmental evaluation is intended to be an approach that takes complexity into account; however, that also means that evaluators and the program designers they work with need to understand some basics about complexity. To that end, here are some key concepts to start that journey.

Key complexity concepts

Complexity science is a big and complicated domain within systems thinking that brings together elements of system dynamics, organizational behaviour, network science, information theory and computational modeling (among others). Although complexity has many facets, some key concepts are of particular relevance to program designers and evaluators; they are introduced below with discussion of what they mean for evaluation.

Non-linearity: The most central starting point for complexity is that it is about non-linearity. That means prediction and control are often not possible, perhaps harmful, or at least not useful as ideas for understanding programs operating in complex environments. Further complicating things, within the overall non-linear environment there exist linear components. This doesn’t mean that evaluators can’t use any traditional means of understanding programs; rather, they need to consider which parts of the program are amenable to linear means of intervention and understanding within the complex milieu. It also means surrendering the notion of ongoing improvement and embracing development as an idea. Michael Quinn Patton has written about this distinction very well in his terrific book on developmental evaluation: development is about adapting to produce advantageous effects for the existing conditions, while improvement is about tweaking the same model to produce the same effects across conditions that are assumed to be stable.

Feedback: Complex systems are dynamic, and that dynamism is created in part by feedback. Feedback is essentially information from the system’s history and present actions that shapes immediate and longer-term future actions. An action leads to an effect, which is sensed and made sense of, leading to possible adjustments that shape future actions. As evaluators, we need to know what feedback mechanisms are in place, how they might operate, and what (if any) sensemaking rubrics, methods and processes are used with this feedback, to understand what role it plays in shaping decisions and actions about a program. This is important because it helps track the non-linear connections between causes and effects, allowing the evaluator to understand what might emerge from particular activities.
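The act-sense-adjust cycle described above can be sketched in a few lines of code. This is a minimal, hedged illustration: the target, activity level and gain values are all invented for the example, and the linear "effect" is a stand-in for whatever a program actually produces.

```python
# A toy feedback loop: act -> sense the effect -> adjust the next action.
# All numbers here are hypothetical, chosen only to illustrate the cycle.
target = 100.0    # desired outcome level (say, weekly attendance)
activity = 20.0   # current level of program activity
gain = 0.1        # how strongly feedback shapes the next action

history = []
for week in range(20):
    outcome = 4.0 * activity      # the effect produced by current activity
    error = target - outcome      # feedback: the gap between goal and effect
    activity += gain * error      # adjustment that shapes future action
    history.append(outcome)

# With each cycle the outcome moves closer to the target; by the end of
# the run it sits within a tiny fraction of it.
```

The same loop with an oversized or wrongly signed gain oscillates or diverges, which is one reason understanding a program's feedback mechanisms matters before acting on them.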

Emergence: What comes from feedback in a complex system are new patterns of behaviour and activity. Because of the ongoing, changing intensity, quantity, and quality of information generated by the system’s variables, the feedback may look different each time an evaluator looks at it. What comes from this differential feedback can be new patterns of behaviour that depend on the variability in the information; this is called emergence. Evaluation designs need to be in place that enable the evaluator to see emergent patterns form, which means setting up data systems with the appropriate sensitivity. This means knowing the programs and the environments they operate in, and doing advance ‘ground-work’ to prepare for the evaluation by consulting program stakeholders and the literature and doing preliminary observational research. It requires evaluators to know — or at least have some idea of — what the differences are that make a difference. That means knowing first what patterns exist, detecting what changes in those patterns, and understanding whether those changes are meaningful.

Adaptation: With these new patterns and sensemaking processes in place, programs will consciously or unconsciously adapt to the changes created through the system. If a program is operating in an environment where complexity is part of the social, demographic, or economic landscape, even a stable, consistently run program will require adaptation simply to stay in the same place, because the environment is moving. This means sufficiently detailed record-keeping is needed — whether through program documents, reflective practice notes, meeting minutes, observations, etc. — to monitor what current practice is, link it with the decisions made using the feedback, emergent conditions, and sensemaking from the previous stages, and then track what happens next.

Attractors: Not all of the things that emerge are useful, and not all feedback supports advancing a program’s goals. Attractors are patterns of activity that generate emergent behaviours and ‘attract’ resources — attention, time, funding — in a program. Developmental evaluators and their program clients seek to find attractors that are beneficial to the organization and amplify those to ensure sustained or possibly greater benefit. Negative (unhelpful) attractors do the opposite, so knowing when they form enables program staff to dampen their effect by adjusting and shifting activities.
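One classic illustration of an attractor comes from the logistic map: for some parameter values, almost every starting point is pulled toward the same stable pattern. A minimal sketch (a textbook example, not drawn from the post):

```python
# Sketch of a fixed-point attractor. For r = 2.5 the logistic map
# x_next = r * x * (1 - x) pulls almost any starting value toward the
# same equilibrium, x* = 1 - 1/r = 0.6. Parameters are illustrative.

def settle(x0, r=2.5, steps=200):
    """Iterate the logistic map long enough to settle onto its attractor."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Different starting points, same attractor: activity gets 'pulled'
# toward the same pattern regardless of where it began.
for x0 in (0.1, 0.4, 0.9):
    print(round(settle(x0), 6))
```

The organizational analogy is loose but useful: an attractor is a pattern the system keeps returning to, which is exactly why beneficial ones are worth amplifying and unhelpful ones worth dampening.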

Self-organization and Co-evolution: Tied with all of this are the concepts of self-organization and co-evolution. The previous concepts come together to create systems that self-organize around these attractors. Complex systems do not allow us to control and predict behaviour, but we can direct actions, shape the system to some degree, and anticipate possible outcomes. Co-evolution refers to the principle that organisms (and organizations) operating in complex environments are mutually affected by each other. This mutual influence might be different for each interaction, affecting each organization or organism differently, but it points to the notion that we do not exist in a vacuum. For evaluators, this means paying attention to the system(s) the organization is operating in. Whereas normative, positivist science aims to reduce ‘noise’ and control for variation, in complex systems we can’t do this. Network research, system mapping tools like causal loop diagrams and system dynamics models, gigamapping, or simple environmental scans can all contribute to the evaluation, enabling the developmental evaluator to know what forces might be influencing the program.

Ways of thinking about complexity

One of the most notable challenges for developmental evaluators, and for those seeking to employ developmental evaluation, is the systems thinking that complexity requires. It means accepting non-linearity as a key principle in viewing a program and its context. It also means that context must be accounted for in the evaluation design. Simplistic assertions about methodological approaches (“I’m a qualitative evaluator / I’m a quantitative evaluator”) will not work. Complex programs require attention to macro-level contexts and moment-by-moment activities simultaneously, and at the very least demand mixed-method approaches to their understanding.

Although much of the science of complexity is based on highly mathematical, quantitative science, its practice as a means of understanding programs is quantitative, qualitative, and synthetic. It requires attention to the context and nuances that qualitative methods can reveal and the macro-level understanding that quantitative data can produce from many interactions.

It also means moving away from the language of program improvement towards one of development, and that might be the hardest part of the entire process. Development requires adapting the program, thinking and rethinking about its resources and processes, and integrating feedback into an ongoing set of adjustments that perpetuate through the life cycle of the program. This requires a different kind of attention, different methods, and a different commitment from both a program and its evaluators.

In the coming posts I’ll look at how this attention gets realized in designing and redesigning the program as we move into developmental design.

 

behaviour change, complexity, design thinking, innovation, psychology

The Organizational Zombie Resistance Kit

How to thwart a zombie


Zombies — unaware, semi-conscious, distracted individuals — are all around us and running many of the organizations we work in or with. And just like combating real zombies, there is a need to target the head.

There is much musing about what a zombie apocalypse might look like, but anyone paying attention to what is going on around them might not have to imagine it; they’d be forgiven for thinking it is already here. Whether it’s people glued to cellphones while walking/running/biking/driving, asking ‘dumb’ questions immediately after the answer has been given, or scientists lazily allowing junk to pass peer review, we are surrounded by zombie-like behaviour.

As discussed in a previous post, the zombies are already here. A zombie in this context exhibits mindless attention in a manner that restricts awareness and appreciation of one’s immediate context and the larger system in which that behaviour occurs. Zombies are great fodder for horror movies, but lousy companions on the journey of life and even worse problem solvers. Building resistance to them involves more than just aiming for the head; it means aiming for the heart (of an organization). Thankfully, there are methods and tools that can do that, and thus CENSEMaking brings you the Zombie Resistance Kit.

Building resistance to zombies

I am a professional zombie hunter. I do this by helping organizations to be more mindful. A mindful organization is aware of where it sits in the systems it inhabits, connects its current context to its past, and from those places envisions paths to futures not yet realized; it is part psychology, part strategic foresight, and part research and evaluation. How it translates this knowledge into value is design.

Building a mindful organization — one resistant to zombies — requires inoculation through awareness. There are eight broad areas of attention.

1. Grounding is a process of holding to where you are by first revealing to yourself where that is. It is about locating yourself within the system you are in and connecting to your history. Mindfulness is often seen as being focused on the present moment, but not at the expense of the past. Understanding the path you took to get to the present allows you to see path dependencies and habits and mindfully choose whether such pathways are beneficial and how they relate to the larger system. Surfacing assumptions and system mapping are key methods and tools to aid in the process of grounding an organization.

2. Attunement is a means of syncing yourself to the environment and your role within it (after having been grounded) and increasing your receptor capacity for sensing and learning. It is about calibrating one’s mission, vision, and strategy with the system, purposefully and intentionally building your awareness of how harmonious they are for your organization. When attuned to what is going on — literally being tuned into the signals around you — the potential to see and process both strong and weak signals is heightened, increasing sensemaking and sensing capability at the same time. The ability to see the system and understand what it means for who you are and what you do is a terrific means of combating zombie-like thinking.

3. Discovery: Encouraging curiosity and promoting a culture of inquiry is another key means of enhancing awareness. Kids are constantly amazed by the things they see and experience every day. The world is no less amazing today than it was when we were kids, but the pressures to act and ‘be’ particular ways can greatly inhibit the natural curiosity that we all have about what is going on around us. Encouraging discovery and asking critical questions about what we find is a means of enhancing overall engagement with the raw materials of our enterprise. It is risky because it might call into question some long-held assumptions that are no longer true, but if people are genuinely supported in asking these questions, an organization increases the number of ‘sensors’ it has across conditions, roles, and sectors, generating new, context-ready knowledge that can seed innovation and enhance overall resiliency.

4. Creativity: Application of creative methods of problem finding, framing and solving via design thinking is a means of promoting engagement and seeing systems solutions. Design thinking can be a means of creative facilitation that guides mindful development, discovery, synthesis and solution proposals. Encouraging generation of ideas of all types, firsthand research, creation of prototypes, and the opportunity to test these prototypes in practice allow for individuals to claim legitimate ownership of the problem space and the solution space. This ownership is what creates true investment in the work and its outcomes, which is what zombies lack.

5. Strategic Foresight: By envisioning not only what a design can produce in the short term, but also a future for what is created today in the years ahead, we build commitment to long-term goals. Strategic foresight brings together all of the preceding components to envision what possible futures might look like so that an organization can better prepare for them or even create them. It is a structured means of visualizing possible futures based on current trends, data-driven projections, models, and the strategic priorities of the organization; it connects present activities to the past and projects possible futures from all of this, giving the zombie a reason to stop its relentless, blind pursuit of an unaware present goal.

6. Focus: While creative thinking is useful for enhancing divergent perspective-taking and seeing new possibilities, focus allows for attention to the critical path and refinement of strategy to fit the context, desires, capacity, and intentions. Of the many futures that a strategic foresight process might produce, focusing energy on those that are most beneficial, congruent with goals and desires, and synchronous with the systems an organization engages is another way to shock mindless thinking out of its zombie-like state. A focus provides a richer experience and something to strive for.

7. Knowledge Integration: Introducing possibilities, building a creative culture, enhancing receptor capacity, and building a focus are not sustainable if knowledge isn’t integrated throughout the process of moving forward; this is the knowledge practice behind developmental design. Knowledge integration involves critically examining the organizational structure and culture to observe current knowledge practices. Do you have the right tools? The ability to use those tools effectively and make sense of the findings? Is the system understood and aligned to the purpose and resources available? When your system is aligned and structures are put in place to work with that alignment, knowledge is put to use.

8. Design Cycling: Developmental design is the means of engaging in ongoing evaluation and design simultaneously, while knowledge integration takes the learning from those products and incorporates it into the DNA of the organization. Design cycling is the process by which this unfolds and iteratively repeats over cycles of innovation. Invariably, organizations drift a little, and framing the innovation process as a cycle acknowledges that even the best ideas will ebb and flow and require renewal. This cyclical process encourages us to return to the first stage, an approach consistent with the Panarchy model of life-cycle development in complex systems. Everything runs its course. This natural systems perspective is a pillar of the work on sustainable development in natural systems.

This model of development and organizational awareness provides a balm against zombie-like behaviour. It gets people excited, produces visible results that can be scrutinized in a transparent way, and heightens engagement by bringing everyone in an organization into the role of problem framing, finding, and solving. It enhances accountability for everyone, who are now enlisted as creators, researchers, designers, and sensemakers.

By being more aware and alive we better engage brains rather than use that grey matter as food for zombies.

For more details on using this approach with your organization contact CENSE Research + Design.

Photo credit: From Zombie Walk 2012 SP collection by Gianluca Ramahlo Misiti used under Creative Commons Licence

art & design, behaviour change, design thinking

The Literacy of Change

Making Change


It has been said that change is the only constant, yet for something so pervasive, change is a remarkably thorny and poorly understood concept. One reason is that change is often approached as a set of facts about a static state of affairs instead of as a literacy, which looks at change in a far more dynamic context that reflects human systems more accurately.

A summary of the literature on behaviour change research shows an enormous push towards cognitive rational models of thinking and acting. In brief, a cognitive rational approach to change posits that we acquire information; process that information’s relevance, timeliness, (possible) completeness*, and fit with our personal context; and then usually make plans to change. These plans, as described in the Transtheoretical Model and related theories, include a series of stages of change:

  1. Precontemplation: We are not even thinking about making a change
  2. Contemplation: We are considering making a change in the next six months
  3. Preparation: We are currently planning to make a change in the next 30 days and are taking some steps toward making that change a reality
  4. Action: We are actively working to make the change happen
  5. Maintenance: We are strengthening the new changed state
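The stages above can be sketched as a strictly ordered sequence, which makes the model’s linearity assumption explicit. This rendering is purely illustrative (not code from any behaviour-change toolkit):

```python
# A sketch of the Transtheoretical Model's stages as an ordered sequence.
# Encoding it this way makes the linearity assumption visible: the model
# implies one fixed path from precontemplation to maintenance.

from enum import IntEnum

class Stage(IntEnum):
    PRECONTEMPLATION = 1  # not thinking about making a change
    CONTEMPLATION = 2     # considering change within six months
    PREPARATION = 3       # planning change within 30 days
    ACTION = 4            # actively making the change happen
    MAINTENANCE = 5       # strengthening the new changed state

def next_stage(stage: Stage) -> Stage:
    """Advance to the next stage; maintenance is the end state."""
    return stage if stage is Stage.MAINTENANCE else Stage(stage + 1)

# The model's assumed path is a single straight line:
path = [Stage.PRECONTEMPLATION]
while path[-1] is not Stage.MAINTENANCE:
    path.append(next_stage(path[-1]))
print([s.name for s in path])
```

Writing the model down this way shows exactly what the critique below targets: real change rarely follows one fixed, forward-only sequence.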

To the rational mind, this makes sense. Like other cognitive rational models of change, such as the Health Belief Model and the Theories of Reasoned Action and Planned Behaviour, this model assumes change is linear, straightforward, and planned for. While some changes are planned, many are not. In a review of the literature on smoking cessation, Robert West and colleagues found that more than half of smokers quit without going through these stages or having any quit plan in place.

West and colleagues have since developed a model called the Behaviour Change Wheel that aims for a more nuanced, less prescriptive way of understanding change. The Wheel is refreshing within a sea of linear stage or stepped approaches, yet it is still an approach that encourages behaviour change to be deconstructed into parts first, then assembled into a whole. Complex systems — like much of what humans create individually and socially — are highly resistant to having linear frameworks imposed on them. This is why many approaches to change fail to meet expectations.

Viewing change as a literacy

What if we stepped back and took a look at change a little differently?

Language is among the most adaptive, dynamic, change-oriented systems we engage with on any given day; thus, it might provide a model for considering change. If I am trying to explain something to you, there are myriad ways to do it. By using my words in different combinations, with different tones and cadences, and adding some visual thinking to the mix, I create a potent method for communication.

Learning a language requires a lot of failure and many awkward phrases; through a complex array of trial and error, repetition, and luck, we eventually learn how to communicate. Depending on the task at hand, we learn the language skill that matters. For someone looking to navigate a tourist area in another country, a set of basic phrases might suffice. For someone else, getting a technical job in a global organization might require a far deeper understanding of the language. In our efforts to change ourselves, our organizations, and our communities, we will find ourselves requiring different levels of skill, and ‘good enough’ will look different from situation to situation depending on who we are talking with.

To take the metaphor further, we are frequently talking to different audiences at the same time.

Design (thinking) change

We learn languages much like designers apply design thinking. We scope out our intended purpose, craft a strategy, prototype, evaluate and redesign accordingly until we get something right. Over time we get enough feedback to understand deeper patterns that allow for more sophisticated experimentation. If the feedback quality is high, we learn faster and more deeply.

Learning is by its very nature change. Language is applied learning; you can’t ‘theoretically’ learn a language — it’s useless without application.

Language learning also requires attention to learning on multiple levels simultaneously. It involves small, moment-to-moment interactions, something I call ‘micro-feedback’. This is akin to mindfulness-based approaches to innovation. By establishing a sensitivity to the situation and creating the means to reflect often on what is happening while it is happening, without judgement, we create the means to observe ourselves behave and make modifications along the way. It allows us to rapidly prototype our learning as we go.

It also requires macro-feedback mechanisms that focus on the bigger picture. Language is not just about words and sentences, but a larger way of phrasing that separates being ‘well spoken’ from just ‘speaking well’.

Viewing change developmentally

If change is a constant, viewing change as a persistent, ongoing process changes the way we look at it. By looking at change as a core feature of a complex system — one that brings together linear and non-linear elements, is constantly evolving, and involves random, planned, and hybrid elements — we can see change in a manner that fits with complexity, rather than works against it.

Just as our language develops over time, so does the way we change. Viewing change as something that parallels the way we approach language might just be truer to how we really change, and to how we want change to happen.

References:

Bridle, C., Riemsma, R. P., Pattenden, J., Sowden, A. J., Mather, L., Watt, I. S., & Walker, A. (2005). Systematic review of the effectiveness of health behavior interventions based on the transtheoretical model. Psychology & Health, 20(3), 283–301.

Michie, S., van Stralen, M. M., & West, R. (2011). The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implementation Science, 6(1), 42.

West, R. (2005). Time for a change: putting the Transtheoretical (Stages of Change) Model to rest. Addiction, 100(8), 1036–9.

* The argument for completeness of information is based on the often flawed assumption that we know what is missing from an informational context and how it fits with the other data we have before us.

Photo taken at the Heidelberg Project in Detroit, MI by Cameron Norman. Original art by Tyree Guyton.

complexity, design thinking, education & learning, systems thinking

Integrative Thinking And Empathy in Systems

Seeing What You're Reaching For And With


Award-winning Canadian author and University of Toronto professor David Gilmour came under social media fire for comments about his stance of including only male, middle-aged writers in the reading lists for his undergraduate English courses, because that is the experience he resonates with most. Drawing on what you know is both wise and foolish when looked at from the perspective of systems change, and by looking within and beyond our own boundaries we can see how.

Richard Katz knows what it is like to be an outsider and to see the world both from deep within and from far outside a culture. Katz, a former professor and elder with the First Nations University of Canada and a Harvard-trained anthropologist, was one of the first non-native individuals to be welcomed into the lives of the Ju|’hoansi people of the Kalahari in southern Africa. The Ju|’hoansi are known to Westerners as ‘the Bushmen‘ and were the ‘stars’ of the film The Gods Must Be Crazy. His journey and decades-long experience with these peoples are chronicled in two remarkable books on healing and culture.

Dr Katz worked closely with my undergraduate advisor and mentor, Dr. Mary Hampton, a remarkable community psychologist and her husband (and elder) Dr. Eber Hampton, and would occasionally come to meet and speak with us eager students and the healing communities in Regina, where I studied. In life, but particularly in working with affairs of the heart and soul (which is the stuff of healing and community), Katz would say:

Talk only of what you know

I didn’t fully understand the meaning of this when I heard it; that came much later in life. As someone interested in the science as well as the art of healing, I struggled to understand how we could avoid speaking of things unknown if we were seeking discovery — which is about making the unknown known. Over time I came to ‘know’ more about what Katz meant: that our perspective is one of many in a system, and it is one that, if contemplated and welcomed with an open mind and heart, is valid and true while also being apart and unique. While we hold a stance, those around us have their own perspectives and stances that are both the same and different, and in this lies the heart of healing.

Katz was trying to warn students and other researchers against the idea that we can just go into some place and ‘know it’ without being in it and that even in immersing ourselves in the worlds of others we are still but a traveller, just as they are in ours. He also suggested that we can’t know other systems without knowing our own (my words, not his).

It is the paradox that we can connect on a fundamental human level and still hold an independent, personal account. Being at one and apart at the same time. This is a hallmark feature of a complex system. It is also what makes integrative thinking and empathy so critical in such systems.

Knowing me, knowing you

This brings us back to professor Gilmour. Speaking to the online culture magazine Hazlitt, David Gilmour said that he doesn’t teach books written by women, just men. This caused a predictable uproar in social media (see Storify link below).

In an interview with the Toronto Star, Gilmour tried to clarify his comments:

“My only point is that I tend to teach people whose lives are close to my own,” said Gilmour, who has taught at the university for seven years. “I’m an old guy and I understand about old guys.”

On the surface, Gilmour is doing just what Dick Katz implored us all to do: speak of what you know. Gilmour knows ‘old guys’ (who are White and straight) and not women or other ‘groups’. He is being authentic and true to his experience.

What Gilmour is missing on this topic is the empathy that is so important in working with complexity. Teaching English to undergraduates might not be an obvious example of systems thinking and complexity, but it can be. As Gilmour points out, English is about a point of view, which is another way of saying it’s about where you stand. The ‘old guys’ Gilmour includes in his courses are able to tell a narrative from a point of view. That makes for good literature.

Yet, it is the reader’s ability to adopt, interpret, experience and critique the point of view of a story character that makes a literary work compelling. That is in large part about empathy. Great writers make empathy easy. By being empathic, we see a setting or context — a system — that might be unfamiliar to us in ways that seem familiar by bringing us momentarily into the world of the other. This familiarity allows us to draw on the experience we have in other settings and contexts and apply them to the new one.

The degree to which this has harmony and congruence with the narrative being told is a measure of the fit between data from one context and another.

This is what we do in systems work. For Gilmour, the complexity in his system comes not from his perspective but from that of his students. They are women, maybe GLBT, and most certainly from other age groups, cultural groups, and geographic contexts. Gilmour asks his students to empathize with his ‘old guy’ narrative while forgetting that he, too, can empathize with the narrative of someone who is Asian, queer, or speaks Catalan, welcoming into the classroom narratives that are not from the perspective he is most familiar with. Indeed, it is when we extend ourselves beyond the most familiar narratives to find something resonant in other narratives that we learn, discover, and innovate.

Integrative (Design) Thinking

Integrative thinking is a concept that Roger Martin, also from the University of Toronto, has made popular and integrated into the teaching at the Rotman School of Business. (Indeed, Rotman’s marketing material brands itself as providing “a new way to think”). This style of thinking, which Martin has written about extensively through his research on CEO decision making, has been closely linked with design thinking, which is also tied closely to thinking about systems. It is about holding different ideas together at the same time and building models of reality through the exploration of these opposable thoughts.

It is a vehicle for empathy to flow through connecting feelings and observations with thoughts and prototyping actions. This is ultimately what we do when we design for engagement in complex systems. We aim to place ourselves in the system we seek to influence, learn where we are in relation to the boundaries we see, set those boundaries (maintaining flexibility throughout) and then build mechanisms to get feedback and probe the culture we are a part of — organizationally, individually and so on — to enable us to take some action. This continues in an iterative manner throughout our engagement with the system.

Integrative thinking combined with empathy allows us to engage human systems we don’t fully know in a meaningful way that recognizes our limits — speaking to Katz’s point about ‘talking about what we know’ — while opening up possibilities for communion on issues of shared concern.

This means that we can know others, but also that we can only know them as ourselves. It also means that the systems change we seek in our social world is both an intensely personal journey and one that shares our common humanity, regardless of whether we are looking at shifting an organization, a community or a global culture.

Perhaps by taking a bigger view, professor Gilmour might find the same passion in literature from a different perspective and ultimately find how it’s also very much the same.