Category: evaluation

complexity, emergence, evaluation, innovation, social systems

The Mindful Socially Innovative Organization

Mindful Eye on the Organization

In complex systems there is a lot to pay attention to. Mindfulness and contemplative inquiry built into the organization can be a way to deal with complexity and help detect the weak signals that will make it thrive and be resilient in the face of challenges.

Most human-centred social ventures spend much of their time in the domain of complexity. What makes these complex is not the human part, but the social. As we interact with our myriad beliefs, attitudes, bases of knowledge, and perceptions we lay the foundation for complexity and the emergent properties that come from it. It’s why we are interesting as a species and why social organizing is such a challenge, particularly when we encourage free-flowing ideas and self-determination. Because of this complexity, we get exposed to a lot of information that gets poorly filtered, poorly synthesized, or missed altogether. Yet, it is in this flotsam and jetsam of information that keys to future problems and potential ‘solutions’ to present issues might lie. This is the power of weak signals. But how do we pay attention to these? And why does it matter?

The Strength of Weak Signals

A human social organization, which could mean a firm, a network, or a community — any collection of people that is organized by itself or other means — most likely generates complexity, sometimes often and sometimes occasionally. If we consider the Cynefin Framework, the domain of complexity is where emergent, novel practice is the dominant means of acting. In order to practice effectively within this space, one probes the environment, engages in sensemaking based on that information, and then responds appropriately. Viewed from another perspective, this could easily be used to describe mindfulness practice.
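
The probe-sense-respond cycle can be sketched in code. What follows is a minimal illustrative sketch, not a prescription: the environment, the signals, and the response table are entirely hypothetical placeholders for whatever an organization actually observes.

```python
from collections import Counter

def probe(environment):
    """Probe: gather raw observations from the environment, without filtering."""
    return list(environment)

def sense(observations):
    """Sense-making: identify the dominant pattern in what was observed."""
    counts = Counter(observations)
    pattern, _ = counts.most_common(1)[0]
    return pattern

def respond(pattern, responses):
    """Respond: act on the pattern if a response exists; otherwise keep probing."""
    return responses.get(pattern, "probe again")

# Hypothetical environment: mostly routine signals, with a repeated anomaly
environment = ["routine", "anomaly", "anomaly", "routine", "anomaly"]
responses = {"anomaly": "investigate", "routine": "continue"}

action = respond(sense(probe(environment)), responses)
```

The point of the sketch is the ordering: observation precedes interpretation, and interpretation precedes action, which mirrors the mindful stance described above.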

Mindfulness is both a psychological state and activity and a psychospiritual practice. I am using the term in the psychological sense, though one could apply the psychospiritual lens at the same time. Bishop and colleagues (2004) proposed a two-component definition of mindfulness:

The first component involves the self-regulation of attention so that it is maintained on immediate experience, thereby allowing for increased recognition of mental events in the present moment. The second component involves adopting a particular orientation toward one’s experiences in the present moment, an orientation that is characterized by curiosity, openness, and acceptance (p.232)

Weak signals are activities that, when observed across conditions, reveal patterns providing beneficial (useful) coherence with meaningful potential impact on events of significance, yet yield little useful information when observed as discrete events. In other words, these are little things that get spotted in different settings, contexts and times that, when linked together, produce a pattern that could have meaningful consequences in different futures. By themselves, such signals are relatively benign, but together they reveal something potentially larger.
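
The pattern-across-contexts idea can be illustrated with a small sketch: each discrete observation carries little information on its own, but the same signal recurring across distinct settings surfaces as a candidate weak signal. The signal names, contexts, and threshold below are all hypothetical.

```python
from collections import defaultdict

def find_weak_signals(observations, min_contexts=3):
    """
    Each observation is a (context, signal) pair. A signal seen once in
    one context reads as noise; the same signal recurring across several
    distinct contexts becomes a candidate weak signal worth attention.
    """
    contexts_per_signal = defaultdict(set)
    for context, signal in observations:
        contexts_per_signal[signal].add(context)
    return {s for s, ctxs in contexts_per_signal.items()
            if len(ctxs) >= min_contexts}

# Hypothetical log of discrete events from different settings and times
observations = [
    ("team meeting", "staff turnover mention"),
    ("client email", "delayed response"),
    ("board report", "staff turnover mention"),
    ("exit interview", "staff turnover mention"),
    ("client email", "staff turnover mention"),
]
weak = find_weak_signals(observations)
```

Note that no single row in the log is alarming; it is only the aggregation across contexts that makes "staff turnover mention" stand out.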

One reason weak signals get missed is the premature labelling of information as ‘good’ and the constrained definition of what is ‘useful’ based on the current context. Mindfulness practice allows you to transcend the values and judgements imposed on data or information presented in front of you to see it more objectively.

Mindfulness involves quieting the mind and focusing on the present moment, not the past or the possible implications for the future, just the here and now. It is not ahistorical, however. Our past experience, knowledge and wisdom all come to bear on the mindful experience, yet they do not guide that experience.

Experience provides a frame of reference to consider new information, not judge it or apply value to it. It is what allows you to see patterns and derive meaning and sense from what is out there.

Building Mindful Organizations

A review of the research and scholarship on mindfulness finds a nearly exclusive focus on the individual. While there is much literature on using mindfulness and contemplative inquiry as means of being active in the world, this is done largely through mechanisms of individuals coming together as groups, rather than taking the organizations they form as the focus of analysis.

There is an exception. Social psychologists Weick and Sutcliffe (2007) wrote about resiliency in the face of uncertainty using a mindfulness lens to understand how organizations make better sense of what they do and experience in their operations. In their manuscript, Organizing for High Reliability: Processes of Collective Mindfulness, they lay down a theory of the mindful organization and how it increases the reliability of sensemaking processes when applied to complex informational environments.

They describe the conditions that precipitate mindfulness in organizations this way (p.38):

A state of mindfulness appears to be created by at least five processes that we have induced from accounts of effective practice in HROs (High Reliability Organizations) and from accident investigations:
1. Preoccupation with failure
2. Reluctance to simplify interpretations
3. Sensitivity to operations
4. Commitment to resilience
5. Underspecification of structures

It is notable that the aim here is not to reduce complexity (or impose simplicity), nor is it to focus on ‘positivity’; rather, it is focused on events that help contribute to moving in a particular direction. In that regard, this is not neutral, but it is not active either. It enables organizations to see patterns, focus on structures and information that encourage resilience to change, and contemplate what that information means (sensemaking) in context. Doing so provides useful information for decision making and taking action, but doesn’t frame information in those terms a priori.

Seeing Beyond Events

At issue is the development of consciousness of what is going on within your organization moment-to-moment, rather than punctuated by events. Events are the emergent properties of underlying patterns of activity. When we spend time attending to events without understanding the conditions that led to those events, we are doing the equivalent of changing the dressing on a wound in the absence of preventing or understanding its cause.

A mindful organization, like the image of the Buddha above, can emphasize the eye, but not at the expense of the rest of the picture. It is attuned to both simultaneously, noting events (e.g., the highlighted square eye above) while recognizing that it is only through the underlying pattern beneath them (the rest of the pictured squares) that the highlighted context makes sense. Yet the only way the organization can learn that the yellow square is different, or ascertain its meaningful significance, is through a sense of the whole, not just the part, and that is social.

The Curious Organization

Mindfulness and its wider-focused counterpart, contemplative inquiry, both have a root in attending to the present moment, but also in curiosity about the things that are brought to the mind’s attention. It’s not just about seeing, but inquiring. What makes it distinct is that it does not impose judgement on what is perceived, nor seek to change it while in that state of mindful awareness. This judgement and imposition of value onto what is going on is where organizations can get trapped.

In complex systems, the meaning of information may change rapidly and is likely uncertain. The wisdom of experience, shared among others contemplating the same information without judgement, allows a sensemaking process to unfold that does not impose limitations, yet also keeps a focus on what is going on moment-to-moment. Gathering this data, moment-to-moment, is what developmental evaluation, with its emphasis on real-time data collection, seeks to do; it can serve as a valuable tool for organizing data to allow a mindful, contemplative inquiry that will illuminate weak signals.

Creating an organizational culture of open sharing, questioning, experimentation, and attention to the adjacent possibles that come from operational data and experience is the foundation for a mindful organization. This means slowing down, valuing non-doing over the constant push to action, and cultivating contemplative inquiry and reflection, while also being clear about the directions that matter. Strategy in this case is not divorced from mindfulness; rather, it gently frames a directionality of effort. In doing so, it creates possibilities for innovation, attention to quality, and a mechanism for building resiliency within organizations and among those working with and within them.

In creating these mindful systems we move closer to making sense of complexity and better prepare ourselves for social innovation.

Image Saddha by gnosis1211 from Deviant Art used under Creative Commons Licence

complexity, evaluation, social systems, systems science

The Forward Orientation Problem With Complexity

The human body is oriented towards forward motion and so too are our institutions, yet while this helps us move linearly and efficiently from place to place, it may obscure opportunities and challenges that come from other directions such as those posed by complexity. Thinking about and re-orienting our perceptions of who we are and where we are going might be the key to understanding and dealing with complexity now and in the future.

When heading out into the turbulent waters that face us we humans tend to look straight ahead and press forward. Our entire physical being and that of all mammals is aimed at facing forward. We look forward, walk forward and this often means thinking forward.

Doing this predisposes us to seeing problems ahead of us or behind us, but is less useful when what challenges us is positioned elsewhere. For this reason, fish and birds, with their eyes on the sides of their heads, are able to adapt quickly to challenges from nearly any direction. It also allows them to fly or swim in flocks, swarms and schools and operate with high degrees of coordination on a large scale.

These are skills that are useful for handling the social problems that are complex in nature and require mass action to address. But we don’t have eyes on the sides of our heads, and we tend to look forward or backward to orient ourselves and our activities.

One way this expresses itself is in our perceptions of time. Thor Muller, writing in Psychology Today online, highlighted how our perceptions of time influence the way we handle appointments and punctuality with modern technology. Citing the work of anthropologist Edward T. Hall (although mistakenly referring to Manhattan Project contributor Edward Teller), Muller points to the differences in perceived time across cultures and how this plays out in our treatment of time, the technology used to “manage” it, and the complexity of everyday life. Monochronic and polychronic time orientations determine whether you see time as a linear, quantifiable phenomenon or a more non-linear, contextual one. One allows you to “bank” time, while the other deals more with the present moment, less dependent on forward-backward thinking.

Western society and the technologies developed within it are oriented primarily towards dealing with a monochronic form of time. This works well when patterns, problems and situations have a linear, ordered set of circumstances to them. The cause-and-effect world of normal science fits within this worldview.

Complexity is non-linear and not easily defined in cause-and-effect terms and conditions. Two-dimensional space doesn’t capture complexity the way it can for linear situations. It also means thinking solely in forward and back terms is problematic.

An example of where this comes into conflict is in program planning and evaluation. Traditional evaluation methods and metrics are set up for looking at programs that are planned to start and end with impacts developed and detected in between. This implies a certain level of consistency in the conditions in which that program operates. This control-and-measure aspect of evaluation is among the hallmark features of scientific inquiry.

For programs operating in environments of great change and flux, this is a faulty proposition. We cannot hold constant the environment for starters. Secondly, feedback gained from learning about the program as it proceeds is critical to ensuring adaptation and promoting resilience in the face of changing conditions. In these cases, failure to act and adapt on the go may result in a program failing catastrophically.

This is where developmental evaluation comes in. Developmental evaluation works with these conditions to generate data in a manner that programs can make sense of and use to facilitate strategic adaptation rather than simply reacting to changes. As the name suggests, it promotes development rather than improvement. Developmental design is the incorporation of this feedback into an ongoing program development and design process.
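
The developmental loop described here can be sketched abstractly: each cycle collects feedback on the current design and folds it into the next version, rather than waiting for a terminal verdict. The program, the feedback function, and the adaptation rule below are purely hypothetical placeholders.

```python
def developmental_cycle(design, collect_feedback, adapt, iterations=3):
    """
    Developmental design loop: each iteration gathers real-time feedback
    on the current design and feeds it directly into the next version.
    """
    history = [design]
    for _ in range(iterations):
        feedback = collect_feedback(design)   # developmental evaluation
        design = adapt(design, feedback)      # developmental design
        history.append(design)
    return design, history

# Hypothetical program: a reach target repeatedly adjusted toward observed uptake
collect = lambda d: {"observed_uptake": d["target_reach"] * 0.5}
adapt = lambda d, fb: {"target_reach": (d["target_reach"] + fb["observed_uptake"]) / 2}

final, history = developmental_cycle({"target_reach": 100}, collect, adapt)
```

The design choice worth noticing is that evaluation sits inside the loop, not at its end: the history of versions, not a single before/after comparison, is the record of the program.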

Both developmental design and evaluation require ways of seeing the world beyond forward/backward. This seeing comes from understanding where one’s position is in the first place, and that requires methods of centring that take us into the world of polychronic time. One example of a strategy that suits this approach is mindfulness programming. Mindfulness-based programs have shown remarkable efficacy in healing and health interventions aimed at stress reduction across conditions. Mindfulness techniques, ranging from meditation to contemplative inquiry, bring focus to the present moment, away from an orientation towards linear trajectories of time, thought and attention.

Some forms of martial arts promote attentive awareness to the present moment by training practitioners in strategies that are focused on simple rules of engagement, rather than just learning techniques for defence.

These approaches combine inward reflection — reflective practice — with an openness to the data that comes in around them without imposing an order on it a priori. The orientation is to the data and the lessons that come from it rather than its directionality or imposing values on what the data might mean at the start. It means slowing down, contemplating things, and acting on reflection not reacting based on protocol. This is a fundamental shift for many of our activities, but may be the most necessary thing we can focus on if we are to have any hope of understanding, dealing with, and adapting to complexity.

All the methods and tools at our disposal will not help if we cannot change our mindset and orientation — even temporarily — to this reality when looking at complexity in our work. One of complexity’s biggest challenges right now is that it is seductive in accounting for the massive, dynamic sets of conditions we face every day, yet it lacks methods beyond evaluation for doing things with it. The irony of mindfulness and contemplative approaches is that they are less about acting differently and more about seeing things in new ways, yet it is that orientation that is the key to making real change from talking about change. It is the design doing that comes with design thinking and the systems change from systems thinking.

complexity, design thinking, emergence, evaluation, innovation

Evaluation and Design For Changing Conditions

Growth and Development

The days of creating programs, products and services and setting them loose on the world are coming to a close, posing challenges to the models we use for design and evaluation. Adding the term ‘developmental’ to both of these concepts, with an accompanying shift in mindset, can provide options for moving forward in these times of great complexity.

We’re at the tail end of a revolution in product and service design that has generated some remarkable benefits for society (and its share of problems), creating the very objects that often define our work (e.g., computers). However, we are in an age of interconnectedness and ever-expanding complexity. Our disciplinary structures are modifying themselves, and “wicked problems” are less rare.

Developmental Thinking

At the root of the problem is the concept of developmental thought. A critical mistake in comparative analysis — whether through data or rhetoric — is viewing static things and moving things through the same lens. Take for example a tree and a table. Both are made of wood (maybe the same type of wood), yet their developmental trajectories are enormously different.

Wood > Tree

Wood > Table

Tables are relatively static. They may get scratched, painted, re-finished, or modified slightly, but their inherent form, structure and content are likely to remain constant over time. The tree is also made of wood, but it will grow larger; it may lose branches and gain others; it will interact with the environment, providing homes for animals and hiding spaces or swings for small children; it will bear fruit (or pollen), change leaves, and grow around things, yet maintain enough structural integrity that a person could come back after 10 years and recognize that the tree looks similar.

It changes and it interacts with its environment. If it is a banyan tree or an oak, this interaction might take place very slowly; if it is bamboo, that same interaction might take place over a much shorter time frame.

If you were to take the antique table shown above, take its measurements, record its qualities, and come back 20 years later, you would likely see an object that looks remarkably similar to the one you left. The time of initial observation was minimally relevant to when the second observation was made. The manner in which the table was used will have some effect on these observations, but only to a matter of degree; the fundamental look and structure are likely to remain consistent.

However, if we were to do the same with the tree, things could look wildly different. If the tree was a sapling, coming back 20 years later might find an object two, three or four times larger. If the tree was 120 years old, the differences might be minimal. Its species, growing conditions and context matter a great deal.

Design for Development / Developmental Design

In social systems, and particularly ones operating with great complexity, models of creating programs, policies and products that are simply released into the world like a table are becoming anachronistic. Tables work for simple tasks and sometimes complicated ones, but not (at least consistently) complex ones. It is in those areas that we need to consider the tree as a more appropriate model. However, in human systems these “trees” are designed — we create the social world, the policies, the programs and the products, thus design thinking is relevant and appropriate for those seeking to influence our world.

Yet, we need to go even further. Designing tables means creating a product and setting it loose. Designing for trees means constantly adapting and changing along the way. It is what I call developmental design. Tim Brown, the CEO of IDEO and one of the leading proponents of design thinking, has started to consider the role of design and complexity as well. Writing in the current issue of Rotman Magazine, Brown argues that designers should consider adapting their practice towards complexity. He poses six challenges:

  1. We should give up on the idea of designing objects and think instead about designing behaviours;
  2. We need to think more about how information flows;
  3. We must recognize that faster evolution is based on faster iteration;
  4. We must embrace selective emergence;
  5. We need to focus on fitness;
  6. We must accept the fact that design is never done.
That last point is what I argue is the critical feature of developmental design. To draw on another analogy, it is about tending gardens rather than building tables.

Developmental Evaluation

Brown also mentions information flows and emergence. Complex adaptive systems are the way they are because of the diversity and interaction of information. They are dynamic and evolving and thrive on feedback. Feedback can be random or structured and it is the opportunity and challenge of evaluators to provide the means of collecting and organizing this feedback to channel it to support strategic learning about the benefits, challenges, and unexpected consequences of our designs. Developmental evaluation is a method by which we do this.

Developmental evaluators work with their program teams to advise, co-create, and make sense of the data generated from program activities. Ideally, a developmental evaluator is engaged with program implementation teams throughout the process. This is a different form of evaluation that builds on Michael Quinn Patton’s Utilization-Focused Evaluation methods and can incorporate much of the work of action research and participatory evaluation and research models as well, depending on the circumstance.

Bringing Design and Evaluation Together

To design developmentally and with complexity in mind, we need feedback systems in place. This is where developmental design and evaluation come together. If you are working in social innovation, the need to attend to changing conditions, adapt, build resilience and (most likely) show impact is familiar to you. Developmental design + developmental evaluation, which I argue are two sides of the same coin, are ways to conceive of the creation, implementation, evaluation, adaptation and evolution of initiatives working in complex environments.

This is not without challenge. Designers are not trained much in evaluation. Few evaluators have experience in design. Both areas are familiarizing themselves with complexity, but the level and depth of the knowledge base is still shallow (though growing). Efforts like those put forth by the Social Innovation Generation initiative and the Tamarack Institute for Community Engagement in Canada are good examples of places to start. Books like Getting to Maybe, M.Q. Patton’s Developmental Evaluation, and Tim Brown’s Change by Design are also primers for moving along.

However, these are starting points, and if we are serious about addressing the social, political, health and environmental challenges posed to us in this age of global complexity, we need to launch from them into something more sophisticated that brings these areas further together. The cross-training of designers, evaluators and innovators of all stripes is a next step. So, too, is building the scholarship and research base for this emergent field of inquiry and practice. Better theories, evidence and examples will make it easier for all of us to lift the many boats needed to traverse these seas.

It is my hope to contribute to some of that further movement, and I welcome your thoughts on ways to build developmental thinking in social innovation and social and health service work.

Image (Header) Growth by Rougeux

Image (Tree) Arbre en fleur by zigazou76

Image (Table) Table à ouvrage art nouveau (Musée des Beaux-Arts de Lyon) by dalbera

All used under licence.

complexity, design thinking, evaluation

The PR Problem for Design, Evaluation, and Complexity

I (heart) PR

Concepts like evaluation, design and even complexity itself provide insights, strategies and applications that offer usable solutions to real-world problems, but they also suffer from widespread misunderstanding, confusion and even derision. If they are to take hold beyond their initial communities of interest, they need to address their PR problem head on.

This past week was Design Week in Toronto. As someone who works extensively with design concepts and even has a health promotion-focused design studio, I couldn’t be faulted for thinking that this would be a big week for someone like me who lives and works in the city. Well, it came and went and I didn’t attend a single thing. The reason was partly timing and my schedule, but largely because the focus of the week was not really on design writ large, but rather interior design. Sure, there were a few events that focused on social issues (what I am interested in), like the Design With Dialogue session on Designing a Future for our Future, but mostly it was focused on one area of a large field.

And thus, interior design was left to represent all of design.

So why does this matter? It matters a lot because when people hear the term design, most of what was presented this week fits with that perception. The problem is that design is so much more than that. It is about making things, creative thinking and problem tackling (design thinking), social innovation, and responsive planning for complex situations. Architects, business leaders, military strategists, social service agencies and health promoters all engage in design. Indeed, Herbert Simon‘s oft-quoted and often contested definition fits nicely here:

Everyone designs who devises courses of action aimed at changing existing situations into preferred ones

If one accepts that we are all designers and that all we create and use for change is design, then a week devoted to the topic should offer much more than innovative concepts in furniture or flooring. Yet this high-concept style showcase is what most people think about when they first hear “design”. Give people a choice between Philippe Starck’s Juicy Salif citrus juicer and a trades-based, social change curriculum for low-income kids, such as the work by Emily Pilloton, as the example of design, and they will probably think Starck over Pilloton, when both are equally valid examples.

Evaluation (another area I focus my work on) is equally fraught with perception problems. If you want to raise someone’s blood pressure or heart rate, tell them that either they or their work will be the focus of an evaluation. Evaluation may be the longest four-letter word in the English language. Yet, tell someone that you have a strategy that can enable people to learn about what they do, its impact, and provide intelligence on ways to improve, adapt and outperform their competitors and you’ll find an inspired audience for evaluation services.

Lastly, complexity presents the same challenge. Its very name — complexity — can make people shy away from it. As humans, we crave the simple in most things as it is easier to understand, manage and control. Complexity offers none of these things and, if anything, reveals how little control we have. Entire fields of inquiry have been established around complexity science and its related theories and practices. Complexity can help us make sense of why things don’t work as we think they ought to and allow us to better navigate unpredictable terrain with greater resilience than if we tried to tackle such problems as if they were linear in their cause and consequence.

In all of these cases — design, evaluation and complexity — there exists a PR problem. The advantages that they offer are tremendous, yet these concepts are frequently misunderstood, dismissed, or inappropriately used. When this happens, it creates even greater distance between the potential benefits these concepts offer and their real-world application.

This distance is partly an artefact of poorly articulated definitions and examples, but it is also by design (no pun intended). There are those who relish having these concepts appear opaque to those outside of their social cluster. Thus, we have the ‘superstar designer’ who seeks to create products and personas that are built upon their rarity, rather than accessibility. There are evaluators who exploit both the fear people have of evaluation and their limited understanding of its methods and practices (versus concepts like research or innovation consulting) to gain contracts and social influence within their field. Complexity, with its foundations in physics and systems biology, can appear to the layperson as otherworldly, making its practitioners and scientists seem all the more powerful and smart. These tactics benefit a small ‘elite’ number of professionals, while robbing a far larger audience of the potential benefits.

In 1969, then president of the American Psychological Association, George Miller, implored members to “give psychology away”. His message was that psychology was too important to be left just to the professional, graduate-trained practitioners to use. If psychology was to confer social benefits, it was necessary to ensure that everyone had access to it — its theories, methods, models and treatments. It is perhaps no surprise that psychology remains one of the most popular undergraduate degree programs in the arts and social sciences and the focus of television shows, magazines, and an array of services. Miller was commenting on the need to change a field that he perceived was becoming elitist and not serving the needs of society.

The same might be true of design, evaluation and complexity if we let it. It’s not a surprise that these three concepts are intimately tied together, as those training to apply design thinking and strategic foresight learn. Perhaps it’s time to start giving these ideas away, but to do so we first need to rehab their image and apply some design thinking and brand development strategy to all three. As practitioners in any or all of these fields, giving away what we do by educating, reinforcing, and ensuring that the work we do is of the highest quality is a way to lead by example. None of us is likely to change things alone, but together we can do wonders.

For those interested in evaluation, I suggest catching up on the AEA365 blog sponsored by the American Evaluation Association, where evaluation bloggers and practitioners share ideas about how to practice evaluation, but also how to communicate it to others. For those interested in design, I would encourage you to look at places like the Design Thinkers LinkedIn group, where practitioners from around the world discuss innovations and ways to promote and apply design thinking. A similar group, and opportunity, exists with the Systems Thinking LinkedIn group or by joining the Plexus Institute, which does considerable work to promote complexity and systems thinking in North America.

Photo: I (Heart) PR by The Silfwer used under Creative Commons Licence from Flickr.

complexity, design thinking, evaluation, systems science, systems thinking

The Complexity of Planning and Design in Social Innovation

The Architecture of Complex Plans

Planning works well for linear systems, but often runs into difficulty when we encounter complexity. How do we make use of plans without putting too much faith in their anticipated outcomes while still designing for change? And can developmental design and developmental evaluation be a solution?

It’s that time of year when most people are starting to feel the first pushback to their New Year’s Resolutions. That strict budget, the workout plan, the make-time-for-old-friends commitments are most likely encountering their first test. Part of the reason is that most of us plan for linear activities, yet in reality most of these activities are complex and non-linear.

A couple of interesting quotes about planning for complex environments:

No battle plan survives contact with the enemy – Colin Powell

In preparing for battle I have always found that plans are useless, but planning is indispensable – Dwight D. Eisenhower

Combat might be the quintessential complex system, and both Generals Powell and Eisenhower knew how to plan for it and what limits planning had, yet it didn’t dissuade them from planning, acting and reacting. In war, the end result is what matters, not whether the plan for battle went as outlined (although the costs and actions taken are not without scrutiny or concern). In human services, there is a disproportionate amount of concern about ‘getting it right’ and holding ourselves to account for how we got to our destination, relative to what happens at the destination itself.

Planning presents myriad challenges for those dealing with complex environments. Most of us, when we plan, expect things to go according to what we’ve set up. We develop programs to fit with this plan, set up evaluation models to assess the impact of this plan, and envisage entire strategies to support the delivery and full realization of this plan into action. For those working in social innovation, what is often realized falls short of what was outlined, which inevitably causes problems with funders and sponsors who expect a certain outcome.

Part of the problem is the mindset that shapes the planning process in the first place. Planning is designed largely around the cognitive rational approach to decision making (PDF), which is based on reductionist science and philosophy. Like the image above, a plan is often seen as a blueprint for laying out how a program or service is to unfold over time. Such a model of outlining a strategy is quite suitable for building a physical structure like an office, where everything from the materials to the machines used to put them together can be counted, measured and bounded. It is much less relevant for services that involve interactions between autonomous agents whose actions influence the outcome of that service, with results that might vary from context to context as a consequence.

For evaluators, this is problematic because it reduces control (and introduces variance and ‘noise’) in models that are designed to reveal specific outcomes using particular tools. For program implementers, it is troublesome because rigid planning can drive actions away from where people are and force them into activities that might not be contextually appropriate due to some change in the system.

For this reason the twin concepts of developmental evaluation and developmental design require some attention. Developmental evaluation is a complexity-oriented approach to feedback generation and strategic learning that is intended for programs where there is a high degree of novelty and innovation. Programs where the evidence is low or non-existent, the context is shifting, and there are numerous strong and diverse influences are those where developmental evaluations are not only appropriate, but perhaps one of the only viable models of data collection and monitoring available.

Developmental design is a concept I’ve been working on as a reference to the need to incorporate ongoing design and re-design into programs even after they have been initially launched. Thus, a program evolves over time drawing in information from feedback gained through processes like evaluation to tweak its components to meet changing circumstances and needs. Rather than have a static program, a developmental design is one that systematically incorporates design thinking into the evolutionary fabric of the activities and decision making involved.

Both developmental design and evaluation work together to provide the data required for program planners to constantly adapt their offerings to meet changing conditions, thus avoiding the problem of outcomes becoming decoupled from program activities and working with complexity rather than against it. For example, developmental evaluation can determine the key attractors shaping program activities, while developmental design can work with those attractors to amplify or dampen them depending on the level of beneficial coherence they offer a program. Through these two joined processes we can acknowledge complexity while creating more realistic and responsive plans.

Such approaches to design and evaluation are not without contention among traditional practitioners, leaving questions about the integrity of the finished product (for design) and the robustness of the evaluation methods. But without alternative models that take complexity into account, we are simply left with bad planning instead of making planning what Eisenhower found it to be: indispensable.

evaluation, knowledge translation

A Call to Evaluation Bloggers: Building A Better KT System

Time To Get Online...

Are you an evaluator and do you blog? If so, the American Evaluation Association wants to hear from you. This CENSEMaking post features an appeal to those who evaluate, blog and want to share their tips and tricks for helping create a better, stronger KT system. 

Build a better mousetrap and the world will beat a path to your door — attributed to Ralph Waldo Emerson

Knowledge translation in 2011 is a lot different than it was before we had social media, the Internet and direct-to-consumer publishing tools. We now have the opportunity to communicate directly to an audience and share our insights in ways that go beyond just technical reports and peer-reviewed publications, but closer to sharing our tacit knowledge. Blogs have become a powerful medium for doing this.

I’ve been blogging for a couple of years and quite enjoy it. As an evaluator, designer, researcher and health promoter I find it allows me to take different ideas and explore them in ways that more established media do not. I don’t need to have the idea perfect, or fully formed, or relevant to a narrow audience. I don’t need to worry about what my peers think or my editor, because I serve as the peer review, editor and publisher all at the same time.

I originally started blogging to share ideas with students and colleagues — just small things about the strange blend of topics I engage in that many don’t know about or understand or wanted to know more of. Concepts like complexity, design thinking, developmental evaluation, and health promotion can get kind of fuzzy or opaque for those outside of those various fields.

Blogs enable us to reach an audience directly and provide a means of adaptive feedback on ideas that are novel. Using the comments, visit statistics, and direct messages sent to me from readers, I can gain some sense of which ideas are being taken up by people and which ones resonate. That enables me to tailor my messages and amplify those parts that are of greater utility to a reader, thus increasing the likelihood that a message will be taken up. For CENSEMaking, the purpose is more self-motivated writing rather than trying to assess the “best” messages for the audience; however, I have a series of other blogs that I use for projects as a KT tool. These are, in many cases, secured and by invitation only to the project team and stakeholders, but still look and feel like any normal blog.

WordPress (this site) and Posterous are my favorite blogging platforms.

As a KT tool, blogs are becoming more widely used. Sites like Research Blogging are large aggregations of blogs on research topics. Others, like this one, are designed for certain audiences and topics — even KT itself, like the KTExchange from the Research Into Action initiative at the University of Texas and MobilizeThis! from the Research Impact Knowledge Mobilization group at York University.

The American Evaluation Association has an interesting blog initiative led by AEA’s Executive Director Susan Kistler called AEA365, which is a tip-a-day blog for evaluators looking to learn more about who and what is happening in their field. A couple of years ago I contributed a post on using information technology and evaluation and was delighted at the response it received. So it reaches people. It’s for this reason that AEA is calling out to evaluation bloggers to contribute to the AEA365 blog with recommendations and examples for how blogging can be used for communications and KT. AEA365 aims to create small-bite pockets of information that are easily digestible by its audience.

If you are interested in contributing, the template for the blog is below, with my upcoming contribution to the AEA365 blog posted below that.

By embracing social media and the power to share ideas directly (and doing so responsibly), we have a chance to come closer to realizing the KT dream: putting more effective, useful knowledge into the hands of those who can use it faster, and engaging those who are most interested and able to use that information more efficiently and humanely.

Interested in submitting a post to the AEA365 blog? Contact the AEA365 curators at

Template for aea365 Blogger Posts (see below for an example)

[Introduce yourself by name, where you work, and the name of your blog]

Rad Resource – [your blog name here]: [describe your blog, explain its focus including the extent to which it is related to evaluation, and tell about how often new content is posted]

Hot Tips – favorite posts: [identify 3-5 posts that you believe highlight your blogging, giving a direct link and a bit of detail for each (see example)]

  • [post 1]
  • [post 2]
  • Etc.

Lessons Learned – why I blog: [explain why you blog – what you find useful about it and the purpose for your blog and blogging. In particular, are you trying to inform stakeholders or clients? Get new clients? Provide a public service? Help students?]

Lessons Learned: [share at least one thing you have learned about blogging since you started]

Remember – stay under 450 words total please!

My potential contribution (with a title I just made up): Cameron Norman on Making Sense of Complexity, Design, Systems and Evaluation: CENSEMaking

Rad Resource – [CENSEMaking]: CENSEMaking is a play on the name of my research and design studio consultancy and on the concept of sensemaking, something evaluators help with all the time. CENSEMaking focuses on the interplay of systems and design thinking, health promotion and evaluation, and weaves together ideas I find in current social issues, reflections on my practice, and the evidence used to inform it. I aspire to post on CENSEMaking 2-3 times per week, although because it is done in a short-essay format, finding the time can be a challenge.

Hot Tips – favorite posts:

  • What is Developmental Evaluation? This post came from a meeting of a working group with Michael Quinn Patton and was fun to write because the original exercise that led to the content (described in the post) was so fun to do. It also provided an answer to a question I get asked all the time.
  • Visualizing Evaluation and Feedback. I believe that the better we can visualize complexity and the more feedback we provide, the greater the opportunities we have for engaging others, and the more evaluations will be utilized. This post was designed to provoke thinking about visualization and illustrate how it has been creatively used to present complex data in interesting and accessible ways. My colleague and CENSE partner Andrea Yip has tried to do this with a visually oriented blog on health promoting design, which provides some other creative examples of ways to make ideas more appealing and data feel simpler.
  • Developmental Design and Human Services. Creating this post has sparked an entire line of inquiry for me on bridging DE and design that has since become a major focus for my work. This post became the first step in a larger journey.

Lessons Learned – why I blog: CENSEMaking originally served as an informal means of sharing my practice reflections with students and colleagues, but has since grown to serve as a tool for knowledge translation to a broader professional and lay audience. I aim to bridge the sometimes foggy world that things like evaluation inhabit — particularly developmental evaluation — and the lived world of the people whom evaluation serves.

Lessons Learned: Blogging is a fun way to explore your own thinking about evaluation and make friends along the way. I never expected to meet so many interesting people because they reached out after reading a blog post of mine or made a link to something I wrote. This has also led me to learn about so many other great bloggers, too. Give a little, get a lot in return and don’t try and make it perfect. Make it fun and authentic and that will do.


Photo by digitalrob70 used under Creative Commons License from Flickr

complexity, evaluation, innovation, systems thinking

What is Developmental Evaluation?

Developmental evaluation (DE) is a problematic concept because it deals with a complex set of conditions and potential outcomes that differ from and challenge the orthodoxy of much mainstream research and evaluation, which makes it difficult to communicate. At a recent gathering of DE practitioners in Toronto, we were charged with coming up with an elevator pitch to describe DE to someone who wasn’t familiar with it; this is what I came up with.

Developmental evaluation is an approach to understanding the activities of a program operating in dynamic, novel environments with complex interactions. It focuses on innovation and strategic learning rather than standard outcomes and is as much a way of thinking about programs-in-context and the feedback they produce as it is a method. The concept is an extension of Michael Quinn Patton’s original concept of Utilization Focused Evaluation, with concepts gleaned from complexity science to account for the dynamism and novelty. While Utilization Focused Evaluation has a series of steps to follow (PDF), Developmental Evaluation is less prescriptive, which is both its strength and its challenge when describing it to people (things I’ve discussed in earlier posts).

So with that in mind, our group was charged with coming up with a way to explain DE to someone who is not familiar with it using anything we’d like — song, poetry, dance, slides, stories and beyond. While my colleague Dan chose to lead us all in song, I opted to go with a simple analogy by comparing DE to a hybrid of Trip Advisor and the classic Road Trip (due to lack of good vocalizing skills).

Trip Advisor has emerged as one of the most popular tools for travellers seeking advice on everything from hotel rooms to airlines to resorts and all the destinations along the way. Trip Advisor is averaging more than 13 million unique visitors per month and, unlike its competitors, focuses on user-generated content to support its service. Thus, your fellow travellers are the source of the recommendations, not some professional travel agent or journalist. At its heart are stories of various tones, detail and quality. People upload various accounts of their stay, chronicling even the most minute detail through photos, links to their blogs, video, and narrative. If you want to get the inside details on what a hotel is really like, check Trip Advisor and you’ll likely find it.

However, like any self-organizing set of ideas, the quality of the content will vary along with the level of reportage and the conclusions will be different depending on the context and experience of the person doing the reporting. For example, if you are a North American who is used to having even the most basic hotel chain offer a room with full-service linens, a bathroom, closet, desk and separate shower, you’ll have a hard time adjusting to something like EasyHotel in Europe.

The Road Trip part (capitalization intended here to denote something different than a regular trip by road) refers to the experience that comes from a journey with a desired destination, but not a pre-determined route and only a generalized timeline. A Road Trip is something more than just traveling from Point A to Point B, which is usually accomplished by taking the shortest route, the fastest route or a combination of the two; rather, it is a journey. Movies like National Lampoon’s Vacation (and European Vacation), Thelma and Louise, Harold & Kumar Go to White Castle, and (surprise!) Road Trip all capture this spirit to some effect. I suppose one might even find a more grim example of a Road Trip in the Lord of the Rings trilogy or The Road.

Road Trips have a long history and are not just a North American phenomenon, as this article from the Indian newspaper The Hindu reports in some detail:

“Road trips are fun when they are not planned point-to-point. As long as you have accommodation booked, that is enough. It’s better not to have agendas; get as spontaneous and adventurous as you can. My friends and I went on a road trip to Goa last year. It was loads of fun as it was the first time we took off on our own without parents. To me, it was more than just a trip with friends. It showed that I could take care of myself and that I was now a grown-up, free to do what I wanted,” says Siddharth, who is doing his engineering.

The ideas of spontaneity and adventure are part of the process, not an unexpected problem to be solved as in a traditional evaluation. Indeed, some of these unplanned and unusual departures are not only part of the learning, but essential to it. It is akin to what Thor Muller describes as planned serendipity; you might not know what is going to come, but it is possible to set up the conditions to increase the likelihood of, and preparedness for, moments of discovery and learning. This is like setting out on a journey with a mindset of developmental and strategic learning, in keeping with what Louis Pasteur said about discovery:

Chance favours the prepared mind

Thus, as Developmental Evaluators and program implementation leaders, we are creating conditions to learn en route to a general destination, but without a clear path and with an open mind towards what might unfold. This attention to the emergence of new patterns, and the sensemaking to understand what those patterns mean in the context in which they emerged and in light of the goals, directions and resources that surround the discovery, is an important facet of what separates Developmental Evaluation from other forms of evaluation and research.

So in describing DE to others, I proposed combining these two ideas of Trip Advisor and the Road Trip to create: Road Trip Advisor.

Road Trip Advisor for Developmental Evaluation

Road Trip Advisor would involve going on a journey that has a general destination, but no single path to it. Along the way, the Developmental Evaluator would work with those taking the journey with them — likely the program staff, stakeholders and others interested in strategic learning and feedback — and systematically capture the decision points for taking a particular path, the process that unfolded in making those decisions, and the outcomes or events connected to those decisions inasmuch as one can draw such linkages, and then continually dialogue with the program team about what they are seeing, sensing and experiencing. This includes what innovations are being produced.

Returning to the article on road tripping from The Hindu:

“Road-tripping is a great way to bond with the people you are travelling with and I would strongly recommend it to people. It not only makes you appreciate yourself as an individual but is an amazing experience as you get to meet new people, know different cultures and sample different cuisines. I can never forget biking on sleet, riding through torrential rains, gobbling hot rotis at dhabas, the beautiful snow-capped mountains and guy talk with friends on the trip,” says Dheeraj, who recently went to Ladakh.

Here the focus is on relationships, learning new things and taking that learning onward. That is what DE is all about. My colleague Remi illustrated this at our meeting by having us all spread out through the room for a pantomime-type skit in which he collected information from each participant about where the wisdom was and then brought that person along for the journey. So while he started out alone as the Developmental Evaluator, he wound up at the destination of wisdom with everyone.

Road Trip Advisor requires documenting the journey along the way, sharing what you learn with others, continuing to learn and revisiting your notes — while checking out the notes others have (including evidence from other projects and academic research) — and integrating it all on an ongoing basis.

But as my other colleagues pointed out in their presentations, the journey isn’t always about feeling good. Sometimes there are challenges, as the Hindu article adds:

All is not hunky dory during these trips. You have to be wary of accidents and mishaps. And, realise that freedom comes with responsibility. Says Arjun: “I had borrowed my friend’s bike for the trip, and though it looked good, it gave problems on the foothills of Kodaikanal and we couldn’t do the climb. Being a weekend, there were no mechanics. It helps to know your machine. A passion for road-tripping is not enough. You need to be equipped to take care of yourself also.”

Here, the story parallel is about being prepared. Know evaluation methods, know how to build and sustain relationships, and know how to deal with conflict. A high tolerance for ambiguity and the flexibility to adapt are also important. Knowing a little about systems thinking and complexity doesn’t hurt either. Developmental evaluation is not healthy for those who need a high degree of predictability, are not flexible in their approach, and adhere to rigid timelines. Complex systems collapse under rigid boundary conditions, and so do evaluators working with such restrictions in developmental contexts.

So why do people do it? “Well, my memories of my favourite road trip were an injured leg, chocolates, beautiful photographs and a great sense of fulfilment,” recalls Arjun.

It is youngsters like these who have transformed road-tripping from just a hobby to an art.

After all, friendship and travel is a potent combination that you can’t say no to.

In DE, the “youngsters” are everyone. But as we (my DE colleagues) all pointed out: DE is fun. It is fun because we learn and grow and challenge ourselves and the programs that we are working with. It’s collaborative, instructive, and promotes a level of connection between people, programs and ideas that other methods of evaluation and learning are less effective at. DE is not for everyone or every program, and Michael Quinn Patton has pointed this out repeatedly. But for those programs where innovation, strategic learning and collaboration count, it is a pretty good way to journey from where you are to where you want to go.