complexity, design thinking, evaluation, innovation, systems thinking

Developmental Evaluation and Complexity

Stitch of Complexity

Developmental evaluation is an approach (much like design thinking) to program assessment and valuation in domains of high complexity, change, and innovation. These three terms are used often, but are poorly understood in terms concrete enough for evaluators to make much use of them. This first post in a series looks at the term complexity and what it means in the context of developmental evaluation.

Science writer and professor Neil Johnson is quoted as saying: “even among scientists, there is no unique definition of complexity – and the scientific notion has traditionally been conveyed using particular examples…” and that his definition of a science of complexity (PDF) is: “the study of the phenomena which emerge from a collection of interacting objects.” The title of his book Two’s Company, Three’s Complexity hints at what complexity can mean to anyone who’s tried to make plans with more than one other person.

The Oxford English Dictionary defines complexity as:

complexity |kəmˈpleksitē|

noun (pl. complexities)

the state or quality of being intricate or complicated: an issue of great complexity.

• (usu. complexities) a factor involved in a complicated process or situation: the complexities of family life.

For social programs, complexity involves multiple overlapping sources of inputs and outputs that interact with systems dynamically, at multiple time scales and organizational levels, and in ways that are highly context-dependent. That's a mouthful.

Developmental evaluation is intended to be an approach that takes complexity into account; however, that also means that evaluators and the program designers they work with need to understand some basics about complexity. To that end, here are some key concepts to start that journey.

Key complexity concepts

Complexity science is a big and complicated domain within systems thinking that brings together elements of system dynamics, organizational behaviour, network science, information theory, and computational modeling (among others). Although complexity has many facets, some key concepts are of particular relevance to program designers and evaluators; each is introduced below with a discussion of what it means for evaluation.

Non-linearity: The central starting point for complexity is that it is about non-linearity. That means prediction and control are often not possible, sometimes harmful, and at the very least of limited use as ideas for understanding programs operating in complex environments. Further complicating things, within the overall non-linear environment there exist linear components. This doesn't mean that evaluators can't use any traditional means of understanding programs; rather, it means they need to consider which parts of the program are amenable to linear means of intervention and understanding within the complex milieu. It also means surrendering the notion of ongoing improvement and embracing development as an idea. Michael Quinn Patton has written about this distinction very well in his terrific book on developmental evaluation. Development is about adapting to produce advantageous effects for the existing conditions; improvement is about tweaking the same model to produce the same effects across conditions that are assumed to be stable.
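
To make non-linearity concrete, here is a minimal sketch (my own illustration, not drawn from Patton) using the logistic map, a classic toy model from complexity science. Two trajectories that begin in nearly identical conditions diverge completely within a few dozen steps, which is why point prediction is such a fragile basis for understanding programs in complex environments.

```python
# A minimal sketch of non-linearity using the logistic map, a standard toy
# model from complexity science (illustrative only, not an evaluation tool).
def logistic_map(x0, r=3.9, steps=30):
    """Iterate x_{t+1} = r * x_t * (1 - x_t) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.500000)  # one starting condition
b = logistic_map(0.500001)  # a near-identical starting condition

for t in range(0, 31, 5):
    print(f"step {t:2d}: a={a[t]:.4f}  b={b[t]:.4f}  gap={abs(a[t] - b[t]):.4f}")
```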

Feedback: Complex systems are dynamic, and that dynamism is created in part by feedback. Feedback is essentially information from the system's history and present actions that shapes its immediate and longer-term future actions: an action leads to an effect, which is sensed and made sense of, which in turn leads to possible adjustments that shape future actions. Evaluators need to know what feedback mechanisms are in place, how they might operate, and what (if any) sensemaking rubrics, methods and processes are applied to this feedback to understand the role it plays in shaping decisions and actions about a program. This is important because it helps track the non-linear connections between causes and effects, allowing the evaluator to understand what might emerge from particular activities.
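
As a hypothetical illustration of that loop of action, effect, sensing and adjustment (the variables and the adjustment rule here are invented, not taken from any real program), a simple balancing feedback loop can be sketched in a few lines: the program senses the gap between its target and actual reach each cycle and adjusts the next cycle accordingly.

```python
# A hypothetical balancing feedback loop: the program senses the gap between
# target and actual reach each cycle and adjusts in proportion to that gap.
# All numbers and the adjustment rule are invented for illustration.
target_reach = 1000      # people the program aims to reach per cycle
actual_reach = 200       # where the program starts
adjustment_rate = 0.5    # how strongly feedback shapes the next action

for cycle in range(1, 9):
    gap = target_reach - actual_reach      # sense the effect of past action
    actual_reach += adjustment_rate * gap  # use the feedback to adjust the next action
    print(f"cycle {cycle}: reach = {actual_reach:.0f} (gap was {gap:.0f})")
```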

Emergence: What comes from feedback in a complex system are new patterns of behaviour and activity. Because the intensity, quantity and quality of information generated by the system's variables keep changing, the feedback may look different each time an evaluator examines it. What comes from this differential feedback can be new patterns of behaviour that depend on the variability in the information; this is called emergence. Evaluation designs need to enable the evaluator to see emergent patterns form, which means setting up data systems with the appropriate sensitivity. That means knowing the programs and the environments they operate in, and doing the advance 'ground work' of consulting program stakeholders, reviewing the literature and conducting preliminary observational research. It requires evaluators to know — or at least have some idea of — the differences that make a difference: knowing first what patterns exist, detecting what changes in those patterns, and understanding whether those changes are meaningful.
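
Emergence is easiest to see in a toy model. The sketch below (again purely illustrative, not an evaluation tool) runs an elementary cellular automaton, Rule 30: each cell follows one trivial local rule, yet the global pattern that unfolds cannot be read off from the rule itself, which is the sense in which evaluators need data systems sensitive enough to detect patterns they did not specify in advance.

```python
# Emergence in miniature: an elementary cellular automaton (Rule 30).
# Each cell updates from a simple local rule based on itself and its two
# neighbours, yet a complex global pattern emerges over time.
RULE = 30
WIDTH, STEPS = 41, 20

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start with a single 'active' cell in the middle

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```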

Adaptation: With these new patterns and sensemaking processes in place, programs will consciously or unconsciously adapt to the changes created through the system. If a program is operating in an environment where complexity is part of the social, demographic, or economic context, even a stable, consistently run program will require adaptation simply to stay in the same place, because the environment is moving. This means sufficiently detailed record-keeping is needed — whether through program documents, reflective practice notes, meeting minutes, observations, etc. — to monitor what current practice is, link it with the decisions made using the feedback, emergent conditions and sensemaking from the previous stages, and then track what happens next.

Attractors: Not everything that emerges is useful, and not all feedback supports a program's goals. Attractors are patterns of activity that generate emergent behaviours and 'attract' resources — attention, time, funding — in a program. Developmental evaluators and their program clients seek to find attractors that are beneficial to the organization and amplify them to ensure sustained or even greater benefit. Negative (unhelpful) attractors do the opposite, so knowing when they form enables program staff to dampen their effects by adjusting and shifting program activities.
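
A stylized picture of an attractor (a toy dynamical system, not an evaluation method): the sketch below has two stable states, one at +1 and one at -1, separated by a tipping point at 0. Activity starting on either side of the tipping point gets pulled toward a different stable pattern, which is roughly the behaviour that amplifying a helpful attractor or dampening an unhelpful one is trying to exploit.

```python
# Attractors in a toy dynamical system, dx/dt = x - x**3, which has two
# stable states (+1 and -1) separated by an unstable tipping point at 0.
# Starting points on either side of 0 settle into different attractors.
def settle(x, dt=0.1, steps=200):
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

for start in (-1.5, -0.2, 0.2, 1.5):
    print(f"start {start:+.1f} -> settles near {settle(start):+.2f}")
```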

Self-organization and Co-evolution: Tied to all of this are the concepts of self-organization and co-evolution. The previous concepts come together to create systems that self-organize around these attractors. Complex systems do not allow us to control and predict behaviour, but we can direct actions, shape the system to some degree, and anticipate possible outcomes. Co-evolution is a bit of a misnomer in that it refers to the principle that organisms (and organizations) operating in complex environments mutually affect one another. This mutual influence might differ with each interaction, affecting each organization or organism differently, but it points to the fact that we do not exist in a vacuum. For evaluators, this means paying attention to the system(s) that the organization is operating in. Whereas normative, positivist science aims to reduce 'noise' and control for variation, in complex systems we can't do this. Network research, system mapping tools like causal loop diagrams and system dynamics models, gigamapping, or simple environmental scans can all contribute to the evaluation, helping the developmental evaluator know what forces might be influencing the program.
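
For readers curious what even a very small system-dynamics-style model looks like in practice, here is a hypothetical sketch of co-evolution: two organizations whose growth depends partly on each other. Every parameter is invented for illustration; real system dynamics work would derive the model structure from stakeholders and evidence, usually with dedicated modelling tools rather than a script like this.

```python
# A hypothetical, minimal system-dynamics-style sketch of co-evolution:
# two organizations whose capacity growth is partly driven by the other's
# capacity. Parameters are invented and purely illustrative.
org_a, org_b = 10.0, 2.0   # initial 'capacity' of each organization
dt = 1.0                   # time step (say, one quarter)

for quarter in range(1, 13):
    # each grows toward a ceiling of 100, with a boost from the other
    growth_a = 0.05 * org_a * (1 - org_a / 100) + 0.02 * org_b
    growth_b = 0.05 * org_b * (1 - org_b / 100) + 0.02 * org_a
    org_a += dt * growth_a
    org_b += dt * growth_b
    print(f"quarter {quarter:2d}: A = {org_a:5.1f}, B = {org_b:5.1f}")
```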

Ways of thinking about complexity

One of the most notable challenges for developmental evaluators and those seeking to employ developmental evaluation is thinking systemically about complexity. It means accepting non-linearity as a key principle in viewing a program and its context. It also means that context must be accounted for in the evaluation design. Simplistic assertions about methodological approaches ("I'm a qualitative evaluator" / "I'm a quantitative evaluator") will not work. Complex programs require attention to macro-level contexts and moment-by-moment activities simultaneously, and at the very least demand mixed-method approaches to their understanding.

Although much of the science of complexity is based on highly mathematical, quantitative work, its practice as a means of understanding programs is quantitative, qualitative and synthetic. It requires attention to the context and nuance that qualitative methods can reveal and the macro-level understanding that quantitative data can produce from many interactions.

It also means moving away from the language of program improvement towards one of development, and that might be the hardest part of the entire process. Development requires adapting the program, thinking and rethinking about its resources and processes, and integrating feedback into an ongoing set of adjustments that continue through the life cycle of the program. This requires a different kind of attention, different methods, and a different commitment from both a program and its evaluators.

In the coming posts I’ll look at how this attention gets realized in designing and redesigning the program as we move into developmental design.

 

complexity, education & learning, systems science, systems thinking

A toolkit for toolsets, skillsets and a mindset

For the last few years Censemaking has been a forum for exploring ideas around complexity, systems, design and social innovation. It has been a space for ideas and for considering some of the social and evaluative ramifications of complexity as it plays out in human systems.

This long-form blog has allowed for in-depth reflection on the issues that influence innovation and health in human systems.

Today I am launching a sister blog on the CENSE Research + Design site dedicated to exploring the methods and tools that can help us understand how to impact these systems. This new blog will feature short-form, practice-oriented articles that are aimed at building or augmenting the toolkit of the social innovation practitioner.

Censemaking will continue to explore issues in depth and will occasionally refer readers to the CENSE toolkit blog as a means of building the mindset, skillset and toolset of those readers interested in navigating complexity and designing innovation. The focus will be on systems science methods, design and design thinking tools, strategy, behavioural and applied social science techniques, and program evaluation.

I look forward to engaging readers through both venues and learning from you and with you.

Thanks for reading.

Photo Toolkit by Nick Farnhill used under Creative Commons Licence from Flickr.

complexity, design thinking, education & learning, emergence, evaluation

Evaluating Social Innovation For Social Impact

How do the innovation letters line up?

Earlier this week I had the pleasure of attending talks by Bryan Boyer of the Helsinki Design Lab and learning about the remarkable work they are doing in applying design to government and community life in Finland. While the audience's focus was on their application of design thinking, I found myself drawn to the issue of evaluation and the discussion around it when it came up.

One of the points raised was that design teams are often working with constraints that emphasize the designed product rather than its extended outcomes, making evaluation a challenge to adequately resource. Evaluation is not a term that comes up often in discussions of design, but as the moderator of one talk suggested, maybe it should.

I can’t agree more.

Design and Evaluation: A Natural Partnership

It has puzzled me to no end that we have these emergent fields of practice aimed at social good — social finance and social impact investing, social innovation, social benefit (PDF) — that have little built into their culture to assess what kind of influence they are having beyond the basics. Yet social innovation is rarely about simple basics; its influence is likely far larger, for better or worse.

What is the impact being invested in? What is the new thing of value being created? What is the benefit, and for whom? What else happened because we intervened?

Evaluation is often the last thing to go into a program budget (along with knowledge translation and exchange activities) and the first thing to get cut (along with the aforementioned KTE work) when things go wrong or budgets get tightened. Regrettably, our desire to act supersedes our desire to understand the implication of those actions. It is based on a fundamental idea that we know what we are doing and can predict its outcomes.

Yet, with social innovation, we are often doing things for the first time, combining known elements into an unknown corpus, or repurposing existing knowledge, skills and tools into new settings and situations. This is the innovation part. Novelty is pervasive, and with that comes opportunities for learning as well as the potential for us to do good as well as harm.

An Ethical Imperative?

There are reasons beyond product quality and accountability that one should take evaluation and strategic design for social innovation seriously.

Design thinking involves embracing failure (e.g., "fail often to succeed sooner" is the mantra espoused by product design firm IDEO) as a means of testing ideas and prototyping possible outcomes to generate an ideal fit. This is fine for ideas and products that can safely be isolated from their environment so that the variables associated with outcomes can be measured, if they are considered at all. It works well with benign issues, but gets more problematic when such interventions are aimed at the social sphere.

Unlike technological failures in the lab, innovations involving people do have costs. Clinical intervention trials go through a series of phases — from preclinical work through five stages to post-testing — to test their impact, gradually and cautiously scaling up, with detailed data collection and analysis accompanying each step, and it's still not perfect. Medical reporter Julia Belluz and I recently discussed this issue with students at the University of Toronto as part of a workshop on evidence and noted that as the complexity of the subject matter increases, the ability to rely on controlled studies decreases.

Complexity is typically the space that much of social innovation inhabits.

As the social realm — our communities, organizations and even global enterprises — is our lab, our interventions affect people 'out of the gate', and because this occurs in an inherently complex environment, I argue that the imperative to evaluate and share what is known about what we produce is critical if we are to innovate safely as well as effectively. Alas, we are far from that in social innovation.

Barriers and Opportunities for Evaluation-powered Social Innovation

There are a number of issues permeating the social innovation sector in its current form that need addressing if we are to better understand our impact.

  1. Becoming more than "the ideas people": I heard this phrase used at Bryan Boyer's talk hosted by the Social Innovation Generation group at MaRS. The moderator for the talk commented on how she wished she had taken more interest in statistics in university because it would have helped in assessing some of the impact of the work done in social innovation. There is a strong push for ideas in social innovation, but perhaps we should also include those who know how to make sense of and evaluate those ideas in our stable of talent and required skillsets for design teams.
  2. Guiding Theories & Methods: Having good ideas is one thing; implementing them is another. But tying them both together is the role of theory and models. Theories are hypotheses about the way things happen based on evidence, experience, and imagination. Strategic designers and social innovators rarely refer to theory in their presentations or work. I have little doubt that there are theories being used by these designers, but they are implicit, not explicit, and thus remain unevaluable, untestable, and beyond challenge by others. Some, like Frances Westley, have made the theories guiding their work explicit, but this is a rarity. Social theory, behaviour change models and theories of discovery beyond just Rogers' Diffusion of Innovation theory must be introduced to our work if we are to make better judgements about social innovation programs and assess their impact. Indeed, we need the kind of scholarship that applies theory and builds it as part of the culture of social innovation.
  3. Problem scope and methodological challenges: Scoping social innovation is an immensely wide and complicated task, requiring methods and tools that go beyond simple regression models or observational techniques. Evaluators working in social innovation require a high-level understanding of diverse methods and, I would argue, cannot be comfortable in only one methodological tradition unless they are part of a diverse team of evaluation professionals, something that is costly and resource-intensive. Those working in social innovation need to live the very credo of constant innovation in methods, tools and mindsets if they are to be effective at managing the changing conditions in social innovation and strategic design. This is not a field for the methodologically disinterested.
  4. Low attention to rigor and documentation: When social innovators and strategic designers do assess impact, too often there is little attention to methodological rigor. Ethnographies are presented with little attention to sampling, selection or data combination, statistics are used sparingly, and connections to theory or historical precedent are absent. Of course, there are exceptions, but this is hardly the rule. Building a culture of innovation within the field relies on the ability to take quality information from one context and apply it to another critically; if that information is absent, incomplete or of poor quality, the possibility of effective communication between projects and settings diminishes.
  5. Knowledge translation in social innovation: There are few fora for regularly sharing what we know in the kind of depth necessary to advance deep understanding of social innovation. There are a lot of one-off events, but few regular conferences or societies where social innovation is discussed and shared systematically. Design conferences tend towards the 'sage on the stage' model that favours high-profile speakers and agencies, while academic conferences favour research that is less applied or action-oriented. Couple that with the client-consultant arrangements common in social innovation work, and we get knowledge that is protected or privileged, and often little incentive to add a KT component to the budget.
  6. Poor cataloguing of research: To the last point, we have no formalized methods of determining the state of the art in social innovation because research and practice are not catalogued. Groups like the Helsinki Design Lab and Social Innovation Generation, with their vigorous attention to dissemination, are the exception, not the rule. Complicating matters is the interdisciplinary nature of social innovation. Where does one search for social innovation knowledge? What are the keywords? Innovation is not a good one (too general), but neither are more specialized disciplinary terms like economics, psychology, geography, engineering, finance, enterprise, or health. Without a shared nomenclature and networks to develop such a project, the knowledge that is made public is often left to the realm of unknown unknowns.

Moving forward, the challenge for social innovation is to find ways to make what it does more accessible to those beyond its current field of practice. Evaluation is one way to do this, but in pursuing such a course, the field needs to create space for evaluation to take place. Interestingly, FSG and the Center for Evaluation Innovation in the U.S. recently delivered a webinar on evaluating social innovation with the principal focus being on developmental evaluation, something I've written about at length.

Developmental evaluation is one approach, but as noted in the webinar: an organization needs to be a learning organization for this approach to work.

The question that I am left with is: is social innovation serious about social impact? If it is, how will it know it achieved it without evaluation?

And to echo my previous post: if we believe learning is essential to strategic design we must ask: How serious are we about learning? 

Tough questions, but the answers might illuminate the way forward to understanding social impact in social innovation.

* Photo credit from Deviant Art innovation_by_genlau.jpg used under Creative Commons Licence.

design thinking, education & learning, evaluation, innovation, research

Design Thinking or Design Thinking + Action?

 

There is a fine line between being genuinely creative, innovative and forward thinking and just being trendy.

The issue is not a trivial one, because good ideas can get buried when they become trendy, not because they are no longer any good, but because the original meaning behind the term and its very integrity get warped by the influx of products that poorly adhere to the spirit, meaning and intent of the original concepts. This is nowhere more evident than in the troika of concepts that sit at the centre of this blog: systems thinking, design thinking and knowledge translation. (eHealth seems to have lost some of its lustre.)

This issue was brought to light in a recent blog post by Tim Brown, CEO of the design and innovation firm IDEO. In the post, Brown responds to a piece on the design blog Core77 by Kevin McCullagh that spoke to the need to rethink the concept of design thinking and whether its popularity has outstripped its usefulness. It is this popularity that is killing the true discipline of design, by unleashing a wave of half-baked applications of design thinking on the world and passing them off as good practice.

There’s something odd going on when business and political leaders flatter design with potentially holding the key to such big and pressing problems, and the design community looks the other way.

McCullagh goes on to add that the term design thinking is growing out of favour with designers themselves:

Today, as business and governments start to take design thinking seriously, many designers and design experts are distancing themselves from the term. While I have often been dubbed a design thinker, and I’ve certainly dedicated my career to winning a more strategic role for design. But I was uncomfortable with the concept of design thinking from the outset. I was not the only member of the design community to have misgivings. The term was poorly defined, its proponents often implied that designers were merely unthinking doers, and it allowed smart talkers with little design talent to claim to represent the industry. Others worried about ‘overstretch’—the gap between design thinkers’ claims, and their knowledge, capabilities and ability to deliver on those promises.

This last point is worth noting, and it speaks to the problem of 'trendiness'. As the concept of design thinking has become commonplace, the rigor with which it was initially applied and the methods used to develop it seem to have been cast aside, or at least politely ignored, in favour of something more trendy so that everyone and anyone can be a design thinker. Whether this is a good thing or not is up for debate.

Tim Brown agrees, but only partially, adding:

I support much of what (McCullagh) has to say. Design thinking has to show impact if it is to be taken seriously. Designing is as much about doing as it is about thinking. Designers have much to learn from others who are more rigorous and analytical in their methodologies.

What I struggle with is the assertion that the economic downturn has taken the wind out of the sails of design thinking. My observation is just the opposite. I see organizations, corporate or otherwise, asking broader, more strategic, more interesting questions of designers than ever before. Whether as designers we are equipped to answer these questions may be another matter.

And herein lies the rub. Design thinking as a way of thinking has taken off, while design thinking methodologies (or rather, their study and evaluation) have languished. Yet, for design thinking to be effective in producing real change (as opposed to just new ways of thinking), its methods need to be either improved, or implemented better and evaluated. In short: design thinking must also include action.

I would surmise that it is up to designers, but also academic researchers, to take on this challenge and create opportunities to develop design thinking as a disciplinary focus within applied research faculties. Places like the University of Toronto's Rotman School of Business and the Ontario College of Art and Design's Strategic Innovation Lab are places to start, but so too should be schools of public health, social work and education. Only when the methods and the research behind them improve will design thinking escape the "trendy" label and endure as a field of sustained innovation.

Uncategorized

Knowledge translation in public health: Progress or doing the wrong things righter?

 

Knowledge translation has evolved from a term in relative obscurity to something that has become commonplace in much of the discussion on health care and public health. At its heart, knowledge translation is captured in the following description:

The Canadian Institutes of Health Research (CIHR) has referred to knowledge translation as “a dynamic and iterative process that includes synthesis, dissemination, exchange and ethically sound application of knowledge to improve the health of Canadians, provide more effective health services and products and strengthen the health care system. The very fact that this term has gained visibility in health research represents a major shift in our priorities. In the past, considerable amounts of money have been spent on clinical research while relatively little attention has been paid to ensuring that the findings of research were captured by its potential beneficiaries. The biomedical and applied research enterprise represents an annual investment of $55 billion US worldwide (Haines, A and Hayes, B., 1998)!”

The reasons for this interest go beyond just money towards population health impact.

It has been estimated that it takes more than 17 years to translate evidence generated from discovery into health care practice (Balas & Boren, 2000) and of that evidence base, only 14 per cent of it is believed to enter day-to-day clinical practice (Westfall, Mold & Fagan, 2007). Some believe that this is an under-estimation and call for considerably more research in the area of dissemination and implementation if evidence-informed practice is to ever be achieved (Trochim, 2010).

This past week the NIH and its Office of Behavioral and Social Science Research held their third conference on the Science of Dissemination and Implementation, with a focus on methods and measurement. The conference was a success from my point of view in that it provided a forum for discussion and dialogue on various models, methodologies and challenges; however, one issue that wasn't covered much was the tension between reliability and validity. Our traditional models of research emphasize the former (the degree to which the same activity produces consistent results) at the expense of the latter (whether the findings translate into real differences or changes in the world), and this fundamental tension sits at the heart of knowledge translation. This is best demonstrated in the appalling rates of uptake of most clinical practice guidelines into everyday health care activities [see here for one of many examples]. Russ Glasgow and others have argued that we need to do much more in shaping research that has external validity.

The elephant in the large room was this issue, and the more we continue to ignore it, the more we risk doing what management theorist Russell Ackoff described as "doing the wrong things righter." That is, we continue to develop evidence in the hope that if it is just good enough, uses the best methods possible, and boldly proclaims the "truth," people will listen. Yet the message from this conference was that we don't even know what people are listening to in the first place, let alone what they do with what they hear. Are the messages not getting through? Are they getting through, but being misunderstood (or not understood at all)? Are they being ignored altogether? Or, as I have seen, are they being listened to, but then discarded when they are found to be impractical for their context?

An example of this is web-based tools for collaboration or e-communities of practice. The idea of using tools like Facebook, Twitter, and LinkedIn all sounds great in theory, but if your local public health unit won't allow you to use any of these tools on the job, what good does it do? If you don't have the bandwidth in your local community to watch videos online without them stuttering and taking a long time to download, how reasonable is it to expect that YouTube will have anything to offer?

These contextual questions are rarely looked at. It was encouraging to hear people like Allan Best and colleagues speak of systems models and the need for more qualitative (i.e., contextually focused) research, which was well received, but that was about it. A much wider dialogue about understanding the context in which knowledge is used and translated (or not) would do much to determine whether we're making progress or just doing the same wrong things only better.

If you’re in the Toronto area and interested in discussing this topic further, a Lunch-and-Learn event is being held on March 25th from 12-1pm at the Health Sciences Building at the University of Toronto as part of the CoNEKTR series hosted at the Dalla Lana School of Public Health.

complexity, education & learning, public health, research, social media

Storytelling in the Age of Twitter

 

Storytelling has been on my mind this week. Not the kind of stories many of us heard as children, like those in Mother Goose, but rather the ones we more often tell through chance encounters in the hallway or tweet about over the Internet. Yet, like Mother Goose, many of the stories we tell include narratives that feature archetypes and draw on a long history of shared knowledge between the storyteller and her or his audience. Unlike in cultures where storytelling is fashioned in a manner that requires sustained attention and considerable skill and practice (think of the many First Nations and Aboriginal communities worldwide, or the Irish Seanachaidhean), tools like Twitter, blogs and Facebook enable us to tell stories in new, short-form ways to audiences we might not even know about. Sorting through the tweets of 150 different people per day requires a process of sensemaking that is different from the one used to ascertain meaning in a long-form story. Both are valuable.

Although it is tempting to privilege long-form storytelling, the kind found in essays, feature films, and books, it may be those tweets that better fit with our cognitive tendencies for sensemaking. If you think about your average day, you might interact with a few dozen people face-to-face and perhaps many dozens more through your social networks. How many of those interactions featured a full-fledged story; one that had a clear start, middle, end and coherence that could only be gathered from the story itself, not past relationships with the storyteller? Probably very few. Instead, we much more often speak, write, and even film in narrative fragments; small chunks co-constructed and contextually bound. Think about any buzzword or catch phrase and you can see this in action. From ‘whassup‘ to ‘getting Kanyed‘, these terms have meanings that go far beyond the obvious and can be conveyed with one or two words. Twitter represents this very well with its 140 character limit.

This past week I spent three days with a great group of people learning about complexity-based approaches to sensemaking using narrative fragments, software and a variety of facilitation techniques aimed at taking the science of complexity into the realm of practical change with the folks at Cognitive Edge. What this accreditation process did was provide a theory-based set of tools and strategies for making sense of vast amounts of information in the form of stories and narrative fragments for purposes of decision-making and research. The method acknowledges the complex spaces in which many organizational decisions are made and, through the Cynefin framework, helps groups make sense of the many bits of knowledge they generate and share that often go unacknowledged. It provides a theoretically grounded and data-driven method of making sense of large quantities of narrative fragments: the kind we tell in organizations and communities.

From a systems perspective, viewing knowledge exchange and generation through the narrative fragments we produce is far more likely to yield insights about how the system operates, and to provide anticipatory guidance for decision-making, than waiting for fully formed stories to appear and analyzing those. This, like nearly everything in systems thinking, requires a mind-shift from the linear and whole to the non-linear and fragmented. But thanks to Michael Cheveldave and Dave Snowden and their team, this non-linearity need not be incoherent. I'd recommend checking out their amazing website for a whole list of novel and open-source methods of applying cognitive and complexity science to problem identification and intelligence.

Thanks Michael and the Toronto knowledge workers group for a great three days! I’m looking at my tweets in a whole new way.

complexity, research, systems science, systems thinking

Mindful Systems

 

The benefits of standing still and looking around at the systems around us never cease to reveal themselves.

Mindfulness is something most often associated with individuals. It is a pillar of Buddhist practice and is increasingly being used in clinical settings to help people deal with stress and pain.

Mindfulness sometimes gets unfairly linked to individuals, groups and movements that, for lack of a better term, could be described as 'flaky'. Its association with many spiritual movements can also be problematic for those who are looking for something more aligned with science and less about religion or spirituality. Yet the spiritual and scientific benefits of mindfulness need not be incompatible. Google, while innovative and often unusual in the way it runs its business, is certainly not flaky. As a company, it understands the power of mindfulness and has hosted a few talks on its application to everyday life and its neuroscientific foundations and benefits. For companies like Google, promoting mindfulness yields health benefits for its individual staff members, but also for its bottom line, because being mindful as a company allows it to see trends and the emergence of new patterns in how people use the Internet and search for information. Indeed, one could say that Google, with its search engine and productivity tools, could be the ultimate mindfulness company, aiding us to become aware of the world around us (on the Internet, anyway).

We are often profoundly ignorant of the systems we are a part of, and while the idea of having us all sit and meditate might sound appealing (particularly to those of us who could use a moment of peace!), it is not a reasonable proposition. One of the things meditation does is enable the meditator to become aware of themselves and their surroundings, often through a type of mental visualization. Visualization allows the observer to see the relationships between entities in a system, their proximity, and the extended relationships beyond themselves. In systems research and evaluation, this might be done through the application of social network analysis or a system dynamics model. Tools like these, which enhance our ability to visualize systems, come close to being mindful systems thinking tools.
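
As a concrete (and entirely made-up) example of the kind of analysis this points to, the sketch below uses the open-source networkx library to build a small collaboration network and compute betweenness centrality, one rough way of surfacing the brokers and 'invisible connections' in a system. The names and ties are invented; a real analysis would start from survey or administrative data about who actually works with whom.

```python
# A small, hypothetical collaboration network analysed with networkx
# (a third-party library: pip install networkx). Names and ties are invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Alice", "Bob"), ("Alice", "Carol"), ("Bob", "Carol"),  # one team
    ("Dana", "Eve"), ("Dana", "Frank"), ("Eve", "Frank"),    # another team
    ("Carol", "Dana"),                                       # the bridge between them
])

# Betweenness centrality highlights brokers who connect otherwise separate groups.
for person, score in sorted(nx.betweenness_centrality(G).items(),
                            key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```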

My colleague Tim Huerta and I have been developing methods and strategies to incorporate social network analysis into organizational decision making and published a paper in 2006 on how this could be done to support the development of communities of practice in tobacco control.  I’m also working on creating a system dynamics model of the relationships within the gambling system in Ontario with David Korn and Jennifer Reynolds.

By creating visuals of what the system looks like, consciousness raising takes place and the invisible connections become visible. By making things visible, the impact, reach, scope and potential opportunities for collaboration and action become apparent. And with awareness comes insight into the connections between actions and consequences (past, current and potential), which allows us to strategize ways to minimize or amplify such effects as necessary.