Month: November 2010

behaviour change, complexity, eHealth, innovation, knowledge translation

The Face-to-Face Complexity of eHealth & Knowledge Exchange

The Public Health Agency of Canada's 2010 Knowledge Forum on Chronic Disease was held last night and today in Ottawa, with a focus on social media. The invitation-only affair was designed to bring together a diverse array of researchers, practitioners, policy developers, consultants and administrators who work with social media in some capacity. Experts and non-experts alike gathered to learn about the state of the art of social media and how it can support public health. By state of the art, I refer not to the technological side of things, but rather the true art of public health, much like that discussed earlier this year at the University of Toronto.

Last night began with a presentation from Leanne Labelle that got us all thinking about how social media is radically different in the speed of its adoption and the breadth of its social impact, drawing inspiration from this video from Erik Qualman's Socialnomics website.

Today we got down to business and started working through some of the issues that we face as a field when adopting social media. I would probably consider myself among the most experienced users in the audience, yet still gained so much from the day. Although I learned some things about how to use social media in new ways, what I learned most was how others use it and what struggles they have. This is always a useful reminder.

What stuck out most was a presentation and related discussion from Christopher Wilson, of the University of Ottawa's Centre on Governance and a consultant on governance issues. In speaking about the challenges of collaboration, Christopher pointed to the problems of a 'one-size-fits-all' strategy, using a diagram illustrating the fundamental differences between engagement at a small scale (under 25 people) and the mass collaboration that folks like Clay Shirky, Don Tapscott, and others write about. His diagram looks like this:

Technology Spectrum of Social Collaboration by Christopher Wilson

What Wilson stressed to the audience was the role that complexity plays in all of this. Specifically, he stated:

The more complex and interdependent things become, the more people need to be aware of the changing context and the changes in shared understanding.

As part of this, groups are required to engage in ways that enable them to deal with this complexity. In his experience, this can’t be done exclusively online. He further stated:

As complexity increases, the need for offline engagement increases.

I couldn’t agree more. In my work with community organizing and eHealth promotion, I’ve found the most effective means of fostering collaboration is to blend the two forms of knowledge generation and exchange together. The model that my research team and I developed is called the CoNEKTR (Complexity, Networks, EHealth, and Knowledge Translation Research Model).

This model combines face-to-face methods of organizing and ideation with a social media strategy that connects people between events. The CoNEKTR model has been applied in many forms, but in each case the need to bring the power of social media and rich media together with in-person dialogue has been front and centre. Using complexity science principles to guide the process, powered by both social media and face-to-face engagement, the ability to take what we know, contextualize it, and transform it into something we can act on seems to me the best way forward in dealing with problems of chronic disease that are so knotted and pervasive, yet demand rapid responses from public health.

complexity, emergence, knowledge translation, public health, systems science

Evidence Democratization in Complex Systems

In the social media and marketing world there is a concept called “brand democratization”, which refers to the notion of having your customers contribute to and partly shape a brand’s identity. Marty Neumeier, who has written extensively on branding, asserts that a brand isn’t what the producers of the product say it is, but rather what the customers say it is.

The notion that the meaning and identity of a brand lies outside the control of those who develop a product is unsettling for many in business. Social media only amplifies this feeling. A social media consultant I work with once relayed a story about a client (a big company) that asked how it could use social media to control its message more effectively; the client was unsettled to hear that "control" is not what social media is about, and frustrated at the thought that it was no longer in the driver's seat.

To some extent, we are seeing the same thing take place in the health sector, particularly in areas of great complexity. As complex chronic conditions become more prevalent and the number of people with multiple chronic conditions represents a larger proportion of the population (a near inevitability in societies with aging demographics and better health care), complexity will gain a greater place in our health care policy discourse.

Evidence-based medicine holds a very clear sense of what is good and not-so-good quality evidence. Hierarchies abound, and medical students are taught early on what the best evidence is and how it is generated. The problem with the "best evidence" in health care is that it typically comes from studies focused within a rather narrow contextual band of activity, using designs that aim for simplicity, not complexity. In standard research programs the aim is to reduce variation, not embrace it.

This is nearly the opposite of complex systems, where variation is, to a point, the key to learning and adaptation. Making sense within a complex system requires a deep appreciation of context and an understanding of one's position within the system. However, because such positions are relative, the ability to hold "expertise" in a system is limited. Indeed, it is many people, not only a few, who occupy positions of great knowledge with the ability to re-interpret and generate evidence.

Does this create evidence democratization? When there is so much information from many sources, a need for contextualized feedback, and many actors with useful experience and knowledge occupying different positions related to a problem, it seems that the answer is "yes".

Social media operates within an environment of complexity due to the wide-scale involvement of diverse actors working in a coordinated, yet self-organized manner, across many boundaries within multiple overlapping contexts. What is "true" within a social media sphere is only made so by the application of that knowledge within a particular context. A blog only has meaning to its creator and her or his audience, not the entire system. Yet the actors engaged with that content may take it and rework it, retweet it, or post it in a new environment, where it is transformed into something else. If that "something else" has value, it lives onward.

In health contexts, that value might be very different depending on the condition and the manner in which that knowledge is applied. From a traditional evidence-based health approach, this is utterly terrifying. How can evidence be re-interpreted? What are the harms associated with taking something from one context and applying it to another for which it wasn't designed? These are real concerns, yet it is hard to imagine that there isn't also some potential value in exploring how this plays out in chronic and complex conditions, given that the evidence is rarely that "hard" going in.

Are we on the cusp of a new wave of evidence democratization? And if so, are we going to be healthier as a result of it? More innovative? Or in deep trouble?

design thinking, education & learning, evaluation, innovation

More Design Thinking & Evaluation

Capturing Design in Evaluation (CameraNight by Dream Sky, Used under Creative Commons License)

On the last day of the American Evaluation Association conference, which wrapped up on Saturday, I participated in an interactive session on design thinking and evaluation by a group from the Savannah College of Art and Design.

One of the first things that was presented was some of the language of design thinking for those in the audience who are not accustomed to this way of approaching problems (which I suspect was most of those in attendance).

At the heart of this was the importance of praxis, which is the link between theory, design principles and practice (see below).

Design Thinking

Part of this perspective is the belief that design is less a field about the creation of things in their own right than a problem-solving discipline.

Design is a problem solving discipline

When conceived of this way, it becomes easier to see why design thinking is so important and more than a passing fad. Inherent in this way of thinking are principles that demand inclusion of multiple perspectives on the problem, collaborative ideation, and purposeful wandering through a subject matter.

Another key idea is to view design as serious play to support learning.

Imagine taking the perspective of design as serious play in support of learning.

The presenters also introduced a quote from Bruce Mau, which I will inaccurately capture here, but is akin to this:

One of the revelations in the studio is that life is something that we create every single day. We create our space and place.

Within this approach is a shift from sympathy with others in the world to empathy. It is less about evaluating the world than about engaging with it to come up with new insights that can inform its further development. This is really a nod (in my view) to developmental evaluation.

The audience was enthralled and engaged and, I hope, willing to take the concept of design and design thinking further in their work as evaluators. In doing so, I can only hope that evaluation becomes one of the homes for design thinking beyond the realm of business and industrial arts.

design thinking, education & learning, evaluation, innovation, research

Design Thinking & Evaluation

Design Thinking Meets Evaluation (by Lumaxart, Creative Commons Licence)

This morning at the American Evaluation Association meeting in San Antonio I attended a session very near and dear to my heart: design thinking and evaluation.

I have been a staunch believer that design thinking ought to be one of the most prominent tools for evaluators and that evaluation ought to be one of the principal components of any design thinking strategy. This morning, I was with my “peeps”.

Specifically, I was with Ching Ching Yap, Christine Miller and Robert Fee and about 35 other early risers hoping to learn about ways in which the Savannah College of Art and Design (SCAD) uses design thinking in support of their programs and evaluations.

The presenters went through a series of outlines for design thinking and what it is (more on that in a follow-up post), but what I wanted to focus on here was the way in which evaluation and design thinking fits together more broadly.

Design thinking is an approach that encourages participatory engagement in planning and setting out objectives, as well as in ideation, development, prototyping, testing, and refinement. In evaluation terms, it is akin to action research and utilization-focused evaluation (PDF). But perhaps its closest correlate is developmental evaluation (DE). DE is an approach that uses complexity-science concepts to inform an iterative approach to evaluation centred on innovation, the discovery of something new (or the adaptation of something into something else), and the application of that knowledge to problem solving.

Indeed, the speakers today positioned design thinking as a means of problem solving.

Evaluation, at least DE, is about problem solving: collecting data as a form of feedback to inform the next iteration of decision making. It is also a form of evaluation that is intimately connected to program planning.

What design thinking offers is a way to extend that planning in new directions that optimize opportunities for feedback, new information, participation, and creative interaction. Design thinking approaches, like the workshop today, also focus on people's felt needs and experiences, not just their ideas. In our session today, six audience members were recruited to play the role of three facets of a store clerk or three facets of a customer — the rational, emotional and executive mind of each. A customer comes looking for a solution to a home improvement/repair problem, not sure of what she needs, while the store clerk tries to help.

What this design-oriented approach does is greatly enhance the participants' sense of the whole: the needs, desires and fears both parties are dealing with, not just the executive or rational elements. More importantly, this strategy looks at how these different components might interact by simulating a condition in which they might play out. Time didn't allow us to explore what might have happened had we NOT done this and simply designed an evaluation to capture the experience, but I can confidently say that this exercise got me thinking about all the different elements that could, and indeed SHOULD, be considered when trying to understand and evaluate an interaction.

If design thinking isn't yet a core competency of evaluation, perhaps we should consider making it one.

complexity, evaluation, research, systems thinking

Systems Thinking, Logic Models and Evaluation

San Antonio at Night, by Corey Leopold (CC License)

The American Evaluation Association conference is on right now in San Antonio and with hundreds of sessions spread over four days it is hard to focus on just one thing. For those interested in systems approaches to evaluation, the conference has had a wealth of learning opportunities.

The highlight was a session on systems approaches to understanding one of evaluation’s staples: the program logic model.

The speakers, Patricia Rogers from the Royal Melbourne Institute of Technology, and consultants Richard Hummelbrunner and Bob Williams, spoke to the challenges posed by the traditional forms of logic models by looking at the concepts of beauty, truth and justice. These models tend to take the shape of the box model (the approach most common in North America), the outcome hierarchy model, or the logical framework, which is popular in international development work.

The latter model was the focus of Hummelbrunner's talk, which critiqued the 'log frame' and showed how its highly structured way of conceptualizing programs tends to lead to a preoccupation with the wrong things and a rigidity in how programs are approached. Logframes work well in environments that are linear and straightforward, and in situations where funders need simple, rapid overviews of programs. But as Hummelbrunner says:

Logframes fail in messy environments

The reason is often that people make assumptions of simplicity when really such programs are complicated or complex. Patricia Rogers illustrated ways of conceptualizing programs using the traditional box models, but showing how different program outcomes could emerge from one program, or that there may be the need to have multiple programs working simultaneously to achieve a particular outcome.

What Rogers emphasized was the need for logic models to have a sense of beauty to them.

Logic models need to be beautiful, to energize people. They can't just be the equivalent of a wiring diagram for a program.

According to Rogers, the process of developing a logic model is most effective when it maintains harmony between the program and the people within it. Too often such model development processes are dispiriting events rather than exciting ones.

Bob Williams concluded the session by furthering the discussion of beauty, truth and justice, expanding the definitions of these terms within the context of logic models. Beauty is the essence of relationships, which is what logic models show. Truth is about providing opportunities for multiple perspectives on a program. And boundary critique is an opportunity for ethical decision making.

On that last point, Williams made some important arguments about how, in systems related research and evaluation, the act of choosing a boundary is a profound ethical decision. Who is in, who is out, what counts and what does not are all critical questions to the issue of justice.

To conclude, Williams also challenged us to look at models in new ways, asking:

Why should models be the servant of data, rather than have data serve the models?

In this last point, Williams highlights the current debates within the knowledge management community, which is grappling with a decade in which trillions of points of data have been generated to make policy and programming decisions, yet better decisions still elude us. Is more data better?

The session was a wonderful punctuation to the day. It really advanced the discussion on something as fundamental as logic models, yet took us to a new set of places by considering them as things of artful design and beauty, as ethical decision-making tools, and as vehicles for exploring the truths that we live. Pretty profound stuff for a session on something as seemingly benign as a planning tool.

The session ended with a great question from Bob Williams that speaks to why systems are also about the people within them, as he implored evaluators to consider:

Why don't we start with the people first instead of the intervention, rather than the other way around as we normally do?

evaluation, systems thinking

American Evaluation Association Conference

Over the next few days I'll be attending the American Evaluation Association conference in San Antonio, Texas. The conference is the biggest gathering of evaluators in the world. Depending on the Internet connections, I will try to do some live tweeting from my @cdnorman account and some blogging reflections along the way, so do follow along if you're interested. In addition to presenting some of the work that I've been engaged in on team science with my colleagues at the University of British Columbia and Texas Tech University, I will be looking to connect more with those groups and individuals doing work on systems evaluation and developmental evaluation, with an eye to spotting the trends and developments (no pun intended) in those fields.

Evaluation is an interesting area to be a part of. It has no disciplinary home, yet it has a set of common practices alongside much diversity, and it brings together a fascinating blend of people from all walks of professional life.

Stay tuned.

complexity, emergence, public health, systems science, systems thinking

Recombination: The Missing Link Between Linear and Non-Linear Views of Change

I teach a course in health behaviour change and one in systems thinking perspectives on public health. Both courses complement each other and both deal with change. However, most of the major theories of behaviour change deal with the subject in a straightforward, linear manner. Models and theories like the Health Belief Model, Theory of Reasoned Action, and Social Cognitive Theory all have elements explicit or implicit to them that suggest change occurs in a largely linear manner from problem state to desired state.

One of the more popular models of change is the Transtheoretical Model, which includes the concept of Stages of Change. Developed by James Prochaska and colleagues at the University of Rhode Island (among others), the model has become widely popular and is used all over the world to guide change efforts. The problem is that the evidence for its effectiveness, despite the logic it brings with it, is weak.

Robert West, the editor of the journal Addiction, and others have issued a rather stinging set of criticisms of the Transtheoretical Model's Stages of Change concept, pointing to evidence suggesting that as many (if not more) people quit smoking and similar behaviours with no apparent plan in place: "it just happened".

Indeed, the data suggest that Stages of Change is not that strong a predictor of eventual change, yet its popularity suggests something that goes beyond evidence. At its root is the idea of "ready, set, go", which taps into our deep-seated interest in making plans and moving ahead in a straightforward manner. In short, it fits linear thinking to a tee.

Over time, proponents of the Stages of Change theory and related models have asserted that people move forwards and backwards through the stages, and that it is not simply a one-way view of change, but in either case the result is still some form of linear trajectory.

What makes behaviour change theories like the TTM problematic from the perspective of complexity is that they are linear. Yet linearity is baked into the way we define the problems in the first place. These theories are all built on some form of cognitive-rational foundation that takes as its core the idea that information is the starting point for change, and that the way information is perceived and worked through serves as a touchpoint for further motivational activities.

Embedded within this assumption is the idea that, once configured, information is organized in a relatively stable, consistent manner. It does not account for the ways in which our memories, circumstances, situations, and the addition of new information can not only change what we know, but also the way in which we know it. Thus, recombination of information leads to new insights and activities, not all of which necessarily support the trajectory that was initiated.

Ken Resnicow and Scott Page start to probe some of this terrain in their article, published a couple of years ago, looking at quantum change. The article, which was widely discussed, argues that the approach we take to behaviour change is misaligned with much of what we know about complex adaptive systems. And to this end, the human mind and body are indeed complex adaptive systems in many respects. Certainly our social worlds fit this description.

If this is the case, and we take this idea that recombination of information can and does occur, it has profound implications for how we develop social institutions and the way in which we support individuals looking to make changes. It means not expecting that changes will stay in place, but rather always anticipating the possibility that something might shift and dramatic transformations could occur.

Flexible, adaptive strategies, and those that attend to context and the constant, dynamic flow of information, will provide more useful models for change in this worldview. This might not repudiate the models we use now, but it certainly casts new light on the directionality of change that they invoke. And in simply shifting those arrows around, we open the possibility of understanding change in a wider way, one that might eventually take complexity, and learning, into account more fully.