Knowledge translation — and its affiliated terms knowledge exchange, knowledge integration and knowledge mobilization — was coined to describe the process of moving what is known into what is done in health, across the spectrum of science, practice, policy and the public’s health. As health issues become more complex due to the intertwining of demographics, technology, science, and cultural transformations, the need to better understand evidence and its impact on health has never been greater. Questions remain: has demand met supply? How are the health professions dealing with this equation?
The Canadian Institutes of Health Research (CIHR), one of the earliest champions of the concept of knowledge translation in research, defines it as:
a dynamic and iterative process that includes synthesis, dissemination, exchange and ethically-sound application of knowledge to improve the health of Canadians (sic), provide more effective health services and products and strengthen the health care system.
These ideas are expanded below:
Synthesis – Synthesis, in this context, means the contextualization and integration of research findings of individual research studies within the larger body of knowledge on the topic. A synthesis must be reproducible and transparent in its methods, using quantitative and/or qualitative methods. It could take the form of a systematic review, follow the methods developed by the Cochrane Collaboration, result from a consensus conference or expert panel or synthesize qualitative or quantitative results. Realist syntheses, narrative syntheses, meta-analyses, meta-syntheses and practice guidelines are all forms of synthesis. Resources related to synthesis are available.
Dissemination – Dissemination involves identifying the appropriate audience and tailoring the message and medium to that audience. Dissemination activities can include such things as summaries for / briefings to stakeholders, educational sessions with patients, practitioners and/or policy makers, engaging knowledge users in developing and executing a dissemination/implementation plan, tool creation, and media engagement.
Exchange – The exchange of knowledge refers to the interaction between the knowledge user and the researcher, resulting in mutual learning. According to the Canadian Health Services Research Foundation (CHSRF), the definition of knowledge exchange is “collaborative problem-solving between researchers and decision makers that happens through linkage and exchange. Effective knowledge exchange involves interaction between knowledge users and researchers and results in mutual learning through the process of planning, producing, disseminating, and applying existing or new research in decision-making.”
Ethically-sound application of knowledge – Ethically-sound KT activities for improved health are those that are consistent with ethical principles and norms, social values, as well as legal and other regulatory frameworks – while keeping in mind that principles, values and laws can compete among and between each other at any given point in time. The term application is used to refer to the iterative process by which knowledge is put into practice.
In short, knowledge translation is about taking what we learn and know from evidence, sharing that knowledge with others and assisting them to make useful health choices in practice and policy.
This often involves communicating across contexts, disciplines, and roles, among scientists, clinicians, policy makers and the public alike. In a health environment that is increasingly complex, the ability to communicate across boundaries is no longer an advantage; it is an essential skill. While we may not always have the right language, we can translate meaning through stories.
But if stories are to be effective they need to be valued.
The value of storytelling
I’ve seen health professionals — scientists and clinicians — roll their eyes when you mention storytelling in a work context. It is as if the only legitimate role for stories is to communicate with children (which University of Alberta researchers are exploring as a tool for sharing health knowledge with parents). Yet, it is through stories that most people share what they know in every other context; why would it be different in health?
Perhaps it is the connotation that stories are ‘made up’ like children’s bedtime tales, but one need only look to journalism to find that we’ve been making ‘stories’ a central part of our life every day. We listen to drive-time radio for stories about the traffic conditions, we watch, download and listen to news stories filed by professional journalists and citizen bloggers alike on mainstream media, Twitter, YouTube, Facebook along with myriad sources across the web. Last week we were glued to various sources to learn stories — some of them false — and create stories about the events of the Boston Marathon bombings.
Stories convey multiple threads of information and put them in a coherent context.
Stories are coherence engines.
Valuing knowledge translation
If knowledge translation is important, then it should be reflected in research priorities and in evidence for its impact on the system across different disciplines. Dr Shannon Scott and her U of A team recently conducted a systematic review of knowledge translation strategies in the allied health professions and found that the field was full of low-quality studies that made it impossible to make firm statements about which methods were best. That team has recently proposed a systematic review looking at how the arts and visual methods can further contribute to KT in practice, although it is likely that the same issue of methodological quality will come into play here, too.
What she and her team are examining is the process of sharing stories and, from a research perspective, sharing stories appears not to have been deemed worth investing in scientifically. At least, not enough to generate a lot of studies and good evidence.
One could argue that knowledge translation is still new and that it takes time to generate such evidence. That is partly true, but it is also an easy prop for those who want to avoid the messiness that comes with communication (and its problematic research context), learning from others, and creating more equitable information spaces, which is what knowledge translation ultimately does. Knowledge translation has also been in use for almost 20 years, so in that time — even with the most dismal assessment of how long it takes to put knowledge into practice — we should be seeing some decent research published.
KT is fundamentally about sharing. Journalists are rewarded for sharing — the more they share and the more people they share with (as measured by readers, listeners, viewers, etc.), the more successful they are in their work. Teachers are rewarded for sharing because that means they are teaching people. Librarians are rewarded for sharing because that means people are checking out books and using the resources in their library.
We don’t apply the same standard to academic research, even though we have some crude metrics to measure reach and impact, and there is virtually no metric for the degree to which clinicians share among themselves. Maybe this needs to change.
I have scientific colleagues who are fierce in the face of their most strident academic critics and have delivered keynotes to auditoriums filled with researchers, yet who are nearly paralyzed at the prospect of speaking to the public. This is not fear of public speaking; it’s fear of speaking to the public.
Should they be? I don’t think speaking to the public should be expected to be enjoyable for everyone, but neither are statistical calculations, ethics applications, or conference poster presentations, and we still expect scientists to do those things. We still expect nurses, doctors, psychologists, medical technicians and social workers to traverse complex social problems and talk to their patients in an open and honest way.
Why is it that when scientists are speaking to policy makers, clinicians to scientists, policy makers to the public, or any professional to a colleague from another discipline, specialty or division, we decide it’s not critical for them to make the effort?
Why don’t we do the research to support it?
Why is it OK not to do KT because it’s uncomfortable, awkward, difficult or confusing?
Declining interest, rising demand
It is perhaps for reasons like this that knowledge translation is so poorly understood and so rarely taken up as a focus for research. Looking at Google NGram data (which tracks mentions of specific topics in books and publications), we see a steady rise in citations until about 2003, followed by a levelling off. Keep in mind that the levelling begins before social media became widespread. In the years after Twitter, Facebook and YouTube — arguably the most powerful communications media we have for doing knowledge translation widely (but perhaps not deeply) — there is no sharp increase.
Below are the citations for the terms knowledge translation, knowledge exchange, and knowledge integration from 1996 (when the Web first started gaining wide use beyond academia and the military) to 2008, the latest year for which data are available. Note that the numbers reflect general mentions as a percentage of overall terms, so they are relative, not absolute values.
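To make the relative-versus-absolute distinction concrete, here is a minimal Python sketch of how such a relative frequency is computed: a term’s yearly count divided by the total number of terms published that year. The counts below are invented purely for illustration, not actual NGram figures.

```python
# Hypothetical illustration of NGram-style relative frequencies.
# All counts below are made up for demonstration purposes.

def relative_frequency(term_count: int, total_terms: int) -> float:
    """Share of all published terms that one term accounts for."""
    return term_count / total_terms

# Invented yearly (term count, total terms) pairs
yearly = {
    2001: (1_200, 4_000_000_000),
    2002: (1_500, 4_200_000_000),
    2003: (1_600, 4_500_000_000),
}

for year, (count, total) in sorted(yearly.items()):
    print(f"{year}: {relative_frequency(count, total):.2e}")
```

Note that a term’s raw count can keep rising while its relative frequency levels off, simply because the total volume of publishing grows too — which is why the charts below show relative values.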
Figure 1: Google NGram Data for KT, KE & KI: 1996-2008
Is there so much other stuff to talk about in 2013 that the relative importance of knowledge translation is diminished?
A look at Google Trend data using the same terms finds that not only are these concepts not growing, their mention is actually shrinking.
Looking at the three terms, we see that all three concepts have declined over time. During these years — 2004-2013 — we saw not only the birth of social media, but the rise of Internet-enabled handheld devices that allow knowledge to be shared anywhere there is a data signal. We now have apps and nearly all of the Internet’s resources in our pockets, and yet we are seeing a decline in the use of these terms.
Figure 2: Google Trend Data for KT, KE & KI: 1996-2013
So, to review: we have a body of KT evidence that is problematic and incomplete; we have declining use of the terms; and at the very same time we have a sharp rise in available tools and technologies to share information quickly, alongside a continued, steady demand for more information to inform health decisions by providers, patients, policy makers and insurers.
Yes, the data presented here are not perfect. But does it not make sense that there should at least be some trend upward if knowledge translation is valued? Should we not see some shift to more research, better research evidence, and greater interest given the tools and scope of communications we have through social media?
This raises the question: is knowledge translation in health too important to leave to health professionals?
Future posts will look at this question in greater depth. Stay tuned.
* Blog has been updated since original post
Earlier this week I had the pleasure of attending talks from Bryan Boyer of the Helsinki Design Lab and learning about the remarkable work they are doing in applying design to government and community life in Finland. While the talks focused on their application of design thinking, I found myself drawn to the issue of evaluation and the discussion around it when it came up.
One of the points raised was that design teams are often working with constraints that emphasize the designed product rather than its extended outcome, making evaluation a challenge to adequately resource. Evaluation is not a term that comes up often in discussions of design, but as the moderator of one talk suggested, maybe it should.
I can’t agree more.
Design and Evaluation: A Natural Partnership
It has puzzled me to no end that we have these emergent fields of practice aimed at social good – social finance and social impact investing, social innovation, social benefit – that have little built into their culture to assess what kind of influence they are having beyond the basics. Yet social innovation is rarely about simple basics; its influence is likely far larger, for better or worse.
What is the impact being invested in? What is the new thing of value being created? What is the benefit, and for whom? What else happened because we intervened?
Evaluation is often the last thing to go into a program budget (along with knowledge translation and exchange activities) and the first thing to get cut (along with the aforementioned KTE work) when things go wrong or budgets get tightened. Regrettably, our desire to act supersedes our desire to understand the implication of those actions. It is based on a fundamental idea that we know what we are doing and can predict its outcomes.
Yet, with social innovation, we are often doing things for the first time, or combining known elements into an unknown corpus, or repurposing existing knowledge/skills/tools into new settings and situations. This is the innovation part. Novelty is pervasive, and with that comes opportunities for learning as well as the potential for us to do good as well as harm.
An Ethical Imperative?
There are reasons beyond product quality and accountability that one should take evaluation and strategic design for social innovation seriously.
Design thinking involves embracing failure (e.g., “fail often to succeed sooner” is the mantra espoused by product design firm IDEO) as a means of testing ideas and prototyping possible outcomes to find an ideal fit. This approach suits ideas and products that can be safely isolated from their environment so the variables associated with outcomes can be measured. It works well with benign issues, but gets more problematic when such interventions are aimed at the social sphere.
Unlike technological failures in the lab, innovations involving people do have costs. Clinical intervention trials go through a series of phases — preclinical through five stages to post-testing — to test their impact, gradually and cautiously scaling up, with detailed data collection and analysis accompanying each step, and it’s still not perfect. Medical reporter Julia Belluz and I recently discussed this issue with students at the University of Toronto as part of a workshop on evidence, and noted that as the complexity of the subject matter increases, the ability to rely on controlled studies decreases.
Complexity is typically the space that much of social innovation inhabits.
As the social realm — our communities, organizations and even global enterprises — is our lab, our interventions impact people ‘out of the gate’, and because this occurs in an inherently complex environment, I argue that the imperative to evaluate and share what is known about what we produce is critical if we are to innovate safely as well as effectively. Alas, we are far from that in social innovation.
Barriers and Opportunities for Evaluation-powered Social Innovation
There are a series of issues permeating the social innovation sector in its current form that need to be addressed if we are to better understand our impact.
- Becoming more than “the ideas people”: I heard this phrase used at Bryan Boyer’s talk hosted by the Social Innovation Generation group at MaRS. The moderator for the talk commented on how she wished she had taken more interest in statistics in university because they would have helped in assessing some of the impact of the work done in social innovation. There is a strong push for ideas in social innovation, but perhaps we should also include those who know how to make sense of and evaluate those ideas in our stable of talent and required skillsets for design teams.
- Guiding Theories & Methods: Having good ideas is one thing; implementing them is another. But tying them both together is the role of theory and models. Theories are hypotheses about the way things happen, based on evidence, experience, and imagination. Strategic designers and social innovators rarely refer to theory in their presentations or work. I have little doubt that there are theories being used by these designers, but they are implicit, not explicit, and thus cannot be evaluated, tested or challenged by others. Some, like Frances Westley, have made the theories guiding their work explicit, but this is a rarity. Social theory, behaviour change models and theories of discovery beyond just Rogers’ Diffusion of Innovations must be introduced to our work if we are to make better judgements about social innovation programs and assess their impact. Indeed, we need the kind of scholarship that applies theory and builds it as part of the culture of social innovation.
- Problem scope and its methodological challenges. Scoping social innovation is an immensely wide and complicated task, requiring methods and tools that go beyond simple regression models or observational techniques. Evaluators working in social innovation require a high-level understanding of diverse methods and, I would argue, cannot be comfortable in only one methodological tradition unless they are part of a diverse team of evaluation professionals, something that is costly and resource intensive. Those working in social innovation need to live the very credo of constant innovation in methods, tools and mindsets if they are to be effective at managing the changing conditions in social innovation and strategic design. This is not a field for the methodologically disinterested.
- Low attention to rigor and documentation. When social innovators and strategic designers do assess impact, too often there is little attention to methodological rigor. Ethnographies are presented with little attention to sampling, selection or data combination; statistics are used sparingly; and connections to theory or historical precedent are absent. Of course, there are exceptions, but they are hardly the rule. Building a culture of innovation within the field relies on the ability to take quality information from one context and apply it to another critically, and if that information is absent, incomplete or of poor quality, the possibility of effective communication between projects and settings diminishes.
- Knowledge translation in social innovation. There are few fora for regularly sharing what we know in the kind of depth necessary to advance deep understanding of social innovation. There are a lot of one-off events, but few regular conferences or societies where social innovation is discussed and shared systematically. Design conferences tend towards the ‘sage on the stage’ model that favours high-profile speakers and agencies, while academic conferences favour research that is less applied or action-oriented. Couple that with the client-consultant work common in the social innovation space and the resulting knowledge is often protected or privileged, with little incentive to add a KT component to the budget.
- Poor cataloguing of research. To the last point, we have no formalized methods of determining the state of the art in social innovation, as research and practice are not catalogued. Groups like the Helsinki Design Lab and Social Innovation Generation, with their vigorous attention to dissemination, are the exception, not the rule. Complicating matters is the interdisciplinary nature of social innovation. Where does one search for social innovation knowledge? What are the keywords? Innovation is not a good one (too general), yet neither are more specialized disciplinary terms like economics, psychology, geography, engineering, finance, enterprise, or health. Without a shared nomenclature and networks to develop such a project, the knowledge that is made public is often left to the realm of unknown unknowns.
Moving forward, the challenge for social innovation is to find ways to make what it does more accessible to those beyond its current field of practice. Evaluation is one way to do this, but in pursuing such a course, the field needs to create space for evaluation to take place. Interestingly, FSG and the Center for Evaluation Innovation in the U.S. recently delivered a webinar on evaluating social innovation with the principal focus being on developmental evaluation, something I’ve written about at length.
Developmental evaluation is one approach, but as noted in the webinar: an organization needs to be a learning organization for this approach to work.
The question that I am left with is: is social innovation serious about social impact? If it is, how will it know it achieved it without evaluation?
And to echo my previous post: if we believe learning is essential to strategic design we must ask: How serious are we about learning?
Tough questions, but the answers might illuminate the way forward to understanding social impact in social innovation.
* Photo credit from Deviant Art innovation_by_genlau.jpg used under Creative Commons Licence.
Networking is as big a buzzword as you can find these days, for good reason: networks solve a lot of problems by connecting people and leveraging the knowledge of many for social benefit (and sometimes not). Simply put: networks allow us to do more.
But more can also be a problem.
It is not just that there is a lot of information out there, which creates its own set of problems; it’s that there is also so much to DO with this information. With all of the data streaming at us from social networks like Twitter, Facebook, Google Buzz, LinkedIn, Ning, and the myriad other ways in which social media allows us to connect, it is hard to stay on top of it all. Even with good filters, the wealth of information available on even very narrow topics can be remarkable. I find this creates a temptation to try to get to it all. How often have you heard people lament about not being able to catch up on all of their blogs, tweets and magazine articles — let alone their conversations with friends, colleagues and loved ones?
While sociologist Mark Granovetter’s concept of the ‘strength of weak ties’ has been promoted vigorously in the social sciences and business to justify the potential for social networking, the value of the concept has some clear limitations that often get dismissed in the hype. One of the risks is that the time and energy it takes to invest in social networks broadly can take you away from creating strong ties. Think of how we socialized a generation ago: we had a close set of friends and family and associates, a few pen and phone pals, and some who we saw at occasional events like family reunions and conferences. Now, we “see” them daily, maybe hourly. That creates a lot of encounters, but at a superficial level for the most part.
While this is good for some things, particularly getting simple messages out quickly, it is a problem for dealing with complex information or messages with multiple layers and potential meanings. Those require a depth of contact that many networking tools or “networking events” don’t encourage. For those kinds of complex problems, we need tighter bonds and more meaning-making opportunities in our networks.
Mario Luis Small from the University of Chicago has explored the role of social networks and how they benefit those with little social capital. His recent book, Unanticipated Gains: Origins of network inequality in everyday life, looked in depth at how social capital could be grown within a community of low-income, New York City mothers:
Social capital theorists have shown that some people do better than others in part because they enjoy larger, more supportive, or otherwise more useful networks. But why do some people have better networks than others? Unanticipated Gains argues that the answer lies less in people’s deliberate “networking” than in the institutional conditions of the churches, colleges, firms, gyms, childcare centers, schools, and other organizations in which they happen to participate routinely. The book illustrates and develops this argument by exploring the experiences of New York City mothers whose children were enrolled in childcare centers.
Unanticipated Gains examines why scores of these mothers, after enrolling their children in centers, dramatically expanded both the size and usefulness of their personal networks, often in ways they did not expect. Whether, how, and how much the mothers’ networks were altered—and how useful these networks were—depended on the apparently trivial but remarkably consequential practices and regulations of the centers, from the structure of their PTOs, to the regularity of their fieldtrips to amusement parks and zoos, to their ostensibly innocuous rules regarding pick-up and drop-off times.
Relying on scores of in-depth interviews with mothers, quantitative data on both mothers and centers, and detailed case studies of other routine organizations (from beauty salons and bath houses to colleges and churches), Unanticipated Gains shows that how much people gain from their connections depends substantially on institutional conditions they often do not control, and on everyday processes they may not even be aware of. (Original post here)
Although Small was not intending to write about what makes a network ‘sticky’, to use Gabriel Szulanski‘s term, he winds up with a set of recommendations that do just that. Indeed, Small’s suggestions – create intimate, cooperative, active, stable, yet flexible and adaptive networks — make networks sticky (and resilient) while mitigating the effects of creating widening gaps between the well-connected and capital-rich and the rest.
Small suggests that there are 7 ingredients that help dampen harmful, unintended consequences of networks:
1. Create frequent opportunities for interaction;
2. Ensure frequent and regular interactions between agents;
3. Interactions must be long lasting and exist beyond simple, quick exchanges;
4. Interactions are minimally competitive;
5. Interactions are maximally cooperative;
6. Intrinsic motivation consistent with that of the organizations or networks drive interactions and encourage engagement over time;
7. Extrinsic motivators must also be present to support the maintenance of ties over time.
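Small’s seven ingredients read like a checklist, and one could imagine auditing a network against them. The sketch below is my own hypothetical framing, not something from Small’s book: a simple Python structure that records which ingredients a network exhibits and tallies a crude “stickiness” score.

```python
# Hypothetical checklist based on Small's seven ingredients.
# The class and scoring scheme are illustrative, not from the book.
from dataclasses import dataclass

@dataclass
class NetworkProfile:
    frequent_opportunities: bool   # 1. frequent opportunities for interaction
    regular_interactions: bool     # 2. frequent and regular interactions
    long_lasting: bool             # 3. beyond simple, quick exchanges
    minimally_competitive: bool    # 4. interactions minimally competitive
    maximally_cooperative: bool    # 5. interactions maximally cooperative
    intrinsic_motivation: bool     # 6. intrinsic motivation drives engagement
    extrinsic_motivators: bool     # 7. extrinsic motivators sustain ties

    def stickiness_score(self) -> int:
        """Count how many of the seven ingredients are present."""
        return sum(vars(self).values())

# A hypothetical book club: strong on most ingredients,
# but no motivation intrinsic to the group itself.
book_club = NetworkProfile(True, True, True, True, True, False, True)
print(book_club.stickiness_score())  # 6 of 7 ingredients present
```

A score like this is obviously a blunt instrument, but the exercise of filling in the checklist forces a network designer to name which ingredients are missing — which is where design attention might go.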
Perhaps it is time to consider employing some design thinking towards creating stickier, rather than bigger or broader, networks.
For more reading on the phenomenon of “stickiness”, consider the following:
Szulanski, G. (2000). The Process of Knowledge Transfer: A Diachronic Analysis of Stickiness. Organizational Behavior and Human Decision Processes.
Szulanski, G. (2003). Sticky knowledge: Barriers to knowing in the firm. London: Sage.
Szulanski, G., & Jensen, R. (2004). Overcoming stickiness: An empirical investigation of the role of the template in the replication of organizational routines. Managerial and Decision Economics, 25(67), 347-363.