If a health scare manifested itself in the world and there were no journalists to cover the story, what would the impact on the public be?
That question lingered with me throughout the start of the 2013 Ontario Public Health Convention (TOPHC), which began with a morning dedicated to improving public health communication. Opening the conference was a series of linked keynote presentations from a risk communications researcher (Julie Leask); a former newspaper editor, journalism professor and social media advocate (Wayne MacPhail); and one of Canada’s leading health specialist reporters (Helen Branswell).
The Academic’s Perspective
Keynote speaker Julie Leask (pictured above) and her colleague Dr. Claire Hooker (a good friend of mine) have been looking at the ways journalists engage in risk communication with the public on matters of public health, from immunization to SARS to understanding the health priorities of professionals. In 2010 they published a paper examining how the media covers health topics and argued that the health professions need to be aware of how stories are made and communicated, and to be active partners with reporters, if they are to have a positive impact in moments of health scares.
“It’s too late when the crisis comes up” – Julie Leask speaking on the need for public health to get engaged with the public using social media
In a previous post I wrote about how journalism is the fourth estate of medicine and public health. Journalists are the storytellers that the public listens to, charged with looking at a problem from many perspectives to develop a coherent narrative that speaks to their audience. These are qualities that most scientists and public health professionals don’t bring to their jobs, nor are they always expected to or even should. Journalists play an important role for this very reason.
Nonetheless, the health sector has an uneasy relationship with journalism. Health professionals — particularly researchers — poorly understand the world of journalists and sometimes view the profession with suspicion. Julie Leask and her colleagues have found this to be the case, but argue that it is no reason to shy away from engaging the public using the tools that journalists are comfortable with. She spoke to the invaluable role of specialist health journalists as not only producers of high-quality health content in the news, but also guardians against low-quality content making it into the press. Drawing on her research, she pointed out that specialist health journalists act as key gatekeepers for quality in the health landscape by educating their peers and editors on health issues, which are often complex and require more than a passing understanding of context to communicate well.
The Editor’s Perspective
To this end, Wayne MacPhail, a former editor of the Hamilton Spectator, argued that public health has a near ethical imperative (my choice of term) to be in the social media space, not only to promote good health but to counter and challenge myths and misinformation. This isn’t some naive pronouncement that we’ll eliminate the snake-oil sales or quackery that proliferates in the public sphere and media, but rather a simple observation that we have no chance of making an impact if we are not engaged in the space at all.
Like Leask, MacPhail says that it’s too late to engage the public when a health crisis comes up and that public health needs to be in the conversation stream before that happens.
The Reporter’s Perspective
Helen Branswell, a reporter from The Canadian Press, rounded out the panel and spoke frankly about the dwindling resources and rapidly changing landscape in journalism. She was on the front lines of reporting the 2003 SARS outbreak and showed a picture taken during that time of an empty newsroom, remarking that the scene is the same now, only for different reasons (limited budgets due to decreased ad revenue and the related shift to digital information on the web being two such reasons, among others).
Branswell paints a bleak picture of the present and future in many areas of health journalism. Stories are increasingly covered by general reporters who may treat a health story the same as they would a traffic incident, a political story, or a crime; journalists who are unlikely to know the context and details that are critical to communicating the nuances of health matters. Interns are replacing some full-time or veteran reporters in the newsroom, and there are only a handful of specialists in practice.
Pressures from time, budget and competing interests in the newsroom are all contributing to an environment where quality health reporting is threatened.
I asked the panel what they thought public health should do to ensure that health stories are reported well, and there were few answers. Helen Branswell said, truthfully and somewhat cheekily: “buy newspapers”. She reminded us that we should be paying for quality content and supporting good journalism in practice if we want it to survive, which is hard to argue against.
But that alone will not do all the work needed to preserve good journalism. I spoke to another conference attendee, a formally trained journalist now working with a research firm, about the ways in which journalists have helped other organizations craft their messages and engage the public, citing the Calgary Police Service’s social media team as an example. This pointed to ways in which journalists can make a difference in matters of public health and social services.
Yet, what about investigative journalism? What about the potential conflicts that come from being paid to report on issues that might be critical of the organization that does the paying (e.g., ministries of health, departments of public health, universities and colleges, etc.)? This model doesn’t solve that, but it is at least another option.
Yet examples of public health taking up this challenge of working with journalists are few. Many still believe that social media is just another means of broadcasting, which misses the mark. Others view social media, journalism, and engaging with the public through the media with suspicion, on the grounds that much of the work out there is not evidence-based.
But what evidence did we have when SARS hit us 10 years ago? We had lots of epidemiological data on infectious disease, but that was only part of the story. Many of the leading health scientists were adapting their models, creating new ones and only after the disease left did we really have a full sense of what happened. We learned as we went.
This is what social media is all about, too. The lessons from major health events — disasters, outbreaks, and pandemics — parallel social media: it is innovation space at its clearest, and there is thus an imperative to approach it with the tools and lenses that best support movement within a complex adaptive system. From a communications standpoint, social media and the tools of modern journalism (and the style of communication they employ) are one thing to consider. Developmental design and evaluation, combined with systems thinking, are also among these tools.
Linear thinking and action will not work in a complex system, and as this panel pointed out, there is much reason for concern if we are not prepared to communicate, and to support those who communicate well, when — not if — such times come back.
Ten years after SARS, how much better off are we? And if we are better off, how are we communicating that to the public?
Journalists occupy an important, yet often unacknowledged, role in the health system by providing a dispassionate account of the system’s strengths, weaknesses, and opportunities to the public. It is through journalists that much of the research we scientists and practitioners produce gets communicated to the audiences likely to use it. This fourth estate is also a place where hard questions can be asked and answered, holding governments, business and the health system itself to account, because journalists operate apart from this space, unlike scientists and clinicians. We are at risk of losing this, and it’s time to consider what that means for our collective health and wellbeing.
The news business is going through a massive upheaval, part of a larger overall disruption in media. Many newspapers are reducing the size of their print offerings, publishing less frequently or ceasing operations altogether.
This reduction in the capacity and size of the fourth estate raises two simple questions: Who will hold health scientists, clinicians, pharmaceutical companies, health product manufacturers, and policy makers to account? And who will tell the stories of science, health and medicine in public?
While we have some activist academics doing great work influencing broader audiences like policy makers, they are exceptions, not the norm. Stanton Glantz, a major tobacco control champion from UCSF who has taken to blogging as a means of communicating with professionals and the public directly, is one such person. But Stan is atypical: he holds a tenured position at a major university, something he has acknowledged protected him when pursuing the tobacco companies’ withholding of evidence in the 1990s and beyond. Many faculty (particularly younger ones) are not this secure, and even fewer independently funded scientists are. Academia is changing, and not in ways that favour security and stability, which has implications for the kind of stories that get told.
Journalists have traditionally relied on protection from their publisher or producer under the name of journalistic freedom (the fourth estate) as a key pillar of their profession. It’s hard to imagine the Watergate scandal coming to light had Bob Woodward, Carl Bernstein and the other reporters working for the Washington Post, Time Magazine and the New York Times not had the resources, stability and support provided by their newspapers. But what happens when these resources are no longer available, or there are no institutions to support journalists in serving as watchdogs who hold people or institutions to account for what they do and don’t do?
Are ‘Monkeys in Coats’ A Healthy Story?
It’s been suggested that the Internet will take care of this. Citizen journalists, armed with camera-laden handsets connected to social media will fill the news gap. For example, it was citizens, not journalists, who first captured the story of Darwin the monkey, dressed in a shearling coat, walking around an Ikea parking lot in Toronto that went viral on a global scale on December 10, 2012. This is great for those interested in simian fashions and retail adventures, but the reason it was captured was because the story was obvious and in the face (or at the ankles) of those who told it. (For those of you not familiar with Toronto, coat-wearing monkeys are not typically seen at shopping centres or anywhere around town for that matter.)
Health and medicine are not the same as monkeys wearing coats (no matter what kind of joke you want to make). There is nuance, debate and reasoning that require sustained attention and focus, which someone with an iPhone and a Twitter account is less likely to convey. Reasoned arguments for citizen journalism’s potential suggest it can complement the work of traditional journalism, not replace it. Yet is this belief in one form (citizen journalism) undermining support for the other (traditional journalism) and serving as a fix that ultimately fails? If free-and-easy content is available, how willing are publishers to pay for professional work — particularly if one group’s choice of stories (e.g., monkeys in coats) is more likely to garner the kind of attention that drives advertising than another’s (e.g., health care financing)? Only one of these stories will impact our collective health.
Why does this matter? Trained journalists are required to be good communicators to a broad audience; scientists are not. Clinicians are slightly better, but decades of research have shown that their communication remains highly problematic across areas of practice. This will not be solved overnight, if at all. Scientists and clinicians have told me they are already burdened with enough job expectations, and adding knowledge translation skills to that list is asking too much.
As I have argued previously, there is a valued place for synthesists in research: those who are good at taking ideas and weaving them together into an accessible narrative. Journalists are ideally suited to play or support this role. They do the job that many scientists can’t or won’t do, and they have better tools, skills and strategies to do it. They write in a style suited to broad audiences’ needs, not to what funders, disciplinary traditions, universities, or scientific peers demand (without evidence that those methods of communication are effective). There are reasons why journalists assess the reach of their work in the thousands and social scientists in the dozens (by citations in their field of practice).
Going Deeper to See Clearer
Although we have more information about health available to us than ever before, this may not be healthy for patients. The potential for those uninformed about medical diagnostics, evidence, and the nature of health itself to make poor choices based on incomplete, incorrect or overwhelming information is high. Further, without the kind of dispassionate examination of evidence in a synthetic manner that is tied to the way in which that evidence is expressed in the world through public opinion, policy making and healthcare practices, we lose a major accountability mechanism and means of informing public discourse.
In October I co-delivered a workshop on health evidence for students at the University of Toronto with the 2012 Hancock Lecturer and journalist Julia Belluz. Julia writes the Science-ish blog for Maclean’s magazine and is an Associate Editor with the Medical Post. Julia’s lecture was on the role that social media plays in our health system, and how its power to leverage the attention of the masses — for good and ill — is shaping the public understanding of health and medicine, often in the absence of evidence for the effects of conditions, processes, and practice. The lecture is summarized online on Science-ish beginning here.
Reading through the lecture notes, one sees a depth of study that would be unlikely to be found anywhere within the formal health system. The reason is that it blends evidence with commentary and observation with carefully selected sources, and takes a perspective that seeks to inform a wide, not narrow, audience in both practical and intellectually stimulating ways. Taken together, this is a collection of activities that are not within the scope of practice for scientists and practitioners. There are reasons why the greatest contributions to public discourse on many scientific issues have come from journalists, not the scientists who generate the research. They tell the story better.
Malcolm Gladwell, Steven Johnson, Mitch Waldrop, Julia Belluz, Andre Picard and others are a big part of the reason that most of those who vote to support funding of science, donate to research-related causes, and fight for policies to keep us healthy know of the research that backs those ideas up.
Imperfect as journalism is, it serves the public when done with integrity. It’s worth spending some time considering what can be done to support the fourth estate so it supports us.
Photo credit: DBduo Photography on Flickr used under Creative Commons Licence.
Earlier this week I had the pleasure of attending talks from Bryan Boyer of the Helsinki Design Lab and learning about the remarkable work they are doing in applying design to government and community life in Finland. While the talks focused on the application of design thinking, I found myself drawn to the issue of evaluation and the discussion around it when it came up.
One of the points raised was that design teams are often working with constraints that emphasize the designed product rather than its extended outcome, making evaluation a challenge to adequately resource. Evaluation is not a term that comes up often in discussions of design, but as the moderator of one talk suggested, maybe it should.
I can’t agree more.
Design and Evaluation: A Natural Partnership
It has puzzled me to no end that we have these emergent fields of practice aimed at social good — social finance and social impact investing, social innovation, social benefit (PDF) — that have little built into their culture to assess what kind of influence they are having beyond the basics. Yet social innovation is rarely about simple basics; its influence is likely far larger, for better or worse.
What is the impact being invested in? What is the new thing of value being created? What is the benefit, and for whom? What else happened because we intervened?
Evaluation is often the last thing to go into a program budget (along with knowledge translation and exchange activities) and the first thing to get cut (along with the aforementioned KTE work) when things go wrong or budgets get tightened. Regrettably, our desire to act supersedes our desire to understand the implications of those actions. It is based on a fundamental idea that we know what we are doing and can predict its outcomes.
Yet with social innovation we are often doing things for the first time, or combining known elements into an unknown corpus, or repurposing existing knowledge, skills and tools into new settings and situations. This is the innovation part. Novelty is pervasive, and with it come opportunities for learning as well as the potential for us to do good as well as harm.
An Ethical Imperative?
There are reasons beyond product quality and accountability that one should take evaluation and strategic design for social innovation seriously.
Design thinking involves embracing failure (e.g., “fail often to succeed sooner” is the mantra espoused by product design firm IDEO) as a means of testing ideas and prototyping possible outcomes to find an ideal fit. This works for ideas and products that can safely be isolated from their environment so that the variables associated with outcomes can be measured, if considered. It works well with benign issues, but gets more problematic when such interventions are aimed at the social sphere.
Unlike technological failures in the lab, innovations involving people do have costs. Clinical intervention trials go through a series of phases — from preclinical through five stages to post-testing — to test their impact, gradually and cautiously scaling up with detailed data collection and analysis accompanying each step, and it’s still not perfect. Medical reporter Julia Belluz and I recently discussed this issue with students at the University of Toronto as part of a workshop on evidence, and noted that as the complexity of the subject matter increases, the ability to rely on controlled studies decreases.
Complexity is typically the space that much of social innovation inhabits.
As the social realm — our communities, organizations and even global enterprises — is our lab, our interventions impact people out of the gate. Because this occurs in an inherently complex environment, I argue that the imperative to evaluate and share what is known about what we produce is critical if we are to innovate safely as well as effectively. Alas, we are far from that in social innovation.
Barriers and Opportunities for Evaluation-powered Social Innovation
There are a series of issues that permeate through the social innovation sector in its current form that require addressing if we are to better understand our impact.
- Becoming more than “the ideas people”: I heard this phrase used at Bryan Boyer’s talk hosted by the Social Innovation Generation group at MaRS. The moderator for the talk commented on how she wished she’d taken more interest in statistics in university because it would have helped in assessing some of the impact of the work done in social innovation. There is a strong push for ideas in social innovation, but perhaps we should also include those who know how to make sense of and evaluate those ideas in our stable of talent and required skillsets for design teams.
- Guiding Theories & Methods: Having good ideas is one thing; implementing them is another. Tying the two together is the role of theory and models. Theories are hypotheses about the way things happen, based on evidence, experience, and imagination. Strategic designers and social innovators rarely refer to theory in their presentations or work. I have little doubt that there are theories being used by these designers, but they are implicit, not explicit, and thus cannot be evaluated, tested or challenged by others. Some, like Frances Westley, have made the theories guiding their work explicit, but this is a rarity. Social theory, behaviour change models and theories of discovery beyond Rogers’ Diffusion of Innovations must be introduced to our work if we are to make better judgements about social innovation programs and assess their impact. Indeed, we need the kind of scholarship that applies theory and builds it as part of the culture of social innovation.
- Problem scope and methodological challenges: Scoping social innovation is an immensely wide and complicated task requiring methods and tools that go beyond simple regression models or observational techniques. Evaluators working in social innovation require a high-level understanding of diverse methods and, I would argue, cannot be comfortable in only one methodological tradition unless they are part of a diverse team of evaluation professionals, something that is costly and resource-intensive. Those working in social innovation need to live the very credo of constant innovation in methods, tools and mindsets if they are to be effective at managing the changing conditions in social innovation and strategic design. This is not a field for the methodologically disinterested.
- Low attention to rigor and documentation: When social innovators and strategic designers do assess impact, too often there is little attention to methodological rigor. Ethnographies are presented with scant attention to sampling, selection or data combination; statistics are used sparingly; and connections to theory or historical precedent are absent. Of course, there are exceptions, but they are hardly the rule. Building a culture of innovation within the field relies on the ability to take quality information from one context and apply it to another critically, and if that information is absent, incomplete or of poor quality, the possibility for effective communication between projects and settings diminishes.
- Knowledge translation in social innovation: There are few fora to regularly share what we know in the kind of depth necessary to advance a deep understanding of social innovation. There are a lot of one-off events, but few regular conferences or societies where social innovation is discussed and shared systematically. Design conferences tend towards the ‘sage on the stage’ model that favours high-profile speakers and agencies, while academic conferences favour research that is less applied or action-oriented. Couple that with the client-consultant work that is common in social innovation and we get knowledge that is protected or privileged, and often little incentive to add a KT component to the budget.
- Poor cataloguing of research: To the last point, we have no formalized methods of determining the state of the art in social innovation because research and practice are not catalogued. Groups like the Helsinki Design Lab and Social Innovation Generation, with their vigorous attention to dissemination, are the exception, not the rule. Complicating matters is the interdisciplinary nature of social innovation. Where does one search for social innovation knowledge? What are the keywords? “Innovation” is not a good one (too general), but neither are more specialized disciplinary terms like economics, psychology, geography, engineering, finance, enterprise, or health. Without a shared nomenclature and networks to develop such a project, the knowledge that is made public is often left to the realm of unknown unknowns.
Moving forward, the challenge for social innovation is to find ways to make what it does more accessible to those beyond its current field of practice. Evaluation is one way to do this, but in pursuing such a course, the field needs to create space for evaluation to take place. Interestingly, FSG and the Center for Evaluation Innovation in the U.S. recently delivered a webinar on evaluating social innovation with the principal focus being developmental evaluation, something I’ve written about at length.
Developmental evaluation is one approach, but as noted in the webinar: an organization needs to be a learning organization for this approach to work.
The question that I am left with is: is social innovation serious about social impact? If it is, how will it know it achieved it without evaluation?
And to echo my previous post: if we believe learning is essential to strategic design we must ask: How serious are we about learning?
Tough questions, but the answers might illuminate the way forward to understanding social impact in social innovation.
* Photo credit from Deviant Art innovation_by_genlau.jpg used under Creative Commons Licence.
Human service providers live in paradox by stressing best practices while wishing for some recognition of the unique qualities that they bring. Can being unique be the standard we need?
Recently, I was with a colleague discussing the challenges that universities are facing with respect to the twin issues of curtailing costs and expanding revenues. Standing out is getting to be more important than ever.
What captured our attention was the great irony that the drive to standardize is strongest at a time when the marketplace is so crowded. The very competitive advantage that comes with being unique is being subsumed under pressure to be like everyone else.
In education, health sciences, and social and human services we are seeing this remarkable drive towards things such as accreditation of programs, implementation of best practices throughout the curriculum, and certification of activities that once did not require it. The underpinning logic is that having some kind of benchmark and standard that all programs or activities of a similar nature can adhere to will ensure that there is sufficient quality for the consumer of such programs.
While there are certainly activities where standards and consistency work well (for example, I don’t really want the pilot of my next flight expressing his creativity through the way he maneuvers the airplane), these standards or best practices are based on a view that the world is relatively simple, or at least principally ordered. The Cynefin Framework can help identify the conditions where these types of systems thrive.
However, in fields like health promotion, community psychology, the arts, much of social work, and those working in the open systems found in communities and many organizations, order is a fleeting condition and unstable.
To be good at what you do in these domains, whether it is hosting a design charrette or doing community-based research, one has to pay considerable attention to subtle differences and adjust strategy accordingly. This customization of activity — which does not require a wholesale abandonment of what has been done before — is the very thing that signifies empathy, creates connection, and represents a true mark of competence in this domain. Being different across settings and conditions — standing out — is what makes a service outstanding.
This tension or paradox is something that could benefit from much more exploration and might serve as something for universities and professions to consider taking seriously. Ironically, the establishment of much of what “best practice” seeks to achieve might be accomplished through celebrating difference.
In the social media and marketing world there is a concept called “brand democratization”, which refers to the notion of having your customers contribute to and partly shape a brand’s identity. Marty Neumeier, who has written extensively on branding, asserts that a brand isn’t what the producers of the product say it is, but rather what the customers say it is.
The notion that the meaning and identity of a brand is outside the control of those who develop a product is unsettling for many in business. Social media only amplifies this feeling. A social media consultant I work with once relayed a story about a client (a big company) who asked how they could use social media to control their message more effectively, and how unsettled they were to hear that “control” is not what social media is about, frustrated at the thought that they were no longer in the driver’s seat.
To some extent, we are seeing the same thing take place in the health sector, particularly in areas of great complexity. As complex chronic conditions become more prevalent and the number of people with multiple chronic conditions represent a larger proportion of the population (a near inevitability in societies with aging demographics and better health care), complexity will gain a greater place in our health care policy discourse.
Evidence-based medicine holds a very clear sense of what is good and not-so-good quality content. Hierarchies abound, and medical students are taught early on what the best evidence is and how it is generated. The problem with the “best evidence” in health care is that it typically comes from studies focused within a rather narrow contextual band of activity, using designs that aim for simplicity, not complexity. In standard research programs the aim is to reduce variation, not embrace it.
This is nearly the opposite of complex systems, where variation is, to a point, the key to learning and adaptation. Making sense within a complex system requires deep appreciation of context and an understanding of one’s position within the system. However, because such positions are relative, the ability to hold “expertise” in a system is limited. Indeed, there are many who occupy positions where they have great knowledge and the ability to re-interpret and generate evidence as opposed to only a few.
Does this create evidence democratization? When there is so much information from many sources, a need for contextualized feedback and many actors with useful experience and knowledge occupying different positions related to a problem, it seems that the answer is “yes”.
Social media operates within an environment of complexity due to the wide-scale involvement of diverse actors working in a coordinated, yet self-organized, manner across many boundaries within multiple overlapping contexts. What is “true” within a social media sphere is only made so by the application of that knowledge within a particular context. A blog only has meaning to its creator and her or his audience, not the entire system. Yet the actors engaged with that content may take it and rework it, retweet it, or post it in a new environment, where it is transformed into something else. If that “something else” has value, it lives onward.
In health contexts, that value might be very different depending on the condition and the manner in which that knowledge is applied. From a traditional evidence-based health approach, this is utterly terrifying. How can evidence be re-interpreted? What are the harms of taking something from one context and applying it to another for which it wasn’t designed? These are real concerns, yet it is hard to imagine that there isn’t also some potential value in exploring how this plays out in chronic and complex conditions, given that the evidence is rarely that “hard” going in.
Are we on the cusp of a new wave of evidence democratization? And if so, are we going to be healthier as a result of it? More innovative? Or in deep trouble?
It’s not every day that you see complexity science propped up in big type in a major newspaper like the New York Times, but a couple of weeks ago that’s what happened. The article, titled “It’s Complicated — Making Sense of Complexity”, is enough to get a professor like me all giddy.
The piece, authored by frequent contributor David Segal, took a light look at the distinction between complicated and complex situations, something I’ve discussed in other posts, pointing out that the confusion between these two concepts often leads us into trouble. To illustrate this, Segal quotes Dr. Brenda Zimmerman from York University’s Schulich School of Business, one of the leading proponents of complexity science in social systems like organizations. Dr. Zimmerman states:
What we need, suggests Brenda Zimmerman, a professor at Schulich School of Business in Ontario, is a distinction between the complicated and the complex. It’s complicated, she says, to send a rocket to the moon — it requires blueprints, math and a lot of carefully calibrated hardware and expertly written software. Raising a child, on the other hand, is complex. It is an enormous challenge, but math and blueprints won’t help. Performing hip replacement surgery, she says, is complicated. It takes well-trained personnel, precision and carefully calibrated equipment. Running a health care system, on the other hand, is complex. It’s filled with thousands of parts and players, all of whom must act within a fluid, unpredictable environment. To run a system that is complex, it’s not enough to get the right people and the ideal equipment. It takes a set of simple principles that guide and shape the system. For instance: Teach everyone the best practices of doctors who are really good at hip replacement surgery.
So here’s a leading proponent of complexity science, providing clear examples of the distinctions between complex and complicated in a major newspaper, and finishing it off with a health example. What more could I want?
Yet, I was disappointed by this piece, and particularly by the examples described above. The reason was not that they were wrong or inaccurate per se, but rather that they are well-worn (to complexity scientists) to the point of being a ‘pat response’, and it is that feeling that stirs concern.
The simple principles concept comes from work done on ‘Boids‘ and other simulations of complex systems where lots of activities happen simultaneously to produce order out of a situation that is ripe for chaos. Research on flocking or swarming behaviour shows that, despite the volume of actors in the system (like a flock of sparrows), a few simple rules can guide complex behaviour, almost rein it in. With bird flocks, rules such as:
1. Stay equidistant between the closest other members of the flock;
2. Avoid hitting other objects, but keep moving;
3. Steer towards the average position of your flock mates
This produces something that researchers believe approximates the behaviour simulated in the video below:
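For readers who like to see these rules in action, here is a minimal sketch of such a flocking simulation in Python. The function names, rule weights and parameters are all illustrative assumptions, not taken from any particular Boids implementation; the point is only that three short update rules, applied to every boid each tick, are enough to generate coordinated motion.

```python
# A minimal Boids-style sketch of the three flocking rules above:
# keep distance from neighbours, avoid collisions while moving,
# and steer towards the average position of flock mates.
# All weights (0.01, 0.05, etc.) are illustrative, not canonical.
import math
import random

def step(boids, sep_dist=1.0, max_speed=0.5):
    """Advance every boid (x, y, vx, vy) one tick using the simple rules."""
    new_flock = []
    for i, (x, y, vx, vy) in enumerate(boids):
        others = [b for j, b in enumerate(boids) if j != i]
        # Cohesion: steer towards the average position of flock mates.
        cx = sum(b[0] for b in others) / len(others)
        cy = sum(b[1] for b in others) / len(others)
        vx += (cx - x) * 0.01
        vy += (cy - y) * 0.01
        # Separation: move away from any boid that is too close.
        for ox, oy, _, _ in others:
            d = math.hypot(x - ox, y - oy)
            if 0 < d < sep_dist:
                vx += (x - ox) / d * 0.05
                vy += (y - oy) / d * 0.05
        # Alignment: match the average velocity of the flock.
        avx = sum(b[2] for b in others) / len(others)
        avy = sum(b[3] for b in others) / len(others)
        vx += (avx - vx) * 0.05
        vy += (avy - vy) * 0.05
        # Keep moving, but cap the speed.
        speed = math.hypot(vx, vy)
        if speed > max_speed:
            vx, vy = vx / speed * max_speed, vy / speed * max_speed
        new_flock.append((x + vx, y + vy, vx, vy))
    return new_flock

random.seed(1)
flock = [(random.uniform(-5, 5), random.uniform(-5, 5), 0.0, 0.0)
         for _ in range(20)]
for _ in range(200):
    flock = step(flock)
```

Running the loop yields the familiar emergent clustering: no boid “knows” the flock’s shape, yet cohesive motion appears from the local rules alone, which is precisely the point of the analogy.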
This is a theory that has been explored with many species and found to be robust enough to warrant serious consideration. The problem comes when we take these same principles and apply them to human systems. Unlike birds or fish, we are actually quite bad at following rules, even the simplest of them. It is for this reason that many best practice efforts in health fail to gain widespread adoption. And while there is a movement afoot to use simple rules like checklists to guide certain behaviour in health and other fields, the cases in which these checklists work, like surgery, are ones that are complicated, not complex.
Human and social systems might be described as ultracomplex because they are governed by chaos, complexity, complicatedness and simplicity at the same time (for those interested in the relationship between these, I’d recommend studying the Cynefin Framework developed by Dave Snowden and the folk at Cognitive Edge). We humans have rules, but we apply them inconsistently depending on the context, which includes person, place and time. Bees will create elaborate hive behaviour that resonates with complex systems, yet bees don’t worry about their self-esteem, tend not to create elaborate myths to guide their collective actions, and don’t empathize with the plight of other insects. Humans’ ability to self-reflect, engage in metacognition, empathize and morally reason makes us different from other natural phenomena, and makes it problematic to apply the rules of complexity to human systems without considerable reservations or contextual binding.
I write this as an avid believer in the potential applicability of the laws and rules of complexity from the natural world to the human one, but also as one troubled by how quickly we systems scholars apply these concepts without deeper thought about the theoretical and empirical problems posed by transferring evidence from domain to domain. There is relatively little high-quality evidence on the use of complexity science as a guiding framework for human action based on studies with humans, at least compared with the bodies of evidence available on similarly tricky subjects. And yet, conceptually, complexity-based concepts like emergence, sensitivity to initial conditions, self-organization and fractal patterning often do a better job of explaining plausible connections within human systems than much of normal, linear science.
Yet, possible face validity does not excuse us from the need to develop a science that can enable us to speak with confidence about the patterns we see and the meaning, if any, that comes from them. As scientists, it is our job to develop this science and take the risks that come with that charge. Perhaps once this evidence base has developed, we’ll see experts discuss the differences between complex and complicated situations citing not just analogies, but empirical research too.