Scalability is an issue that faces practitioners in both systems and design. How do we design systems at scale, and if we can, what might they look like?
Charles Leadbeater has been on a mission to find ways to make large organizations — particularly those in the social sector — more innovative. Leadbeater, like many social innovators, is hard to pin down to a single title or role. He is at once a researcher, a designer, a systems thinker, and an urbanist. As with most innovators, all and none of these descriptors truly fit.
Leadbeater was in Toronto earlier this week to speak on innovation in cross-sector collaboration for public good at the MaRS Discovery District. If you’ve seen Leadbeater speak (consider his talks on TED or elsewhere), you’ll know that you’re in for some English-style self-deprecating humour alongside much insight into the manner in which people engage in change actions within a system. You’ll also get a lesson in social design, the kind that Victor Papanek advocated for.
To my delight, Leadbeater did not disappoint. Unlike other talks, the value came less from a focused “take-home message” and more from a way of conceiving social systems through the combined lens of systems and design thinking (my terms, not his). At the heart of his talk was the challenge we face in building systems and empathy at scale. When things interact, they eventually come to be understood within a set of boundary conditions, thus forming a system. The system in turn begins to establish rules (or rather, the rules determine the system). These emergent properties shape the way the system operates or, in social situations, governs or guides the actions of those within it.
The problem is that at certain scales the very factors that create positive social relations — that is, those that yield tangible emotional, resource, or informational benefits for one or more parties — get warped by the changes that scaling brings. Thus we have the countless stories of a beloved small business, grown in a close-knit community, that becomes a multinational corporation and, in doing so, loses the intimacy and connection to its customers. Companies show this, government organizations show this, and so do cities.
The more one designs for the humans within the system in ways that create meaningful engagement, the greater the empathy. Yet deep empathy is often founded upon intimacy, which is something that is difficult to scale. Leadbeater illustrates the various ways in which firms and cities have addressed this on the graph above. In each case, there are examples that fit. In business, for example, there is Ryanair, which embodies a highly structured system with low empathy (top left corner). Opposed to that is the local farmer’s market, where one gets to know one’s grower and experiences high mutual empathy, but in a manner that is unique, idiosyncratic and non-systematic in most ways. The challenge is how to design organizations at scale that keep the cosiness of the farmer’s market without becoming a Ryanair.
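To make the two axes concrete, here is a toy sketch in Python. The quadrant labels, the numeric scores, and the placement of the examples are my own illustration of the idea, not Leadbeater’s terms or data:

```python
# A toy sketch of a systematization-vs-empathy grid.
# Quadrant labels and thresholds are illustrative assumptions, not Leadbeater's.

def quadrant(systematization: float, empathy: float) -> str:
    """Place an organization on the grid; inputs are rough 0-1 scores."""
    if systematization >= 0.5 and empathy < 0.5:
        return "highly systematized, low empathy (the Ryanair corner)"
    if systematization < 0.5 and empathy >= 0.5:
        return "intimate but idiosyncratic (the farmer's market corner)"
    if systematization >= 0.5 and empathy >= 0.5:
        return "systems with empathy at scale (the design challenge)"
    return "neither systematized nor intimate"

print(quadrant(0.9, 0.2))  # a Ryanair-style placement
print(quadrant(0.2, 0.9))  # a farmer's-market-style placement
```

The design challenge of the talk lives in the fourth branch: moving organizations up both axes at once rather than trading one off against the other.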
It struck me that the food service industry might be one of the areas where this kind of scalability can be achieved. For example, Starbucks is a gigantic corporation with shops worldwide, yet it still manages to create a homey, local feel at each one. On mornings when I go to the gym, I stop by a location to get a smoothie, and my server always remembers my order. At the location near where I take classes, they gave me a free drink because they couldn’t get the computer to apply my 10-cent reusable-cup discount. In each location, the benefits were not just in the customer service, but in the chit-chat and relationships that I have developed with the staff. It’s not as if I am speaking with the owner-operators of some of the great independent coffee shops around my city, but it is close.
The Starbucks experience was thought through, intentional and thus, by design. We can do this with other systems. The key is whether or not the systems themselves are aware enough to know when they have, indeed, become systems. Starbucks today could not be empathic in exactly the same way as it was when it was a one-shop operation at Pike Place Market in Seattle. But it can create something similar, which parallels Herbert Simon’s notion that design is the science of the artificial.
I’ve been developing and advocating for an approach to creating scale — in time and scope — that I call developmental design. A developmental design approach means shifting and changing over time, and designing things in a manner that adjusts to the complexities of dynamic systems. It brings together complexity, systems, design and the detailed feedback mechanisms that come through developmental evaluation. Leadbeater’s grid adds to this concept by giving a focus to that development: from one level of empathy to another, and from one systemic scale to another.
Through thinking in systems and acting through design, perhaps then we can create the kinds of services and organizations that respond to the challenges we face.
And designing for empathy will help us know when we’ve achieved it.
Innovation grants are a misnomer, signifying one of the greatest problems with academic science and the quest to create novel solutions to important problems.
Yesterday the Canadian Cancer Society Research Institute (the research arm of the largest charitable agency supporting cancer programming in Canada) announced its new, revamped lineup of grant-funded programs to be launched in the coming months. Among the first of these new programs is one called Innovation Grants (PDF), while another is called Impact Grants (PDF). In fact, both of these program announcements include a definition of their key term in the program call:
Innovation: The action of innovating; the introduction of novelties; the alteration of what is established by the introduction of new elements or forms. -Oxford English Dictionary
Impact: the action of one object coming forcibly into contact with another; a marked effect or influence -Oxford English Dictionary
This is impressive in how clearly and distinctly the definition of each word is linked to the program call. Why does that matter? Because too often grant program calls and their expression in reality are separate. I once served on a grant panel reviewing grants aimed at ensuring quality knowledge translation, only to find that most reviewers were comfortable accepting things like “prepare academic manuscript based on research” and “present findings at major conference” as knowledge translation goals by themselves. I was appalled.
Yet I can’t help but think, despite the good intentions here, that these new programs are going to follow a similar path. The problem is not the funder, but rather the way that funding is granted and the reliance on the system to change itself.
The innovation grants are designed to:
support unconventional concepts, approaches or methodologies to address problems in cancer research. Innovation projects will include elements of creativity, curiosity, investigation, exploration and opportunity. Successful projects may involve higher risk ideas, but will have the potential for “high reward”, i.e. to significantly impact our understanding of cancer and generate new possibilities to combat the disease by introducing novel ideas into use or practice
The mechanism by which these grants are to be decided is, as far as I can tell, peer review. For that reason alone, we can feel some confidence that these grants will fail outright. Peer review is designed to judge the quality of content by what is and has been, not by what could be. “Innovation” is about doing things differently, often markedly so. Scientific panels are about supporting incrementalism, particularly in the social and behavioural sciences.
Innovation is also about risk and the potential for failure. These are two words that are highly problematic in present-day academic science. First, if you’re a junior scientist, you may be working desperately to fund yourself and your research (and research team). The price of failure is high. If you’re not able to publish meaningfully from your research, you will have a hard time getting your next grant and keeping yourself afloat. In the public health sciences, for example, CLTAs (contractually limited-term appointments) are dominant.
I should know, as that’s the position I hold.
But the tenured faculty don’t have it much better. While they are more secure, their research teams, the graduate student trainees who rely on projects to develop their skills, and the ability to develop coherent programs of research are all at risk every time a grant is unsuccessful. There are real opportunity costs to pursuing risky ventures, so many don’t do it.
As one who has tried to be innovative with his work, and who has had the privilege (or curse, depending on the perspective) of having interests that fall into the innovation category (or the “trendy” category, to the cynic), I’ve seen how innovation is treated, and it’s not good. Innovation programs tend to split committees. I’ve had too many comments returned to me with some variant of “this is amazing, potentially leading-edge research!” alongside “the use of non-conventional methods makes this suspect” or “I don’t understand what this is supposed to do”. As one who had to endure years of questions like “I don’t see how this Internet thing has anything to do with health” in the early days of eHealth, this line of questioning is familiar to me.
I point this out not to gripe, but to illustrate how innovation can get treated in academia. When you get feedback like that described above, it is very hard to critically assess the true merits of the proposal in order to improve it. Did people not understand an idea because it could have been written more clearly, or did they just not “get” the innovation? Were those who were excited just caught up in the “newness”, or were they really in sync with my vision? As a scientist, I don’t know the answer and can’t improve because the feedback is so contradictory.
And because innovators often create, develop or define fields of inquiry or practice that do not yet exist or are still in development, there are few if any adequate and available reviewers with the appropriate background on the topic.
In academia, we rely on tradition and on evidence (which is part of tradition: what has been done before), not on strategic foresight and innovation, to guide us. That is a problem in itself. Universities haven’t survived hundreds of years by being risky; they have survived because they were safe (in spite of the occasional radical shift here and there). With complex social problems and the challenges posed by things like cancer, something risky is needed, because the traditional ways of doing things have either been exhausted or are no longer producing the necessary health gains. Academics just aren’t positioned to embrace this risk unless the system changes — with them helping drive that change — to support innovation and not just talk about it.
Until that happens, the opportunities to live up to the definition of innovation posed above, and to create the impact described above, will be limited indeed.
Before acting in a manner consistent with complexity principles, people need to understand what those principles are, how complex systems differ from other systems, and what that means for their work. With mainstream education and professional practice so geared to linear forms of learning, this bodes poorly for building better systems thinkers.
“Let’s just throw some social media at it” is a variant of an expression I often hear in my work in health communications consulting and training. Organizations seeking to use the tools and media of Facebook, Twitter, and YouTube genuinely want to “get in the game” and use them effectively. Where things get problematic is when I tell them that social media is principally about building relationships, and that this extends to organizations: you need to relate to people, and therefore act the way one acts when building relationships.
Just as no one (at least no one I’ve met) would consider drawing up a flowchart and showing a prospective mate the planned trajectory of their dating relationship, complete with milestone targets and deliverables, no organization should think that it can just shovel content at people and expect its audience to relate better to it.
At first one might attribute this to a lack of understanding of social media, but that is only a small part of it. A lack of empathy is another. But the third and perhaps biggest reason is a fundamental lack of understanding of complexity and what it means.
The seductive nature of the “best practice” and the prescription for change in 5, 7, 10, 12 or however many easy steps is endemic in our society. These forms of thought suggest a linear trajectory of events, suggest an ability to control for externalities and parse out their impact, and provide a prescriptive solution that removes much of the worry about unknowns. But H. L. Mencken’s oft-quoted phrase (which I’ve used often), roughly that every complex problem has an answer that is clear, simple, and wrong, suggests the folly in this.
Simplicity is another way to get around complexity. It is sought but rarely achieved in application to the lived reality of the human condition, and although much discussed, it has not been widely achieved as a route to policy effectiveness. The reason lies with the nature of complexity itself and its resistance to reductionism. Evidence from biology through psychology (see previous links for examples) points to the considerable problem that science has in applying linear modes of thought and inquiry to complex systems.
The problems here are multifold and complicated, if not complex.
1. Our education system is designed for linear, progressive modes of learning, not discovery and non-linearity. We sit kids (and adults) in rows, we talk at them, we present material front-to-back. In short, we don’t design education for learning, but for knowledge transmission. Complexity is all about learning. Every situation has a degree of novelty that presents new challenges, and what happens today might not be what happens tomorrow, even if much is similar. Teaching to discover, adapt, play and risk is something our system doesn’t do well. How can we expect complexity and systems thinking to thrive when the muscles they use are so rarely exercised?
2. It’s more convenient to think in dichotomies than spectrums. As I’ve written previously, spectral thinking is critical to many of the issues we face in complex systems. Good/bad, strong/weak, X/Y lose their meaning in complex environments, where there is a continuum of states. Of all the dichotomies, only Yin/Yang comes close to working. But it’s a more difficult concept to grasp that maybe things aren’t all one way or the other, that there is use even in something that isn’t well constructed. This problem (and the ones that follow) is tied to the first one: our education and learning systems are not set up for this. We are primed for either/or thinking. Consider, in criminal justice terms, how easy it is to demand harsh punishment for criminal acts without considering that the perpetrators are human too, even if their behaviour is unacceptable.
3. Our decision-making tools are ill-equipped to handle ambiguity. Health care is a great example of how badly we do at complexity thinking. Consider the systematic review, often viewed as the gold standard of evidence for adoption into healthcare organizations. If an intervention has a good systematic review behind it, then the chances that we will see that evidence translated into practice are good, right? No. Surprisingly, even systematic reviews of systematic review use show a mixed bag in adoption. Systematic reviews are designed to reduce ambiguity, but (for those on human social systems, at least) they only illustrate how much there is. A systematic review only looks at the evidence that was created; it doesn’t include all those questions that were never asked, never funded for inquiry, or couldn’t be structured in a manner that fits the criteria for a good review. It is, by design, reductionistic in its approach to complexity.
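The filtering logic of a review can be sketched in a few lines of Python. Everything here — the studies, the criteria, the field names — is invented purely to illustrate the point that studies falling outside the frame never reach the synthesis, whatever they might have told us:

```python
# A caricature of systematic-review inclusion criteria as a reductive filter.
# All study records and criteria here are invented for illustration.

studies = [
    {"design": "rct",         "n": 120, "fits_protocol": True},
    {"design": "qualitative", "n": 15,  "fits_protocol": True},
    {"design": "rct",         "n": 40,  "fits_protocol": False},
    {"design": "case_study",  "n": 1,   "fits_protocol": True},
]

def include(study: dict) -> bool:
    """Keep only large randomized trials whose question fits the protocol."""
    return (study["design"] == "rct"
            and study["n"] >= 50
            and study["fits_protocol"])

included = [s for s in studies if include(s)]
excluded = [s for s in studies if not include(s)]

print(f"synthesized: {len(included)}; set aside: {len(excluded)}")
```

Three of the four studies are set aside before synthesis even begins; the filter reduces ambiguity precisely by discarding what does not fit.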
4. Our institutions are resistant to complexity. Complexity takes time, nuance, and relationship development: all the things that screw up plans. You can’t plan a relationship, but you can anticipate some things. You might even be able to use scenario tools and strategic foresight methods to anticipate what might happen, but you can’t plan it. John Lennon is right:
Life is what happens when you’re busy making other plans
While we plan, the complex systems move along. We can plan and fail, fail and plan, or plan to fail and work to build the strategic foresight to know what to do with these “failures”.
So now what? Being aware of these things is a start, but making systems change is really the key. Making change is about questioning the way we have been taught to learn, and what our assumptions about the universe are. Learning the difference between a simple, complicated, complex and chaotic system, and how to identify when those systems present themselves (and how they often change), is another step. This means finding like minds, sharing stories, and building networks. It means creating space for relationships — even in our linear planning models, if we must keep them (or better yet, get rid of most of them) — and considering what kind of returns we get from paying attention and being mindful of our systems, and what contemplative inquiry might offer that simple, detached data analysis does not.
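The simple/complicated/complex/chaotic distinction can be made concrete with a small lookup, loosely following Dave Snowden’s Cynefin framework; the exact response phrasings here are my paraphrase of that framework, not a definitive rendering:

```python
# A minimal sketch of the simple / complicated / complex / chaotic
# distinction, loosely paraphrasing Snowden's Cynefin framework.

RESPONSES = {
    "simple":      "sense -> categorize -> respond (apply best practice)",
    "complicated": "sense -> analyze -> respond (bring in expertise)",
    "complex":     "probe -> sense -> respond (run safe-to-fail experiments)",
    "chaotic":     "act -> sense -> respond (stabilize the situation first)",
}

def suggested_response(system_kind: str) -> str:
    """Look up a rough mode of action for a given kind of system."""
    kind = system_kind.strip().lower()
    if kind not in RESPONSES:
        raise ValueError(f"unknown system kind: {system_kind!r}")
    return RESPONSES[kind]

print(suggested_response("complex"))
```

The point of the table is the one the paragraph makes: a best-practice response that works in a simple system is exactly the wrong move in a complex one, where probing and learning must come before acting at scale.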
These are starting points, but not all of them. Addressing the challenge of complexity is, ironically or perhaps appropriately, complex. But dealing with the negative outcomes of overly simple approaches to complexity will ultimately be far more so.