Posted on May 17, 2016
Innovation is easier to say than to do. One reason is that a new idea must fit within a mindset, or frame, that is accustomed to seeing the way things are, not what they could be, and it's in changing this frame that innovators may find their greatest obstacles and opportunities.
Innovation, in both its creation and its distribution, is a considerable challenge to take up when the world faces so many problems rooted in the way we do things. The need to change what we do and how we live was brought into stark view this week as reports suggested that April was the hottest month on record, the third month in a row to beat a record by a large margin.
If we are to mitigate or adapt to the effects of climate change, we will need to innovate in technology, social and economic policy, bioscience, education and conservation, and to do it fast, at a planetary scale we've never seen before.
In the case of climate change, we see the problem, its causes and its consequences, through a frame. A frame is defined as:
frame |frām| noun
1) a rigid structure that surrounds or encloses something such as a door or window, 2) [ usu. in sing. ] a basic structure that underlies or supports a system, concept, or text: the establishment of conditions provides a frame for interpretation.
When discussing innovation we often draw on both of these definitions: a frame is at once a rigid, enclosing structure and something that supports our understanding of a system. Rigidity can imply strength, but rigid things also resist change.
Missing the boat for the sea
If we continually look at the sea we may assume it's always the same and fail to notice the boat that can take us across it. In a recent interview with the Atlantic magazine, journalist Tom Vanderbilt discusses how we can miss new opportunities because we feel we already know what we like, much like the kid who refuses to eat a vegetable she's never even tasted. Vanderbilt hits on something critical: the absence of language to convey what the 'new' is:
I think often we really are lacking the language, and the ways to frame it. If you look at films like Blade Runner or The Big Lebowski, when these films came out they were box office disasters. I think part of that was a categorization thing—not knowing how to think about it in the right way. Blade Runner didn’t really match up with the existing tropes of science fiction, Big Lebowski was just kind of strange
Today, both Blade Runner and The Big Lebowski are hailed as classics, but only after the fact. It's much like the Apple Newton in the early 1990s failing nearly two decades before the iPad arrived, even though it was a decent product.
Believing to see
A traditional evidence-based approach to change holds that you must see it to believe it. In innovation, we often need to believe in order to see. This is particularly true in complex contexts, where the links between cause and effect are harder to establish with evidence.
However, it's about more than belief in evidence; it's belief in possibility. It is for this reason that foresight can make such an important contribution to the innovation process. Strategic foresight provides an imaginative, yet data-supported, way of envisioning possible futures, outcomes and circumstances. It is a means of seeing future states as possibilities, which better prepares us to recognize them when they arrive in the present.
This is part of the thinking behind training exercises, and it is particularly obvious in sports. A team might imagine a number of scenarios that may not unfold as outlined during a game, but because the team has imagined certain things to be possible, it has rehearsed or anticipated ways to deal with what actually comes up. Imagining something helps the team believe in it enough to see it when it comes.
Spending time envisioning possible futures, whether through a deliberative process like strategic foresight or simply by allowing yourself time to notice trends and possibilities and how they might connect, can prepare you to meet those possibilities (or create them) somewhere down the road.
Doing so gives you the power to select which frame fits which picture.
Change-making is the process of transformation and not to be confused with the transformed outcome that results from such a process. We confuse the two at our peril.
“We are changing the world” is a rallying cry from many individuals and organizations working in social innovation and entrepreneurship, and it is both truth and untruth at the same time. Saying you're changing the world is far easier than actually doing it. One is dramatic — the kind of thing that makes for great reality TV, as we'll discuss — and the other is rather dull, plodding and incremental. But it may be the latter that really wins the day.
Organizations like Ashoka (and others) promote themselves as change-maker organizations, authoring blog posts with titles like “everything you need to know about change-making”. That kind of language, while attractive and potentially inspiring to diverse audiences, points to a mindset that views social change in relatively simple, linear terms: change is about having the right knowledge, the right plan, and the ability to pull it together and execute.
This is a mindset that highlights great people and great acts supported by great plans and processes. I’m not here to dismiss the work that groups like Ashoka do, but to ask questions about whether the recipe approach is all that’s needed. Is it really that simple?
Lies like: “It’s calories in, calories out”
Too often social change is viewed with the same flawed perspective as weight loss. Just eat less (and the right stuff), exercise, and you'll be fine: calories in, calories out, as the quote suggests. The reality is, it isn't that simple.
A heartbreaking and enlightening piece in the New York Times, published alongside a new study of this group (PDF), profiled the lives and struggles of past winners of the reality show The Biggest Loser and showed that all but one of the contestants regained weight after the show.
The original study, published in the journal Obesity, examines the metabolic adaptation that takes place: the authors suggest that a person's metabolism responds proportionally to compensate for wide fluctuations in weight, pushing contestants back toward their original pre-show weight.
Consider that during the show these contestants were constantly monitored, given world-class nutritional and exercise supports, had tens of thousands of people cheering them on and also had a cash prize to vie for. This was as good as it was going to get for anyone wanting to lose weight shy of surgical options (which have their own problems).
Besides being disheartening to everyone who is struggling with obesity, the paper illuminates the inner workings of our body and reveals it to be a complex adaptive system rather than the simple one that we commonly envision when embarking on a new diet or fitness regime. Might social change be the same?
We can do more and we often do
I’m fond of saying that we often do less than we think and more than we know.
That means we tend to expect that our intentions and efforts to make change will directly produce the results we seek, because of our involvement. In short, we treat social change as a straightforward process. While that is sometimes true, rarely do programs aimed at social change come close to achieving their stated systems goals (“changing the world”).
This is likely the case for a number of reasons:
- Funders often require clear goals and targets for programs in advance and fund based on promises to achieve these results;
- These kind of results are also the ones that are attractive to outside audiences such as donors, partners, academics, and the public at large (X problem solved! Y number of people served! Z thousand actions taken!), but may not fully articulate the depth and context to which such actions produce real change;
- Promising results to stakeholders and funders suggests that a program is operating in a simple or complicated system, rather than a complex one (which is rarely, if ever the case with social change);
- Because program teams know these promised outcomes don’t fit with their system they cherry-pick the simplest measures that might be achievable, but may also be the least meaningful in terms of social change.
- Programs will often further choose to emphasize those areas within the complex system that have embedded ordered (or simple) systems in them to show effect, rather than look at the bigger aims.
The process of change that comes from healthy change-making can be transformative for the change-maker themselves, yet not yield much in the way of tangible outcomes related to the initial charge. The reasons likely have to do with the compensatory behaviours of the system — akin to social metabolic adaptation — subduing the efforts we make and the initial gains we might experience.
Yet, we do more at the same time. Danny Cahill, one of the contestants profiled in the New York Times story, spoke about how the lesson learned from his post-show weight gain was that the original weight gain wasn't his fault in the first place:
“That shame that was on my shoulders went off”
What he’s doing is adapting his plan, his goals and working differently to rethink what he can do, what’s possible and what is yet to be discovered. This is the approach that we take when we use developmental evaluation; we adapt, evolve and re-design based on the evidence while continually exploring ways to get to where we want to go.
A marathon, not a sprint, in a laboratory
The Biggest Loser is a sprint: all of the change work compressed into a short period of time. It's a lab experiment, but as we know, what happens in a laboratory doesn't always translate directly into the world outside its walls, because the constraints have changed. As the show's attending physician, Dr. Robert Huizenga, told the New York Times:
“Unfortunately, many contestants are unable to find or afford adequate ongoing support with exercise doctors, psychologists, sleep specialists, and trainers — and that’s something we all need to work hard to change”
This quote illustrates a fallacy at the heart of many real-world change initiatives and exposes some of the problems we see with organizations that claim to have the knowledge about how to change the world. Have these organizations or funders gone back to see what they've done, or what's left after the initial funding and resources were pulled? This is not just a public, private or non-profit problem: it's everywhere.
I have a colleague who spent much time working with someone who “was hired to clean up the messes that [large, internationally recognized social change & design firm] left behind” because the original, press-grabbing solution failed in the long run. The failure wasn't the lack of success, but the lack of learning: the firm and its funders were already off to another project. Without building local capacity for change and a sustained, long-term marathon mindset (versus the sprint), we set ourselves up for failure. With that mindset, a lack of success is simply part of an experimental approach, consistent with an innovation laboratory; without it, a lack of success truly is failure, because there is no capacity to learn and to act on that learning.
Part of the laboratory approach to change is that labs — real research labs — focus on radical, expansive, long-term and persistent incrementalism. Now that might sound dull and unsexy (which is why few seem to follow it in the social innovation lab space), but it’s how change — big change — happens. The key is not in thinking small, but thinking long-term by linking small changes together persistently. To illustrate, consider the weight gain conundrum as posed by obesity researcher Dr. Michael Rosenbaum in speaking to the Times:
“We eat about 900,000 to a million calories a year, and burn them all except those annoying 3,000 to 5,000 calories that result in an average annual weight gain of about one to two pounds,” he said. “These very small differences between intake and output average out to only about 10 to 20 calories per day — less than one Starburst candy — but the cumulative consequences over time can be devastating.”
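Rosenbaum's arithmetic is easy to check. A minimal sketch, using the common rule-of-thumb conversion of roughly 3,500 kcal per pound of body fat (an assumption of mine, not stated in the quote):

```python
CAL_PER_POUND = 3500  # rule of thumb: ~3,500 kcal stored per pound of fat

def annual_gain_lbs(daily_surplus_cal):
    """Pounds gained per year from a small, steady daily calorie surplus."""
    return daily_surplus_cal * 365 / CAL_PER_POUND

# The quote's 10-20 unburned calories per day:
for surplus in (10, 20):
    print(f"{surplus} cal/day -> {annual_gain_lbs(surplus):.1f} lb/year")
```

A surplus of 10 to 20 calories a day, out of roughly 2,500 consumed, works out to about one to two pounds a year, exactly the persistence-beats-drama point being made here.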
Building a marathon laboratory
Marathoners are guided by a strange combination of urgency, persistence and patience. When you run 26 miles (42 km) there’s no sprinting if you want to finish the same day you started. The urgency is what pushes runners to give just a little more at specific times to improve their standing and win. Persistence is the repetition of a small number of key things (simple rules in a complex system) that keep the gains coming and the adaptations consistent. Patience is knowing that there are few radical changes that will positively impact the race, just a lot of modifications and hard work over time.
Real laboratories seek to learn a lot, simply and consistently and apply the lessons from one experiment to the next to extend knowledge, confirm findings, and explore new territory.
Marathons aren't as fun to watch as the 100m sprint, and lab work is far less sexy than the mythical 'eureka' moments of 'discovery' that get promoted, but that's what changes the world. The key is to build organizations that support this. It means recognizing learning, and that learning comes from poor outcomes as well as positive ones. It means asking questions, being persistent and not resting on laurels. It also means avoiding the draw of being 'sexy' and 'newsworthy' and instead focusing on the small but important things that make the news possible in the first place.
Doing that might not be as sweet as a Starburst candy, but it might save us from having to eat it.
Would we invest in something if we had little hard data to suggest what we could expect to gain from that investment? This is often the case with social programs, yet it's a domain that has resisted the kind of data-driven approaches to investment we've seen in other sectors. One theory is that we can approach change the same way we code the genome. But is that a good idea?
Jason Saul is a maverick in social impact work and dresses the part: he's wearing a suit. That's not typically the uniform of those working in the social sector railing against the system, but it's one of the many things that gets people talking about what he and his colleagues at Mission Measurement are trying to do. The mission is clear: bring to social impact the same detailed, evidence-based analysis of the factors that contribute to real results that we would apply to nearly any other area of investment.
The way to achieve this mission is to take the thinking behind the Music Genome Project, the algorithm that powers the music service Pandora, and apply it to social impact. This is a big task, done by coding the known literature on social impact from across a vast spectrum of disciplines, methods, theories and modelling techniques. A short video from Mission Measurement nicely outlines the thinking behind this way of looking at evaluation, measurement and social impact.
Saul presented his vision for measurement and evaluation to a rapt audience at the MaRS Discovery District in Toronto on April 11th, part of its Global Leaders series, en route to the Skoll World Forum. This is a synopsis of that presentation and its implications for social impact measurement.
(Re) Producing change
Saul began his presentation by pointing to an uncomfortable truth in social impact: we spread money around with good intentions and little insight into actual change. He claims (no reference provided) that 2,000 studies on behaviour change are published per day, yet there remains an absence of common metrics and measures within evaluation to detect change. One reason is that social scientists, program leaders and community advocates resist standardization, claiming that context matters too much to allow aggregation.
Saul isn't denying the importance of context, but argues that it's often used as an unreasonable barrier to leading evaluations with evidence. On this, he's right. For example, psychology's poor track record of reproducibility means its data alone offers much less to social change initiatives than is needed. As a professional evaluator and social scientist, I'm not often keen on being told how to do what I do (though sometimes I benefit from it). That can be a barrier, but it also points to a problem: if the data shows how poorly it replicates, is following it a good idea in the first place?
Are we doing things righter than we think or wronger than we know?
To this end, Saul advocates a meta-evaluative perspective: linking together studies from across the field by breaking programs down into components, something akin to a genome. By looking at combinations of components, the thinking goes, as we do in genetics, we can start to see certain expressions of particular behaviours and related outcomes. If we knew these things in advance, we could invest our energy and funds in programs much more likely to succeed. We could also rapidly scale and replicate successful programs by understanding the features that contribute to their fundamental design for change.
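To make the idea concrete, here is a toy sketch of component-based matching. This is my own illustration, not Mission Measurement's actual method; the component names, effect sizes and the similarity-weighting scheme are all invented for the example:

```python
# Toy "genome" idea: programs coded as sets of components; a new program's
# expected outcome is estimated from coded studies, weighted by overlap.

def jaccard(a, b):
    """Overlap between two component sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

# Invented example data: (component set, observed effect size)
coded_studies = [
    ({"peer-support", "goal-setting", "sms-reminders"}, 0.30),
    ({"goal-setting", "financial-incentive"}, 0.15),
    ({"peer-support", "counselling"}, 0.25),
]

def expected_effect(candidate):
    """Similarity-weighted average of observed effects for a candidate program."""
    weights = [jaccard(candidate, comps) for comps, _ in coded_studies]
    if sum(weights) == 0:
        return None  # no overlap with anything we've coded
    return sum(w * e for w, (_, e) in zip(weights, coded_studies)) / sum(weights)

print(round(expected_effect({"peer-support", "goal-setting"}), 3))  # 0.25
```

Even this toy version makes the limitations visible: the estimate is only as good as the coding of the studies, and it treats components as additive, which is exactly the reductionist assumption questioned below.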
The epigenetic nature of change
Genetics is a complex thing. Even where there is reasonably strong data connecting certain genetic traits to biological expression, there are few examples of genes as 'destiny', as they are too often portrayed. In other words, it almost always depends on a number of things. In recent years the concept of epigenetics has risen to prominence to explain how genes get expressed, which has as much to do with the environmental conditions present as with the gene combinations themselves. McGill scientist Moshe Szyf and his colleagues pioneered research into how genes are suppressed, expressed and transformed through engagement with the natural world, helping to create the field of epigenetics. Where we once thought genes were prescriptions for certain outcomes, we now know it's not that simple.
By approaching change as a genome, there is a risk that the metaphor can lead to false conclusions about the complexity of change. This is not to dismiss the valid arguments being made around poor data standardization, sharing, and research replication, but it calls into question how far the genome model can go with respect to social programs without breaking down. For evaluators looking at social impact, the opportunity is that we can systematically look at the factors that consistently produce change if we have appropriate comparisons. (That is a big if.)
Saul outlined many of the challenges that beset evaluation of social impact research, including the 'file-drawer effect' and related publication bias, differences in measurement tools, and a lack of (documented) fidelity in programs. Responding to Saul's presentation, Cathy Taylor of the Ontario Non-Profit Network raised the challenge that much of what is known about a program is not documented, but embodied in program staff and shared through exchanges. This matter of tacit knowledge and practice-based evidence bedevils efforts to compare programs: many social programs are rich in context — people, places, things, interactions — that remains uncaptured in any systematic way, and it is that kind of data that is needed if we wish to understand the epigenetic nature of change.
Unlike Moshe Szyf and his fellow scientists working in labs, we can’t isolate, observe and track everything our participants do in the world in the service of – or support to – their programs, because they aren’t rats in a cage.
Systems thinking about change
One of the other criticisms of the model that Saul and his colleagues have developed is that it is rather reductionist in its expression. While there is ample consideration of contextual factors in his presentation of the model, the social impact genome is fundamentally based on reductionist approaches to understanding change. A reductionist approach to explaining social change has been derided by many working in social innovation and environmental science as outdated and inappropriate for understanding how change happens in complex social systems.
What is needed is synthesis and adaptation and a meta-model process, not a singular one.
Saul's approach is not in opposition to this, but it gets a little foggy how the recombination of parts into wholes is realized. This is where the practical implications of the genome model start to break down. That isn't a reason to give up on it, but an invitation to ask more questions and to start testing the model more fully. It's also a call for systems scientists to get involved, just as they did with the Human Genome Project, which deepened our understanding of the influences on our genes and stressed the importance of the environment in creating or designing healthy systems for humans and the living world.
At present, the genomic approach to change is largely theoretical, backed by ongoing development and experiments but little outcome data. There is great promise that bigger and better data, better coding, and a systemic approach to social investment will lead to better outcomes, but there is little actual data on whether this approach works, for whom, and under what conditions. That is to come. In the meantime, we are left with questions and opportunities.
Among the most salient of the opportunities is to use this to inspire greater questions about the comparability and coordination of data. Evaluations as ‘one-off’ bespoke products are not efficient…unless they are the only thing that we have available. Wise, responsible evaluators know when to borrow or adapt from others and when to create something unique. Regardless of what design and tools we use however, this calls for evaluators to share what they learn and for programs to build the evaluative thinking and reflective capacity within their organizations.
The future of evaluation is going to include this kind of thinking and modelling. Evaluators, social change leaders, grant makers and the public ignore it at their peril, which includes losing opportunities to make evaluation and social impact development more accountable, more dynamic and more impactful.
About the author: Cameron Norman is the Principal of Cense Research + Design and assists organizations and networks in supporting learning and innovation in human services through design, program evaluation, behavioural science and system thinking. He is based in Toronto, Canada.
Posted on April 4, 2016
The quest for excellence within social programs relies on knowing what excellence means and how programs compare against others. Benchmarks can enable us to compare one program to another if we have quality comparators and an evaluation culture to generate them – something we currently lack.
A benchmark was originally a surveyor's mark: a fixed point for holding a levelling rod so that the elevation of a particular place could be measured consistently and compared over time.
The term is often used in evaluation to provide comparison between programs or practices, typically taking one well-understood, high-performing program as the 'benchmark' against which others are compared: the standard to which other measures are held.
In a 2010 article for the World Bank (PDF), evaluators Azevedo, Newman and Pungilupp articulate the value of benchmarking and provide examples of how it contributes to understanding both the absolute and relative performance of development programs. On the need for benchmarking, the authors conclude:
In most benchmarking exercises, it is useful to consider not only the nature of the changes in the indicator of interest but also the level. Focusing only on the relative performance in the change can cause the researcher to be overly optimistic. A district, state or country may be advancing comparatively rapidly, but it may have very far to go. Focusing only on the relative performance on the level can cause the researcher to be overly pessimistic, as it may not be sufficiently sensitive to pick up recent changes in efforts to improve.
Compared to what?
One of the challenges with benchmarking exercises is finding a comparator. This is easier for programs operating within relatively simple systems and structures, less so for complex ones. In the service sector, for example, wait times are a common benchmark: in the province of Ontario, Canada, the government publishes regularly updated Emergency Room wait times on a website. In healthcare, benchmarks are used in multiple ways. There is a target used as the benchmark, although, depending on the condition, that target might rest on a combination of aspiration and evidence, as well as what the health system believes is reasonable, what the public demands (or expects) and what the hospital desires.
Part of the problem with benchmarks set in this manner is that they are easy to manipulate and thus raise the question of whether they are true benchmarks in the first place or just goals.
If I set a personal benchmark of eating three meals a day as good dietary behaviour, I might find myself performing exceptionally well, having managed it nearly every day for the last three months. If the benchmark is instead consuming the 2,790 calories recommended for someone of my age, sex, activity level and fitness goals, that's different. Add the aim of having about 50% of those calories come from carbohydrates, 30% from fat and 20% from protein, and we have a very different set of issues to consider when contemplating how performance relates to a standard.
One reason we can benchmark diet targets is that the underlying data set is enormous. Tools like MyFitnessPal use benchmarks gleaned from tens of thousands of users and hundreds of scientific articles and reports on diet and exercise from the past 50 years to provide personal data for fitness tracking. From this it's possible to generate reasonably appropriate recommendations for a specific age group and sex.
These benchmarks are also possible because we have internationally standardized the calorie. We have internationally recognized, though slightly less precise, measures for age and sex. Activity level gets fuzzier still, but we have benchmarks for it too. As the activities that define fitness and diet goals get clustered together, we start to realize that the result is a jumble of highly precise and somewhat loosely defined benchmarks.
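The macro-split arithmetic above is simple once the standards exist. A minimal sketch, assuming the standard Atwater factors (4 kcal per gram for carbohydrate and protein, 9 kcal per gram for fat) applied to the 2,790-calorie, 50/30/20 example:

```python
# Standard Atwater factors: kcal per gram of each macronutrient
KCAL_PER_GRAM = {"carbohydrate": 4, "fat": 9, "protein": 4}

def macro_targets(total_kcal, split):
    """Convert a daily calorie target and a macro split into gram targets."""
    return {
        macro: round(total_kcal * share / KCAL_PER_GRAM[macro])
        for macro, share in split.items()
    }

targets = macro_targets(2790, {"carbohydrate": 0.50, "fat": 0.30, "protein": 0.20})
print(targets)  # roughly 349 g carbohydrate, 93 g fat, 140 g protein
```

The calorie side of the calculation is precise; the appropriateness of the split for any particular person is exactly the loosely defined part.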
The bigger challenge comes when we don’t have a scientifically validated standard or even a clear sense of what is being compared and that is what we have with social innovation.
Creating an evaluation culture within social innovation
Social innovation has a variety of definitions, but their common thread is a social program aimed at addressing social problems using ideas, tools, policies and practices that differ from the status quo. Given the complexity of the environments in which many social programs operate, it's safe to assume that social innovation** is happening all over the world, because the contexts are so varied. The irony is that many in this sector are not learning from one another as much as they could, further complicating any initiative to build benchmarks for social programs.
Some groups like the Social Innovation Exchange (SIX) are trying to change that. However, they and others like them, face an uphill battle. Part of the reason is that social innovation has not established a culture of evaluation within it. There remains little in the way of common language, frameworks, or spaces to share and distribute knowledge about programs — both in description and evaluation — in a manner that is transparent and accessible to others.
Competition for funding, the desire to paint programs in a positive light, lack of expertise, insufficient resources for dissemination and translation, the absence of a dedicated space for sharing results, and distrust of or isolation from academia in certain sectors all might contribute to this. The Stanford Social Innovation Review, for example, is among the few venues dedicated to scholarship in social innovation aimed at a wide audience. It's also a venue focused largely on international development and what I might call 'big' social innovation: the kind of work that attracts large philanthropic resources. There are lots of other types of social innovation, and they don't all fit the model that SSIR promotes.
In my experience, many small organizations and initiatives struggle to fund evaluation sufficiently, let alone the dissemination of the work once it's finished. Without good quality evaluations and the means to share their results — whether or not they cast a program in a positive light — it's difficult to build a culture where the sector learns from itself. Without a culture of evaluation, we also don't get the volume of data and access to appropriate comparators — not just the only things we can find — needed to develop true, useful benchmarks.
Culture’s feast on strategy
Building on the adage attributed to Peter Drucker that culture eats strategy for breakfast (or lunch), it might be time to use that feasting to generate some energy for change. If the strategy is to be more evidence-based, to learn more about what is happening in the social sector, and to compare across programs to aid that learning, there needs to be a culture shift.
This requires acknowledging that evaluation, a disciplined means of providing structured feedback on and monitoring of programs, is not adjunct to social innovation but a key part of it. Evaluation not only provides some of the raw material (data) for informed choices that shape strategy; it is as much a raw material for social change as enthusiasm, creativity, focus, and dissatisfaction with the status quo.
We are seeing a culture of shared ownership and collective impact forming; now it's time to take that further and shape a culture of evaluation that builds on it, so we can truly start sharing, building capacity and developing the real benchmarks that show how well social innovation is performing. In doing so, we make social innovation more respectable, more transparent, more comparable and more impactful.
Only by knowing what we are doing and have done can we really sense just how far we can go.
** For this article, I’m using the term social innovation broadly, which might encompass many types of social service programs, government or policy initiatives, and social entrepreneurship ventures that might not always be considered social innovation.
Design and innovation are often regarded as good things (when done well), even if a moment's pause might find little to explain what those things actually are. Without a sense of what design produces, what innovation looks like in practice, and an understanding of the journey to the destination, are we delivering false praise and hope, and failing to deliver real, sustainable change?
What is the value of design?
If we are claiming to produce new and valued things (innovation) then we need to be able to show what is new, how (and whether) it’s valued (and by whom), and potentially what prompted that valuation in the first place. If we acknowledge that design is the process of consciously, intentionally creating those valued things — the discipline of innovation — then understanding its value is paramount.
Given the prominence of design and innovation in the business and social sector landscape these days one might guess that we have a pretty good sense of what the value of design is for so many to be interested in the topic. If you did guess that, you’d have guessed incorrectly.
‘Valuating’ design, evaluating innovation
On the topic of program design, current president of the American Evaluation Association, John Gargani, writes:
Program design is both a verb and a noun.
It is the process that organizations use to develop a program. Ideally, the process is collaborative, iterative, and tentative—stakeholders work together to repeat, review, and refine a program until they believe it will consistently achieve its purpose.
A program design is also the plan of action that results from that process. Ideally, the plan is developed to the point that others can implement the program in the same way and consistently achieve its purpose.
One of the challenges with many social programs is that it isn't clear what the purpose of the program is in the first place. Or rather, the purpose and the activities might not be well aligned. One example is the rise of 'kindness meters', the repurposing of old coin-operated parking meters to collect money for certain causes. I love the idea of offering a pro-social means of getting small change out of my pocket and having it go to a good cause, yet some have taken the concept further and suggested it could be a way to redirect money to the homeless and thus reduce the number of panhandlers on the street. A recent article in Maclean's magazine profiled this strategy, including its critics.
The biggest criticism of them all is that there is a very weak theory of change to suggest that meters and their funds will get people out of homelessness. Further, there is much we don't know about this strategy: 1) how was this developed?, 2) was it prototyped, and where?, 3) what iterations were performed — and is this just the first?, 4) whose needs was this designed to address?, and 5) what needs to happen next with this design? This is an innovative idea to be sure, but the question is whether it's a beneficial one or not.
We don't know, and what evaluation can do is provide the answers and help ensure that an innovative idea like this is supported in its development, so we can determine whether it ought to stay, go, or be transformed, and what we can learn from the entire process. Design without evaluation produces products; design with evaluation produces change.
A bigger perspective on value creation
The process of placing or determining value* of a program is about looking at three things:
1. The plan (the program design);
2. The implementation of that plan (the realization of the design on paper, in prototype form and in the world);
3. The products resulting from the implementation of the plan (the lessons learned throughout the process; the products generated from the implementation of the plan; and the impact of the plan on matters of concern, both intended and otherwise).
Prominent areas of design such as industrial, interior, fashion, or software design are principally focused on an end product. Most people aren’t concerned about the various lamps their interior designer didn’t choose in planning their new living space if they are satisfied with the one they did.
A look at the process of design — the problem finding, framing and solving aspects that comprise the heart of design practice — finds that the end product is actually the last in a long line of sub-products and that, if the designers are paying attention and reflecting on their work, they are learning a great deal along the way. That learning and those sub-products matter greatly for social programs innovating and operating in human systems. This may be the real impact of the programs themselves, not the products.
One reason this is important is that many of our program designs don’t actually work as expected, at least not at first. Indeed, a look at innovation in general finds that about 70% of the attempts at institutional-level innovation fail to produce the desired outcome. So we ought to expect that things won’t work the first time. Yet, many funders and leaders place extraordinary burdens on project teams to get it right the first time. Without an evaluative framework to operate from, and the means to make sense of the data an evaluation produces, not only will these programs fail to achieve desired outcomes, but they will fail to learn and lose the very essence of what it means to (socially) innovate. It is in these lessons and the integration of them into programs that much of the value of a program is seen.
Designing opportunities to learn more
Design has a glorious track record of accountability for its products in terms of satisfying its clients’ desires, but not its process. Some might think that’s a good thing, but in the area of innovation that can be problematic, particularly where there is a need to draw on failure — unsuccessful designs — as part of the process.
In truly sustainable innovation, design and evaluation are intertwined. Creative development of a product or service requires evaluation to determine whether that product or service does what it says it does. This is of particular importance in contexts where the product or service may not have a clear objective or have multiple possible objectives. Many social programs are true experiments to see what might happen as a response to doing nothing. The ‘kindness meters’ might be such a program.
Further, there is an ethical obligation to look at the outcomes of a program lest it create more problems than it solves or simply exacerbate existing ones.
Evaluation without design can result in feedback that is inappropriate, decontextualized, or never integrated into future developments and iterations. Evaluation also ensures that the work that goes into a design is captured and understood in context — irrespective of whether the resulting product was a true 'innovation'. Another reason is that, particularly with social programs, the resulting product or service is not an 'either/or' proposition. There may be many elements of a 'failed design' that can be useful and incorporated into the final successful product, yet if viewed as a dichotomous 'success' or 'failure', we risk losing much useful knowledge.
Further, great discovery is predicated on incremental shifts in thinking, developed in a non-linear fashion. This means that it’s fundamentally problematic to ascribe a value of ‘success’ or ‘failure’ on something from the outset. In social settings where ideas are integrated, interpreted and reworked the moment they are introduced, the true impact of an innovation may take a longer view to determine and, even then, only partly.
Much of this depends on what the purpose of innovation is. Is it the journey or is it the destination? In social innovation, it is fundamentally both. Indeed, it is also predicated on a level of praxis — knowing and doing — that is what shapes the ‘success’ in a social innovation.
When design and evaluation are excluded from each other, both are lesser for it. This year’s American Evaluation Association conference is focused boldly on the matter of design. While much of the conference will be focused on program design, the emphasis is still on the relationship between what we create and the way we assess value of that creation. The conference will provide perhaps the largest forum yet on discussing the value of evaluation for design and that, in itself, provides much value on its own.
*Evaluation is about determining the value, merit and worth of a program. I’ve only focused on the value aspects of this triad, although each aspect deserves consideration when assessing design.
Image credit: author
Posted on February 8, 2016
The costs of books, materials, tuition, or conference fees often distort the perception of how much learning costs, creating larger distortions in how we perceive the benefits of knowledge. By looking at the price we pay for integrating knowledge and experience we might re-valuate what we need, what we have and what we pay attention to in our learning and innovation quest.
A quote paraphrased and attributed to German philosopher Arthur Schopenhauer points to one of the fundamental problems facing books:
Buying books would be a good thing if one could also buy the time to read them in: but as a rule the purchase of books is mistaken for the appropriation of their contents.
Schopenhauer passed away in 1860 when the book was the dominant media form of codified knowledge and the availability of books was limited. This was before radio, television, the Internet and the confluence of it all in today’s modern mediascape from Amazon to the iPhone and beyond.
Schopenhauer exposes the fallacy that equates access to information with knowledge. This fallacy underpins a major challenge facing our learning culture today: quantity of information vs. quality of integration.
Consider something like a conference or seminar. How often have you attended a talk or workshop, been moved by what you heard and saw, taken furious notes, and walked out of the room vowing to make a big change based on what you just experienced? And then what happened? My guess is that the world outside that workshop or conference looked a lot different than it appeared from inside it. You had emails piled up, phone messages to return, colleagues to convince, resources to marshal, patterns to break and so on.
Among the simple reasons is that we do not protect the time and resources required to actually learn and to integrate that knowledge into what we do. As a result, we mistakenly take the volume of 'things' we expose ourselves to as a proxy for learning outcomes.
One solution is to embrace what consultant, writer and blogger Sarah Van Bargen calls “intentional ignorance”. This approach involves turning away from the ongoing stream of data and accepting that there are things we won't know and that we'll just miss. Van Bargen isn't calling for a complete shutting of the door, rather something akin to an information sabbatical or what some might call a digital sabbath. Sabbath and sabbatical share the Latin root sabbatum, which means “to rest”.
Rebecca Rosen, who writes on work and business for The Atlantic, argues we don't need a digital sabbath, we need more time. Rosen's piece points to a number of trends suggesting that the way we work now means producing more, more often, and doing it throughout the day. The problem is not about more, it's about less. It's also about different.
Time, by design
One of the challenges is our relationship to time in the first place and the forward orientation we have to our work. We humans are designed to look forward so it is not a surprise that we engineer our lives and organizations to do the same. Sensemaking is a process that orients our gaze to the future by looking at both the past and the present, but also by taking time to look at what we have before we consider what else we need. It helps reduce or at least manage complex information to enable actionable understanding of what data is telling us by putting it into proper context. This can’t be done by automation.
It takes time.
….setting aside time to look at the data and discuss it with those who are affected by it, who helped generate it, and who are close to the action;
….taking time to gather the right kind of information — information that is context-rich, measures things that have meaning, and does so with appropriate scope and precision;
….creating organizational incentives and protections for people to integrate what they know into their jobs and roles, and to create organizations that are adaptive enough to absorb, integrate and transform based on this learning — becoming a true learning organization.
By changing the practices within an organization we can start shifting the way we learn and increase the likelihood of learning taking place.
Imagine buying both the book and the time to read the book and think about it. Imagine sending people on courses and then giving them the tools and opportunity to try the lessons (the good ones at least) in practice within the context of the organization. If learning is really a priority, what kind of time is given to people to share what they know, listen to others, and collectively make sense of what it means and how it influences strategy?
What we might find is that we do less. We buy less. We attend less. We subscribe to less. Yet, we absorb more and share more and do more as a result.
The cost of learning then shifts — maybe even to less than we spend now — but what it means is that we factor in time not just product in our learning and knowledge production activities.
This can happen and it happens through design.
Photo credit by Tim Sackton used under Creative Commons License via Flickr.
Abraham Lincoln quote image from TheQuotepedia.
Posted on February 4, 2016
Collective impact is based largely on the concept that we can do more together than apart, which holds true under the assumption that we can coordinate, organize and execute as a unit. This assumption doesn’t always hold true and the implications for getting it wrong require serious attention.
Anyone interested in social change knows that they can’t do it alone. Society, after all, is a collective endeavour — even if Margaret Thatcher suggested it didn’t exist. Thatcherites aside, that is about where agreement ends. Social change is complex, fraught with disagreements, and challenging for even the most skilled organizer because of the multitude of perspectives and disparate spread of individuals, groups and organizations across the system.
Social media (and the Internet more widely) was seen as a means of bridging these gaps, bringing people together and enabling them to organize and make social change. Wael Ghonim, one of the inspirational forces behind Egypt's Arab Spring movement, believed this to be true, saying:
If you want to liberate society all you need is the Internet
But as he acknowledges now, he was wrong.
None of us is as smart as all of us
Blanchard’s quote is meant to illustrate the power of teams and working together; something that we can easily take for granted when we seek to do collective action. Yet, what’s often not discussed are the challenges that our new tools present for true systems change.
Complex (social) systems thrive on diversity, the interaction between ideas, and the eventual coordination and synchronization of actions into energy for change. That requires some agreement, strategy and leadership before the change becomes the new stable state (the changed state). Change comes from a coalescing of perspectives into some form of agreement that can be transformed into a design and then executed. It's messy, unpredictable, imprecise, and can take time and energy, but that is how social change happens.
At least, that's how it has happened. How it's happening now is less clear, thanks to social media and its near-ubiquitous role in social movements worldwide.
The principles underpinning complex social systems haven't changed, but the psychology of change and the communication that takes place within those systems have. When one reviews or listens to the stories told about social change movements from history, what we see over and again is the power of stories.
Stories take time to tell, are open to questions, and can grow more powerful in their telling and retelling. They engage us and, because they take time, grant us time to reflect on their meaning and significance. It's a reason why we see plays, read novels, watch full-length films, and spend time with friends out for coffee…although this all might be happening less and less.
Social media puts pressure on that attention, which is part of the change process. Social media's short-burst communication styles — particularly with Tweets, Snapchat pictures, Instagram shots and Facebook posts — make it immensely portable and consumable, yet also highly problematic for longer narratives. The social media 'stream', something discussed here before, provides a format that tends to confirm our own beliefs and perspectives, not challenge them, by giving us what we want even if that's not necessarily what we need for social change.
When we are challenged, the anonymity, lack of social cues, immediacy, and reach of social media can make it too easy for our baser natures to override our thoughts and lash out. Whether it's Wael Ghonim and Egypt's Arab Spring, Hossein Derakhshan and Iran's citizen political movement, or the implosion of the Occupy movement, the voices of constructive dissent and change can be overwhelmed by infighting and internal dissent, never allowing that constructive coalescing of perspective needed to focus change.
Collectively, we may be more likely to reflect one of the ‘demotivation’ posters from Despair instead of Ken Blanchard:
None of us is as dumb as all of us
Social media, the stream and the hive
Ethan Zuckerman of the MIT Media Lab has written extensively about the irony of the social insularity that comes with the freedom and power online social networks introduce as was explored in a previous post.
The strength of a collective impact approach is that it aims to codify and consolidate agreement, including the means for evaluating impact. To this end, it's a powerful force for change if the change that is sought is of sufficient value to society, and that is where things get muddy. I've personally seen many grand collaboratives fall into irrelevancy because the only agreements that participants can come up with are broad plaudits or truisms that have little practical meaning.
Words like “impact”, “excellence”, “innovation” and “systems change” are relatively meaningless if not channeled into a vision that's attainable through specific actions and activities. The specifics — the devil in the details — come from discussion, debate, concession, negotiation and reflection, all traits that seem to be missing when issues are debated via social media.
What does this mean for collective impact?
If not this, then what?
This is not a critique of collective activity, because working together is very much like what Winston Churchill said about democracy: its failings still make it better than the alternatives. But it's worth asking some serious questions and researching what collective impact means in practice and how we engage it with the social tools that are now a part of working together (particularly at a distance). These questions require research and systemic inquiry.
Social innovation laboratories, or social labs, are a good example of an idea that sounds great (and may very well be so), yet has remarkably little evidence behind it. Collective impact risks falling into the same trap if it is not rigorously, critically evaluated and if the evaluation outcomes are not shared. This includes asking the designer's and systems thinker's question: are we solving the right problem in the first place? (Or are we addressing some broad, foggy ideal that has no utility in practice for those who seek to implement an initiative?)
Among the reasons brainstorming is problematic is that it fails to account for power and for the power of the first idea. Brainstorming favours those ideas that are put forward first with participants commonly reacting to those ideas, which immediately reduces the scope of vision. A far more effective method is having participants go off and generate ideas independently and then systematically introducing those to the group in a manner that emphasizes the idea, not the person who proposed it. Research suggests it needs to be well facilitated [PDF].
There may be an argument that we need better facilitation of ideas through social media or, perhaps as Wael Ghonim argues, a new approach to social media altogether. Regardless, we need to design the conversation spaces and actively engage in them lest we create a well-intentioned echo chamber that achieves collective nothing instead of collective impact.
Posted on January 27, 2016
Innovation might be doing things differently to produce value, but there's little value if we as a society are not able to embrace change because we're hiding from mental illness as individuals, organizations or communities. Without wellbeing, and the space to acknowledge when we don't have it, any new product, idea or opportunity will be wasted, which is why mental health promotion is something we all need to care about.
Today is Bell Let's Talk Day in Canada. It's (probably) the most visible national day of mental health promotion in the world. The reason has much to do with the sponsor, Bell Canada, which happens to be one of the country's major providers of wireless telecommunications, Internet, and television services in addition to owning many entertainment outlets like cable channels, sports teams and radio stations. But this is not about Bell**, but the issue behind Let's Talk Day: ending mental health stigma.
Interestingly perhaps, the line from the film and novel Fight Club that is most remembered is also the one that is quite fitting for the topic of mental health (particularly given the story):
First rule of Fight Club: Don’t talk about Fight Club.
Mental health stigma is a vexing social problem because it’s about an issue that is so incredibly common and yet receives so little attention in the public discourse.
The Mental Health Commission of Canada's aggregation of the data provides a useful jumping-off point:
- In any given year, one in five people in Canada experiences a mental health problem or illness bringing a cost to the economy of more than $50 billion;
- Up to 70 per cent of adults with a mental health problem report having had one in childhood;
- Mental health was the reason for nearly half of all disability-related claims by the Canadian public service in 2010, double what it was in 1990;
- Mental health problems and illnesses account for over $6 billion in lost productivity costs due to absenteeism and presenteeism;
- Among our First Nations, youth are 5-6 times more likely to die at their own hands than non-Aboriginal youth and for Inuit, the suicide rate is 11 times the national average;
- Improving a child’s mental health from moderate to high has been estimated to save society more than $140,000 over their lifetime;
And this is just Canada. Consider what it might look like where you live.
It seems preposterous that an issue so prevalent, with numbers this high, is not commonly spoken of, yet that is the case. Mental illness is still the great 'secret' in society, and yet our mental wellbeing is critical to our success on this planet.
Like with many vexing problems, the place for change to start is by listening.
Mind over matter: Dr. Paul Antrobus
Last year one of the most incredible human beings I've ever met — or will ever meet — passed away. Dr. Paul Antrobus was the man who introduced me to psychology and the wisest person I've ever known. Paul was not only among the greatest psychologists who've ever lived (I say with no exaggeration), by means of his depth of knowledge of the field and his ability to practice it across cultures, but he was also someone who could embody what mental health was all about.
In 2005 Paul fell off the roof of his cottage and was left a paraplegic, requiring ventilation to breathe. For nearly anyone this would have been devastating to their very being, yet Paul managed to retain his humour, compassion and intellect, as well as his sharp wit and engagement, and put them on display soon after his accident. He demonstrated to me the power of the mind and consciousness over the body, both in the classroom and, after the accident, in his wheelchair.
Paul lived a good life, by design. He surrounded himself with family and friends, built a career where he was challenged and stimulated and provided enough basics for life, and gave back to his community and to hundreds of students whom he mentored and taught. Much of this was threatened with the accident, yet he continued on, illustrating how much potential we have for healing. He learned by listening to his life what he needed and when he needed it, tried things out, evaluated, tinkered and persisted. In essence: he was a designer.
Paul would also be the first to say that healing is a product of many things — biology (like genes), personality, family upbringing, access to resources (human, financial, spiritual, intellectual), and community systems of support. He made the most of all of these and, partly because of his access to resources as part of being a professor of psychology, was able to cultivate positive and strong mental health while helping others do the same. Although he might not have used the term ‘designer’, that’s exactly what he was. One of the reasons was that he discovered how to listen to his life and that of others.
And because he was able to listen to others he recognized that nearly everyone had the potential for great health, but that such potential was always couched within systems that worked for or against people. Of all of the things that contribute to healing, a healing community had the potential to allow people to overcome nearly any problem associated with the other factors. Yet, it is the community — and their attitudes toward health (and mental health in particular) — that requires the greatest amount of change.
That’s why talking and listening is so important. It creates community.
Listening to your life
Paul wrote a book and taught a course on listening to one's life. Part of that approach is being able to share what your life is teaching you and to listen to what your body and the world are telling you. For something like Bell Let's Talk Day, a space is created to share — Tweet, text, post — stories of suffering, hope, recovery, support, love and questioning about mental health without fear. It's a single day and part of a corporate-led campaign, but the size and scope of it make sharing far safer and more 'normal' on this day than on almost any other day I know of.
A couple of years ago a colleague disclosed to the world that she had struggled with depression via Twitter on Bell Let’s Talk Day. She was so taken by the chance to share something that, on any other day, would seem to be ‘oversharing’ or inappropriate or worse, that she opened up and, thankfully, many others listened.
Let's Talk Day is about designing the conversation around mental health by creating the space for it to take place and allowing ideas and issues to emerge. These are the kinds of emergent conditions that systems change designers seek to create. If you want to see it in action, follow #BellLetsTalk online, or find your own space wherever you are to talk, to listen, and to design for one of the greatest social challenges of our time.
This post is not about innovation, but rather the very foundation on which innovation and discovery rest: our mental health and wellbeing. For without those, innovation is nothing.
Today, listen to your life and that of others and consider what design considerations are necessary to promote positive mental health and the creative conditions to excel and innovate.
As for some tips in speaking out and listening in, consider these five things to promote mental health where you are today:
- Language matters – pay attention to the words you use about mental illness.
- Educate yourself – learn, know and talk more, understand the signs of distress and mental illness.
- Be kind – small acts of kindness speak a lot.
- Listen and ask – sometimes it’s best to just listen.
- Talk about it – start a dialogue, break the silence
Thank you for listening.
** I have no affiliation with Bell, nor any close friends or family who work for Bell (although they are my mobile phone provider, if that counts as a conflict of interest).
When we seek change, the temptation is to look for 'the key' component of a problem or situation that, if changed, is expected to lead to profound transformation. Too frequently these types of solutions fail not because the change to the component is poor, but because the thinking is not aligned to the system that contributes to the problem in the first place; changing thinking, not the designed solution, is the actual key.
If you're one of the millions of people who made a New Year's resolution, there is a very good chance that your resolve has already wavered, if not been completely abandoned. Research shows that New Year's resolutions simply don't hold up. This is not because of lack of will, effort or thought, but because we often confuse changes in a part of the system (e.g., exercise and better diet) with changes in the system itself (overall better health and weight loss).
Travel might be the ultimate example of systems and change. For the scenario pictured above, having a better automobile does nothing to help navigate the streetscape. No amount of fuel efficiency, top speed, safety rating or performance tires will make an ounce of difference in traversing this space. The reason is that the transportation system is broken, not the units within it. Indeed, cars, bikes, rickshaws and feet all perform perfectly well as designed, yet are rendered useless in a context that was meant to facilitate, not hinder, their use.
Collective impact, systems change?
The model of collective impact recognizes the fallacy of assuming that organizations seeking transformative social change will achieve it on their own, independently, through wise thought and action. Collective impact is a model that has been widely supported by organizations such as Tamarack as a means of building capacity for systems change, not just change in the system.
The concept of collective impact was first popularized by John Kania and Mark Kramer in an article in the Stanford Social Innovation Review. Collective impact is a specific set of strategies that align around the following five conditions (with brief summaries in parentheses):
- Common agenda (are organizations striving for the same things?)
- Shared measurement systems (are partners measuring the same things in the same ways to enable comparisons and combine data?)
- Mutually reinforcing activities (are initiatives building on one another, syncing up, and coherent?)
- Continuous communication (are partners ‘in the know’ about what is happening across the system as activities unfold? )
- Backbone support organizations (is there an organization or more that provides coordinating support and infrastructure to maintain the whole enterprise?)
The concept of coordinated action toward a common goal supported by shared means of assessment and feedback and ongoing communication is an enormous step forward in organizing actors involved in social change initiatives.
What is often missing from the discussion of collective impact is systems thinking. That is, explicit discussion of the way systems operate and not just discussion of the system itself that is to be changed. To be clear, there are many ways of doing collective impact and I mention Tamarack because they are among the few organizations that bring systems thinking into their work on systems change and collective impact. But it’s important to note that this is an exception, not the rule when reviewing what’s out there on collective impact. Many organizations do not (or may not) realize that thinking about systems change is not the same as systems thinking.
It is quite possible that collective impact could produce a larger-scale version of the flaws we see in initiatives aimed at changing components of the system if systems thinking isn’t considered integral to how it’s implemented. No amount of communication or shared measurement will help if we don’t measure the right things.
Systems thinking about collective impact
Social change is not a matter of taking what works at one scale (e.g., a person, family or team) and simply doing a lot more of it in more places. There are corollaries to be sure, but it’s not a linear pathway. Just as scaling up a challenge and the response to it can amplify benefits, it can also amplify harmful (or limiting) effects if the problem is not well-defined.
With that, let me pose some questions and challenges, rooted in systems thinking, for those engaging in collective impact to help advance our shared understanding:
- What is the problem to be solved (and is it the real problem)? Have alternative viewpoints from a diversity of actors throughout the system been considered in light of their position within the system and the values, goals and aspirations of those seeking systems change?
- One of the ways this diversity of perspectives is gained is through systems mapping. Systems mapping can be done through many different methods, each producing a different view of the dynamics, structures and relationships within the system. What they share is that they visualize these qualities in a manner that makes them accessible to (almost) everyone. Mapping allows participants in the process to ask questions like: “why is [x] located so close to [y]?” or “where is [z] in all of this?” Such questions spark the kind of discussion that allows assumptions about the dynamics of the system itself to surface.
- An important follow-up to this is tracking these issues and framing them as evaluation questions. This grounds some of the metrics and measures in the system itself, not just the activities that the participants in collective impact initiatives seek to perform. It can also recognize the limits of the organizations at the table and either better account for them or provide guidance on how to overcome them (e.g., recruit more or different partners).
- Systems maps are not only useful at the beginning; they can be an evaluative tool in themselves. Maps developed at the start of an initiative and at later time points can enable partners to see what has changed, and potentially how, by examining shifts over time in the structures, relationships and inclusion or exclusion of certain parts of the system. This may not allow for explicit causal attribution, but it can help partners understand and document what changed and initiate collective sensemaking about how that might have happened.
This is just a sample.
If we consider the traffic problem posed at the start of this post, one might find that the system problem isn’t even one related to the street, but to the larger community. One may find that the distances or locations of places to work, worship, shop and play are misaligned, that there are times of day when certain activities bring people into the street, or perhaps that it’s related to temperature (too hot, too cold) and the absence of climate control systems that work. More importantly, systems thinking may enable us to account for all of these at the same time, so that we focus not on one or two problems but on the system as a whole, avoiding what is known as “a fix that fails”.