Decoding the change genome


Would we invest in something if we had little hard data to suggest what we could expect to gain from that investment? This is often the case with social programs, yet it’s a domain that has resisted the kind of data-driven approaches to investment we’ve seen in other sectors. One theory holds that we can approach change the same way we decode the genome. But is that a good idea?

Jason Saul is a maverick in social impact work and dresses the part: he’s wearing a suit. That’s not typically the uniform of those working in the social sector railing against the system, but it’s one of the many things that gets people talking about what he and his colleagues at Mission Measurement are trying to do. Their mission is clear: bring to social impact the same evidence-based analysis of the factors that contribute to real results that we would apply to nearly any other area of investment.

The way to achieve this mission is to take the thinking behind the Music Genome Project, the algorithms that power the music service Pandora, and apply it to social impact. This is a big task, accomplished by coding the known literature on social impact from across a vast spectrum of research disciplines, methods, theories, and modeling techniques. A short video from Mission Measurement nicely outlines the thinking behind this way of looking at evaluation, measurement, and social impact.
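To make the analogy concrete, here is a minimal sketch in Python of what coding studies into a shared ‘genome’ of program attributes might look like. This is my own illustration, not Mission Measurement’s actual schema; the field names and values are hypothetical.

```python
# A hypothetical coding scheme: each study is tagged with program
# "genes" (components) and an observed outcome, much as Pandora's
# Music Genome Project tags each song with musical attributes.
from dataclasses import dataclass

@dataclass
class CodedStudy:
    study_id: str
    population: str                      # e.g., "youth", "new parents"
    outcome_domain: str                  # e.g., "smoking cessation"
    components: frozenset = frozenset()  # the program's "genes"
    effect_found: bool = False           # did the study report a positive effect?

studies = [
    CodedStudy("s001", "youth", "smoking cessation",
               frozenset({"peer_mentoring", "incentives"}), True),
    CodedStudy("s002", "youth", "smoking cessation",
               frozenset({"mass_media", "incentives"}), False),
]
```

Once studies are coded into a structure like this, they can be filtered, compared, and aggregated across a literature that would otherwise be too heterogeneous to line up.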

Saul presented his vision for measurement and evaluation to a rapt audience at the MaRS Discovery District in Toronto on April 11th, as part of their Global Leaders series en route to the Skoll World Forum. What follows is a synopsis of that presentation and its implications for social impact measurement.

(Re) Producing change

Saul began his presentation by pointing to an uncomfortable truth in social impact: we spread money around with good intentions and little insight into actual change. He claims (no reference provided) that 2,000 studies on behaviour change are published every day, yet there remains an absence of common metrics and measures within evaluation to detect change. One reason is that social scientists, program leaders, and community advocates resist standardization, claiming that context matters too much to allow aggregation.

Saul isn’t denying that there is truth to the importance of context, but argues that it’s often used as an unreasonable barrier to leading evaluations with evidence. On this point, he’s right. For example, the data from psychology alone shows a poor track record of reproducibility, and thus offers much less to social change initiatives than is needed. As a professional evaluator and social scientist, I’m not often keen on being told how to do what I do (though sometimes I benefit from it). That resistance can be a barrier, but it also points to a deeper problem: if the evidence base replicates so poorly, is following it a good idea in the first place?

Are we doing things righter than we think or wronger than we know?

To this end, Saul is advocating a meta-evaluative perspective: linking together studies from across the field by breaking programs down into components, something akin to a genome. By looking at combinations of components, the thinking goes, much as we do in genetics, we can start to see how particular behaviours and related outcomes get expressed. If we knew these things in advance, we could invest our energy and funds in programs that are much more likely to succeed. We could also rapidly scale and replicate successful programs by understanding the features that contribute to their fundamental design for change.
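As a rough illustration of that meta-evaluative logic, and building on the hypothetical CodedStudy records sketched earlier, one could tally how often each combination of components co-occurs with a positive outcome and rank the combinations by their observed success rate. This is my own simplification of the idea, not Saul’s actual method.

```python
from collections import defaultdict
from itertools import combinations

def success_rates(studies, combo_size=2):
    """Estimate how often each combination of program components
    co-occurs with a reported positive effect."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for s in studies:
        for combo in combinations(sorted(s.components), combo_size):
            totals[combo] += 1
            hits[combo] += int(s.effect_found)
    return {combo: hits[combo] / totals[combo] for combo in totals}

# Combinations with high rates (and enough studies behind them) would,
# on this logic, be stronger bets for investment and replication.
print(success_rates(studies))
```

In practice one would weight studies by quality and sample size rather than counting them equally, which is exactly where the hard methodological questions begin.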

The epigenetic nature of change

Genetics is a complex thing. Even where reasonably strong data connect certain genetic traits to biological expression, there are few examples of genes as ‘destiny’ in the way they are too often portrayed. In other words, it almost always depends on a number of things. In recent years the concept of epigenetics has risen to prominence, offering explanations of how genes get expressed that have as much to do with the environmental conditions present as with the gene combinations themselves. McGill scientist Moshe Szyf and his colleagues pioneered research into how genes are suppressed, expressed, and transformed through engagement with the natural world, helping to create the field of epigenetics. Where we once thought genes were prescriptions for certain outcomes, we now know that it’s not that simple.

In approaching change as a genome, there is a risk that the metaphor leads to false conclusions about the complexity of change. This is not to dismiss the valid arguments being made about poor data standardization, sharing, and research replication, but it calls into question how far the genome model can go with respect to social programs before breaking down. For evaluators looking at social impact, the opportunity is that we can systematically look at the factors that consistently produce change, provided we have appropriate comparisons. (That is a big if.)

Saul outlined many of the challenges that beset evaluation of social impact research, including the ‘file-drawer effect’ and related publication bias, differences in measurement tools, and a lack of (documented) program fidelity. Responding to Saul’s presentation, Cathy Taylor from the Ontario Non-Profit Network raised the challenge that arises when much of what is known about a program is not documented, but embodied in program staff and shared through exchanges. Tacit knowledge and practice-based evidence bedevil efforts to compare programs. Many social programs are rich in context (people, places, things, interactions) that remains uncaptured in any systematic way, and it is precisely that kind of data that is needed if we wish to understand the epigenetic nature of change.
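To see why the file-drawer effect matters for any attempt to pool a literature, here is a small simulation of my own, with made-up numbers: when null results stay unpublished, the average effect estimated from the published record is inflated well above the true effect.

```python
import random

random.seed(1)
TRUE_EFFECT = 0.10  # the real (small) average effect of a program

# Simulate 2,000 studies, each estimating the effect with sampling noise.
estimates = [random.gauss(TRUE_EFFECT, 0.30) for _ in range(2000)]

# File-drawer effect: suppose only clearly "positive" results
# (estimates above 0.2) ever reach publication.
published = [e for e in estimates if e > 0.2]

print(sum(estimates) / len(estimates))   # ~0.10: close to the truth
print(sum(published) / len(published))   # ~0.40: inflated by the file drawer
```

Any genome-style coding of the published literature inherits this bias unless unpublished and null results are somehow brought into the dataset.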

Unlike Moshe Szyf and his fellow scientists working in labs, we can’t isolate, observe and track everything our participants do in the world in the service of – or support to – their programs, because they aren’t rats in a cage.

Systems thinking about change

One of the other criticisms of the model Saul and his colleagues have developed is that it is rather reductionist. While his presentation gives ample consideration to contextual factors, the social impact genome is fundamentally based on a reductionist approach to understanding change, an approach that many working in social innovation and environmental science have derided as outdated and inappropriate for understanding how change happens in complex social systems.

What is needed is synthesis and adaptation: a meta-model process, not a singular one.

Saul’s approach is not in opposition to this, but it gets a little foggy how the recombination of parts into wholes is realized. This is where the practical implications of the genome model start to break down. That isn’t a reason to give up on it, however, but an invitation to ask more questions and to start testing the model more fully. It’s also a call for systems scientists to get involved, just as they did with the Human Genome Project, which has deepened our understanding of the influences on our genes and underscored the importance of the environment in creating healthy systems for humans and the living world.

At present, the genomic approach to change is largely theoretical, backed by ongoing development and experiments but little outcome data. There is great promise that bigger and better data, better coding, and a systemic approach to social investment will lead to better outcomes, but there is little actual evidence yet on whether this approach works, for whom, and under what conditions. That is to come. In the meantime, we are left with questions and opportunities.

Among the most salient of those opportunities is to use this work to inspire bigger questions about the comparability and coordination of data. Evaluations as ‘one-off’ bespoke products are not efficient, unless they are the only thing we have available. Wise, responsible evaluators know when to borrow or adapt from others and when to create something unique. Whatever designs and tools we use, however, this calls for evaluators to share what they learn and for programs to build evaluative thinking and reflective capacity within their organizations.

The future of evaluation is going to include this kind of thinking and modeling. Evaluators, social change leaders, grant makers, and the public alike ignore it at their peril, which includes losing opportunities to make evaluation and social impact development more accountable, more dynamic, and more impactful.

Photo credit (main): Genome by Quinn Dombrowski used under Creative Commons License via Flickr. Thanks for sharing Quinn!

About the author: Cameron Norman is the Principal of Cense Research + Design and assists organizations and networks in supporting learning and innovation in human services through design, program evaluation, behavioural science and system thinking. He is based in Toronto, Canada.

2 thoughts on “Decoding the change genome”

  1. Thanks so much for this summary, Cameron. I was following along on Twitter yesterday, and felt my back going up for some of the same reasons you articulate above. I am struggling a bit to imagine how this might work without an example. Not sure if you have one or if they provided one?

    1. Thanks for the comment on the post. There weren’t any specific examples given, which is one of the issues. This is all currently at a “good idea” / “wouldn’t it be amazing if” stage and is still a proof of concept to me. It’s an intriguing idea; I just don’t think it will deliver what some think it will. Nonetheless, there is some need for fresh thinking in evaluation and the social sector, and there are bits of this approach that I think have value, particularly the coding of the literature. Mind you, the problem with the coding is finding the “like vs. like” in the data, given that some of the measures and metrics are theory-driven, some data-driven, and some neither, all done to different degrees of fidelity. It will be interesting to see what comes from the data generated by this once it is really launched.
