The more we get together


As we forge ever-greater connections online, to each other and to the world of ideas, the thinking was that we would be far better off: more tolerant, educated and wise. Yet there is much evidence to suggest this isn't the case. What does it mean to come together, and how can we do it in a way that brings us closer rather than driving us further apart?

The more we get together, the happier we’ll be – lyric from popular song for children

Like many, I've grown up thinking this very thing and, for the most part, my experience has shown it to be true. Upon reflection, however, I'm realizing that most of this experience relates to two things that could reveal a potential flaw in my thinking: 1) I'm thinking of face-to-face encounters more than any other type, and 2) most of these relationships were formed without the aid of the Internet, or before I came to use it.

Face-to-face interactions of any real quality are limited in nature. We only have so many hours in a day and, unless our job is extremely social or we live in a highly communal household, we're unlikely to have interactions with more than a few dozen people per day that extend beyond a simple "hello". This was explored in greater detail by anthropologist Robin Dunbar, who determined that our social networks are usually capped at 100 to 250 individuals. Dunbar's number, the commonly cited mean size of these networks, is 150.

Why does this matter? When we engage others online, the number of interactions and ideas we encounter can be far larger, and the relationships are certainly managed differently. We see comments on discussion boards, social media posts, videos and pictures shared online, and are exposed to media messages of all types through myriad news sources (official, professional and otherwise). Ethan Zuckerman, whom I've written about before, has written extensively about the paradox that, despite such incredible access to the world's diversity, we often find ourselves increasingly insular in our communication patterns, choosing like-minded opinions over alternative ones.

Looking ahead by looking back at Marshall McLuhan

Journalist Nicholas Carr, who's written extensively on the social context of technology, recently posted a 1977 interview with Marshall McLuhan in which McLuhan speaks about where media was going and his idea of "the global village". Carr's piece, "the global village of violence", was enlightening to say the least. In it, Carr points to the violence we are committing in this global village and how it doesn't square with what many thought were the logical outcomes of our connecting, and he does so by pointing back to McLuhan's own thoughts.

McLuhan's work is often a complicated mess, partly because a large, diverse and scattered academic culture has developed around it, so the original points he raised can get lost in what came afterwards. The cautions he had about hyper-connection through media are one of those things. McLuhan didn't consider the global village an inherently good thing; indeed, he spoke about how technology at first serves and then partly controls us as it becomes a normalized part of everyday life: the extension becomes a part of us.

As is often the case with McLuhan, looking back on what he said, when he said it and what it might mean for the present day helps us do just what his seminal work sought to help us do: understand media and society. Citing McLuhan, Nicholas Carr remarked that:

Instantaneous, universal communication is at least as likely to breed nationalism, xenophobia, and cultism as it is to breed harmony and fellow-feeling, McLuhan argues. As media dissolve individual identity, people rush to join “little groups” as a way to reestablish a sense of themselves, and they’ll go to extremes to defend their group identity, sometimes twisting the medium to their ends.

Electronic media, physical realities

These ‘little groups’ are not always so little, and they certainly aren’t weak. Donald Trump‘s ability to rally a small but not insignificant population in the United States, despite his litany of abusive, sexist, inflammatory, racist, discriminatory and outwardly false statements, has been constantly underestimated. Last week’s horrible mass shooting in Orlando brought a confluence of groups into the spotlight, from anti-Muslim, anti-gay and gay rights groups to pro-gun advocates, along with Republican and Democratic supporters of different positions within this matter, each arguing with intensity and too often speaking past each other. This week we saw British MP Jo Cox murdered by someone who saw her as a traitor to Britain, presumably on account of her position on the pending ‘Brexit’ vote (although we don’t yet know the killer’s motivation).


There are many reasons for these events, and only some will ever truly be known, but each points to an inability to live with, understand and tolerate others’ viewpoints, and to extreme reactions against them. The vitriol of debate in the public sphere is being blamed for some of these reactions, galvanizing some people to do horrible things. Could it be that our diversity, the abundance of interactions we have and the opportunities to engage or disengage selectively are driving us further apart rather than bringing us together?

If this hypothesis holds, what then? Should we start walling ourselves off? No. But nor should we expect to bring everyone together under one tent and have it go well without very deliberate, persistent cultivation and management of relationships, collectively. Much like a gardener with her garden, there’s a need to keep certain things growing, certain things mixing, certain things out and others in, and these elements might differ with the time of year, the season and the plants being tended. Just as there is no one garden style that fits everywhere, there is no one way to do ‘culture’, but there are key principles, and a commitment to ongoing attention and care, that feed healthy cultures (ones that include diversity).

As odd as this may sound, perhaps we need to consider doing the kind of civic development work that can yield healthy communities online as well as off. We certainly need better research to help us understand what it means to engage in different spaces, what types of diversity work well and under what conditions, and what the ‘simple rules’ might be for bringing us closer together so, like the children’s song above, we can be happier rather than what we’ve been becoming.

Complexity isn’t going away; it is only increasing. Unless we are actively involved in cultivating and nurturing the emergent properties that are positive and healthy, doing it by design, and viewing our overlapping cultures as complex adaptive systems (and creating the policies and programs that fit those systems), we put ourselves at greater risk of letting the things emerge that drive us further apart rather than bring us together.

 

Photo credit: Connections by deargdoom57 used under Creative Commons License via Flickr. Thanks deargdoom57 for sharing your work!

 

Authentic baloney and other sincere problems


What does it mean to be authentic in an age of design and complex social systems? It’s not as simple as you think and, as two high-profile psychologists point out, not something that’s easily agreed upon, either. 

Over the past week, two high-profile authors and researchers, Adam Grant and Brené Brown, have been engaged in a “debate” (or public disagreement? argument? It’s hard to tell) over the concept of authenticity and the role it plays in life: professional, personal and otherwise.

The debate was started by an op-ed in the New York Times in which Grant begins by referencing a description of authenticity used by Brown in her work:

We are in the Age of Authenticity, where “be yourself” is the defining advice in life, love and career. Authenticity means erasing the gap between what you firmly believe inside and what you reveal to the outside world. As Brené Brown, a research professor at the University of Houston, defines it, authenticity is “the choice to let our true selves be seen.”

Brown, reacting to the piece on LinkedIn, corrects Grant by offering the fuller definition she uses and criticizing his narrowly framed perspective on what authenticity is:

In my research I found that the core of authenticity is the courage to be imperfect, vulnerable, and to set boundaries.

For Grant, authenticity is about dropping the filters and saying what’s on your mind all the time, while for Brown it’s about embracing vulnerability. The two are not the same thing, but nor are they opposites or incompatible with authenticity; rather, they point to the problems of staking out firm positions in complex systems.

A matter of boundaries

Brown’s definition adds something Grant’s interpretation leaves out: boundaries. How we draw the boundaries around what we’re doing, how we do it and to what effect determines the appropriateness of filters, expression and vulnerability. It’s also about context. Grant’s argument tends to be a one-size-fits-all one, with blanket statements about what he believes others want and need to hear. In his Times article, he ends with this pronouncement for readers:

Next time people say, “just be yourself,” stop them in their tracks. No one wants to hear everything that’s in your head. They just want you to live up to what comes out of your mouth.

That Grant was so quick to equate authenticity with unfiltered thinking is somewhat surprising given his background in psychology. It shows a remarkably simplistic view of human psychology that isn’t befitting his other work. Yet he managed not only to publish the piece in the Times, but to double down on the argument in a follow-up post on LinkedIn. In that piece, he again equates authenticity with an absolute commitment to always saying what’s on your mind, drawing on research on self-monitoring and expressiveness.

Here are some of the items—you can answer them true or false:

  • My behavior is usually an expression of my true inner feelings, attitudes, and beliefs.
  • I would not change my opinions (or the way I do things) in order to please someone else or win their favor.
  • I’m always the person I appear to be.

People who answer true are perceived as highly authentic—they know and express their genuine selves. And a rigorous analysis of all 136 studies shows that these authentic people receive significantly lower performance evaluations and are significantly less likely to get promoted into leadership roles.

In fairness, Brown’s work can easily get muddled when it comes to boundaries. While she has responded very clearly to Grant’s comments, there has been a lot of slippage around boundaries in her own work. Anyone who has read her books or seen her talks knows that Brown models the embrace of vulnerability by drawing on her own personal struggles with being authentic and valuing herself, illustrating points from her research with examples from her own life. Yet I recall reading her books Daring Greatly and The Gifts of Imperfection and thinking that the stories often stumbled from being instructive, supportive and healthy examples of vulnerability to feeling like a platform for her own self-development rather than something for me to learn from.

For me, this was less about any one particular story of vulnerability than about the cumulative effect of these stories as told through a book. It was the volume, not the content, of the stories that shifted my perception. By the time I finished, I felt I’d been witness to Brown’s self-therapy, which weakened my perception of her as authentic.

This cumulative effect is partly what Grant is referring to when citing work on self-monitoring. He’s not commenting on moments of vulnerability; rather, on the creation of a presentation of personhood that lacks a sense of boundaries.

The answer to authenticity might lie in that complex middle space. If Brown is open to and eager to share her vulnerabilities, it’s important that I, as a listener, be willing (and able and prepared) to welcome that discussion. But what if I am not? In Grant’s demarcation of boundaries that might not matter, but then we end up with a set of rules based on his (and many others’) view of authenticity, which can devolve into something Brown connects to a traditional, stereotyped ‘male’ expression of authenticity:

Many of the behaviors that Grant associates with authenticity don’t reflect the courage to be imperfect, vulnerable, or to set boundaries. They actually reflect crude, negative gender stereotypes. Male authenticity is associated with being hurtful, arrogant, manipulative, overbearing, and, in plain speak, an asshole. (italics added)

We must not stop listening, but we also must be cautious in how much (and when and in what context) we share and tell. Too little and we simply replicate the power positions of the past and surrender our true selves to social norms. Too much or done poorly and we might get a little closer to where Grant is.

Authentic baloney

What is authentic baloney (or Bologna sausage, to give it its original name)? Baloney is indeed a thing, but it’s also a fake, synthetic meat product at the same time. It’s a prepared meat designed to combine various ingredients in a particular way that doesn’t really fit any other type of sausage, yet is still ‘sausage-like’. It’s difficult to describe using the language of sausage, yet it has no other peer to compare to (except Spam, which is a similarly strange version of something familiar).

It is, in a sense, an authentic artificial product.

These two things, authenticity and artificiality, can coexist. Herb Simon wrote about design being partly the science of the artificial, stating in his book of the same name:

Engineering, medicine, business, architecture and painting are concerned not with the necessary but with the contingent – not with how things are but with how they might be – in short, with design.

Design is about what could be. Authenticity is about what is and what could be, speaking to intention as well as to reflection on what one believes and wishes to project to others. Baloney is just that: the design of a meat product that reflects what a meat product might be when one combines some of the less sought-after cuts with spices, herbs and fats. It’s not real meat, but it’s not fake either.

What is our authentic self?

Our authentic self is always changing. If one believes we come into the world and grow into a form, then who we are as a child largely determines what comes afterward.

It’s interesting that this ding-dong on authenticity between Brown and Grant comes just as my colleague Mark Kuznicki from The Moment published a long, extensive and revealing piece on the process his firm uses to recalibrate and strategically plan its future. Taking Grant’s view, this level of openness in discussing challenges and opportunities could easily be construed as over-sharing. Brown might argue that this kind of public self-reflection reveals the organization’s true self. I think it’s both and neither.

Authenticity is very much like baloney, which takes many forms and carries different cultural interpretations, expressions and levels of acceptance and quality. What makes for good baloney depends on a great many factors, including who is consuming it. And just as with baloney, what gets lost in these arguments is position within the system.

Systems perspectives are partly about understanding where one is positioned in them, which determines what is seen, how something is perceived, what kind of information is available and, most importantly, the meaning that is attached to that information in order to assess what to do and what impact it might have.

Part of that perspective is time.

A developmental perspective

My authentic self is not the same as it once was. Part of that is because at various stages of life I was more (early childhood) or less (teen and young adult years) comfortable expressing that authenticity. Interestingly, though, as I got older, what was truly authentic became more complicated and harder to assess. That’s because I’ve become far more complicated: with experience, knowledge and the accumulation of both, I’ve transformed that original person into someone different (and also very similar).

To provoke developmental thinking I often ask students or audiences: is a 40-year-old an eight-times-better 5-year-old? Is a person who at five said “I want to be a princess / astronaut / firefighter” and ends up a senior policy advisor for the government, an accountant, a social worker or a designer just someone who failed at their goals?

Are these even relevant questions? The answer is no. I once wanted to be a firefighter, but now I can’t imagine doing that job. Why did that change? Because I developed into something different. My authentic self sought different challenges and opportunities and required other things to nurture itself. I still love to draw, doodle and play sports, just as I did when I was five. That part of me, too, is authentic.

As authenticity becomes a more fashionable word, thrown out for use in many contexts, it is worth considering what it is, what it means, and how we really nurture it in our work. As I think both Brené Brown and Adam Grant would agree: authenticity is too important to fake, lest it become baloney.


Photo credit: Untitled by themostinept used under Creative Commons License via Flickr.


Reflections said, not done


Reflective practice is the cornerstone of developmental evaluation and organizational learning and yet is one of the least discussed (and poorly supported) aspects of these processes. It’s time to reflect a little on reflection itself. 

The term reflective practice was popularized by Donald Schön in his book The Reflective Practitioner, although the concept of reflecting while doing was discussed by Aristotle and serves as the foundation for what we now call praxis. What made reflective practice as a formal term different was that it spoke to a deliberate process of reflection designed to meet specific developmental goals and build capacities. While many professionals had been doing this already, Schön created a framework for understanding how it could be done in professional settings, and why it mattered, as a way of enhancing learning and improving innovation potential.

From individual learners to learning organizations

As the book title suggests, the focus of Schön’s work was on the practitioner her- or himself. By cultivating a focus, a mindset and a skill set for looking at practice-in-context, Schön (and those who have built on his work) suggest that professionals can enhance their capacity to perform and to learn as they go, through a series of habits and regular practices of critically inquiring about their work as they work.

This approach has many similarities to mindfulness in action (or organizational mindfulness), contemplative inquiry and engaged scholarship, among others. But, aside from organizational mindfulness, these approaches are designed principally to support individuals learning about and reflecting on their work.

There’s little question that paying attention to and reflecting on what is being done has value for someone seeking to improve the quality of their work and its potential impact, but it’s not enough, at least in practice (even if it is in theory). The evidence can be found in the astonishing absence of examples of sustained change initiatives supported by reflective practice and, more particularly, developmental evaluation, an approach for bringing reflection to bear on the way we evolve programs over time. This is not a criticism of reflective practice or developmental evaluation per se, but of the problems many have in implementing them in a sustained manner. From professional experience, this comes down largely to what is required to actually do reflective practice at all.

For developmental evaluation it means connecting what it can do to what people actually will do.

Same theories, different practices

The flaw in all of this is that the implementation of developmental evaluation is often predicated on implicit assumptions about learning: how it’s done, who’s responsible for it, and what it’s intended to achieve. The founding works on developmental evaluation (DE) by Patton and others point to practices and questions that can support DE work.

While enormously useful, they make the (reasonable) assumption that organizations are in a position to adopt them. What is worth considering for any organization looking to build DE into its work is: are we really ready to reflect in action? Do we do it now? And if we don’t, what makes us think we’ll do it in the future?

In my practice, I continually meet organizations that want to use DE, be innovative, become adaptive and learn more deeply from what they do, yet when we speak about what they currently do to support this in everyday practice, few examples are presented. The reason largely comes down to time, and to the priorities and organization of our practice in relation to time. Time, and the felt sense of its scarcity for many of us, is one of the substantive limits: reflective practice requires time.

The other is space. Are there accessible places for reflection on the issues that matter? These twin constraints have been touched on in other posts, but they speak to the limits of DE in effecting change without the ability to build reflection into practice. The theory of DE is sound, but the practice of it is tied to the ability to use time and space to support the reflection and sensemaking needed to make it work.

The architecture of reflection

If we are to derive the benefits of DE and innovate more fully, reflective practice is critical, for without one we can’t have the other. This means designing reflective space and time into our organizations ahead of undertaking a developmental evaluation. This invites questions about where and how we work in space (physical and virtual) and how we spend our time.

To architect reflection into our practice, consider some questions or areas of focus:

  • Are there spaces for quiet contemplation free of stimulation available to you? This might mean a screen-free environment, a quiet space and one that is away from traffic.
  • Is there organizational support for ‘unplugging’ in daily practice? This would mean turning off email, phones and other electronic devices’ notifications to support focused attention on something. And, within that space, are there encouragements to use that quiet time to focus on looking at and thinking about evaluation data and reflecting on it?
  • Are there spaces and times for these practices to be shared and done collectively or in small groups?
  • If we are not granting ourselves time to do this, what are we spending the time doing and does it add more value than what we can gain from learning?
  • Sometimes off-site trips and scheduled days away from an office are helpful by giving people other spaces to reflect and work.
  • Can you (will you?) structurally build committed times for reflecting-in-action into scheduled work times and flows, and ensure this is done at regular intervals, not sporadic ones?
  • If our current spaces are insufficient to support reflection, are we prepared to redesign them or even move?

These are starting questions, and hard ones to ask, but they can mean the difference between reflection in theory and reflection in practice, which is the difference between innovating, adapting and thriving in practice, not just in theory or aspiration.


Innovation Framing


Innovation is easier to say than to do. One reason is that a new idea needs to fit within a mindset, or frame, that is accustomed to seeing the way things are, not what they could be, and it’s in changing this frame that innovators may find their greatest obstacles and opportunities.

Innovation, in its creation and distribution, is a considerable challenge to take up when the world faces so many problems related to the way we do things. The need to change what we do and how we live was brought into stark view this week as reports came out suggesting that April was the hottest month on record, the third month in a row that a record has been beaten by a large margin.

If we are to mitigate or mediate the effects of climate change, we will need to innovate on matters of technology, social and economic policy, bioscience, education and conservation, and to do it fast and on a planetary scale we’ve never seen before.

In the case of climate change, we are seeing the world, and the causes and consequences of the problem, through a frame. A frame is defined as:

frame |frām| noun

1) a rigid structure that surrounds or encloses something such as a door or window, 2) [ usu. in sing. ] a basic structure that underlies or supports a system, concept, or text: the establishment of conditions provides a frame for interpretation.

When discussing innovation we often draw on both of these definitions: a frame is both a rigid, enclosing structure and something that supports our understanding of a system. Rigidity can imply strength, but what is rigid also resists change.

Missing the boat for the sea

If we continually look at the sea, we may assume it’s always the same and fail to notice the boat that can take us across and through it. In a recent interview with The Atlantic, journalist Tom Vanderbilt discusses how we can miss new opportunities because we feel we already know what we like, much like the kid who doesn’t want to eat a vegetable she’s never even tasted. Vanderbilt hits on something critical: the absence of language to convey what the ‘new’ is:

I think often we really are lacking the language, and the ways to frame it. If you look at films like Blade Runner or The Big Lebowski, when these films came out they were box office disasters. I think part of that was a categorization thing—not knowing how to think about it in the right way. Blade Runner didn’t really match up with the existing tropes of science fiction, Big Lebowski was just kind of strange.

Today, both Blade Runner and The Big Lebowski are hailed as classics, but only after the fact. It’s much like the Apple Newton in the 1990s failing nearly two decades before the iPad arrived, even though it was a decent product.

Believing to see

A traditional evidence-based approach to change says that you must see it to believe it. In innovation, we often need to believe in order to see. This is particularly true in complex contexts, where the linkages between cause and effect are less obvious.

However, it’s about more than belief in evidence; it’s belief in possibility. It is for this reason that foresight can make such an important contribution to the innovation process. Strategic foresight provides an imaginative, yet data-supported, way of envisioning possible futures, outcomes and circumstances. It is a means of seeing future states as possibilities, which enables us to better ensure that we are ready to see the present when it comes.

This is part of the thinking behind training exercises, particularly obvious in sports. A team might imagine a number of scenarios which may not happen as outlined during a game, but because the team has imagined certain things to be possible, it has rehearsed or anticipated ways to deal with what comes up in reality. Imagining helps them believe something enough to see it when it comes.

Spending time envisioning possible futures, whether through a deliberative process like strategic foresight or simply by allowing yourself time to notice trends and possibilities and how they might connect, can be a means of imagining possibilities and preparing yourself to meet them (or create them) sometime down the road.

Doing so gives you the power to select which frame fits which picture.

 

For more information on strategic foresight check out the library section on this blog. If you need help doing it, contact Cense Research + Design.

Photo credit: Innovation by Boegh used under Creative Commons License.


Confusing change-making with actual change


Change-making is the process of transformation and not to be confused with the transformed outcome that results from such a process. We confuse the two at our peril.

“We are changing the world” is a rallying cry from many individuals and organizations working in social innovation and entrepreneurship, and it is both a truth and an untruth at the same time. Saying you’re changing the world is far easier than actually doing it. One is dramatic (the kind of change that makes for great reality TV, as we’ll discuss) and the other is rather dull, plodding and incremental. But it may be the latter that really wins the day.

Organizations like Ashoka (and others) promote themselves as change-maker organizations, authoring blog posts with titles like “everything you need to know about change-making”. That kind of language, while attractive and potentially inspiring to diverse audiences, points to a mindset that views social change in relatively simple, linear terms. This line of thinking suggests change is about having the right knowledge, the right plan and the ability to pull it together and execute.

This is a mindset that highlights great people and great acts supported by great plans and processes. I’m not here to dismiss the work that groups like Ashoka do, but to ask questions about whether the recipe approach is all that’s needed. Is it really that simple?

Lies like: “It’s calories in, calories out”

Too often social change is viewed with the same flawed perspective as weight loss. Just stop eating so much (and eat the right stuff), exercise, and you’ll be fine: calories in, calories out, as the saying goes. The reality is, it isn’t that simple.

A heartbreaking and enlightening piece in the New York Times profiled the lives and struggles of past winners of the reality show The Biggest Loser, in parallel with a new study released on this group of people (PDF), showing that all but one of the contestants regained weight after the show.

The original study, published in the journal Obesity, considers the role of metabolic adaptation, with the authors suggesting that a person’s metabolism responds proportionally to compensate for wide fluctuations in weight, pushing contestants back toward their original pre-show weight.

Consider that during the show these contestants were constantly monitored, given world-class nutritional and exercise supports, had tens of thousands of people cheering them on and also had a cash prize to vie for. This was as good as it was going to get for anyone wanting to lose weight shy of surgical options (which have their own problems).

Besides being disheartening to everyone struggling with obesity, the paper illuminates the inner workings of our body and reveals it to be a complex adaptive system rather than the simple one we commonly envision when embarking on a new diet or fitness regime. Might social change be the same?
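A few lines of code can show why this kind of proportional compensation defeats a simple "calories in, calories out" plan. To be clear, this is a toy sketch only: the `simulate` function, its `adaptation` parameter and every number below are invented for illustration and are not drawn from the Obesity study.

```python
# Toy model of compensatory dynamics (purely illustrative, not the
# study's model). Each week the body "burns" less energy in proportion
# to how far weight has fallen below its set point, so a fixed-intake
# diet yields diminishing returns. All units are arbitrary.

def simulate(weeks, weight, setpoint, intake,
             maintenance=100.0, adaptation=0.5, k=0.01):
    """Return the weekly weight trajectory under a crude compensation rule."""
    history = [weight]
    for _ in range(weeks):
        # metabolism slows in proportion to weight lost below the set point
        burn = maintenance - adaptation * (setpoint - weight)
        weight += k * (intake - burn)  # surplus/deficit moves weight
        history.append(weight)
    return history

# A year of strict dieting: weight falls, but ever more slowly as the
# compensating term shrinks the effective deficit.
dieting = simulate(weeks=52, weight=100.0, setpoint=100.0, intake=80.0)

# Back to "normal" intake afterward: the same compensation now runs a
# surplus at the reduced weight, pushing it back toward the set point.
regain = simulate(weeks=52, weight=dieting[-1], setpoint=100.0, intake=100.0)
```

The design point is the feedback term: remove `adaptation` and weight falls linearly forever, which is the simple-system picture; with it, the system resists displacement from its set point, which is roughly the "social metabolic adaptation" the post goes on to suggest change efforts run into.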

We can do more and we often do

I’m fond of saying that we often do less than we think and more than we know.

That means we tend to expect that our intentions and efforts to make change produce the results we seek, directly and because of our involvement. In short, we treat social change as a straightforward process. While that is sometimes true, rarely do programs aiming at social change come close to achieving their stated systems goals (“changing the world”) or anything near them.

This is likely the case for a number of reasons:

  • Funders often require clear goals and targets for programs in advance and fund based on promises to achieve these results;
  • These kinds of results are also the ones that are attractive to outside audiences such as donors, partners, academics, and the public at large (X problem solved! Y number of people served! Z thousand actions taken!), but may not fully articulate the depth and context through which such actions produce real change;
  • Promising results to stakeholders and funders suggests that a program is operating in a simple or complicated system, rather than a complex one (which is rarely, if ever, the case with social change);
  • Because program teams know these promised outcomes don’t fit their system, they cherry-pick the simplest measures that might be achievable, but these may also be the least meaningful in terms of social change;
  • Programs will often further choose to emphasize those areas within the complex system that have embedded ordered (or simple) systems in them to show effect, rather than look at the bigger aims.

The process of change that comes from healthy change-making can be transformative for the change-maker themselves, yet not yield much in the way of tangible outcomes related to the initial charge. The reasons likely have to do with the compensatory behaviours of the system — akin to social metabolic adaptation — subduing the efforts we make and the initial gains we might experience.

Yet, we do more at the same time. Danny Cahill, one of the contestants profiled in the New York Times story, spoke about how the lesson he learned from his post-show weight gain was that the original weight gain wasn’t his fault in the first place:

“That shame that was on my shoulders went off”

What he’s doing is adapting his plan, his goals and working differently to rethink what he can do, what’s possible and what is yet to be discovered. This is the approach that we take when we use developmental evaluation; we adapt, evolve and re-design based on the evidence while continually exploring ways to get to where we want to go.

A marathon, not a sprint, in a laboratory

The Biggest Loser is a sprint: all of the change work compressed into a short period of time. It’s a lab experiment, but, as we know, what happens in a laboratory doesn’t always translate directly into the world outside its walls because the constraints have changed. As the show’s attending physician, Dr. Robert Huizenga, told the New York Times:

“Unfortunately, many contestants are unable to find or afford adequate ongoing support with exercise doctors, psychologists, sleep specialists, and trainers — and that’s something we all need to work hard to change”

This quote illustrates a fallacy at the heart of many real-world change initiatives and exposes some of the problems we see with organizations that claim to have the knowledge to change the world. Have these organizations or funders gone back to see what they’ve done, or what’s left after all the initial funding and resources were pulled? This is not just a public, private or non-profit problem: it’s everywhere.

I have a colleague who spent much time working with someone who “was hired to clean up the messes that [large, internationally recognized social change & design firm] left behind” because the original, press-grabbing solution actually failed in the long run. And the failure wasn’t in the lack of success, but in the lack of learning, because that firm and the funders were off to another project. Without building local capacity for change and a sustained, long-term marathon mindset (vs. the sprint) we are setting ourselves up for failure. Without that mindset, lack of success may truly be a failure because there is no capacity to learn and act based on that learning. Otherwise, the learning is just part of an experimental approach consistent with an innovation laboratory. The latter is a positive; the former, not so much.

Part of the laboratory approach to change is that labs — real research labs — focus on radical, expansive, long-term and persistent incrementalism. Now that might sound dull and unsexy (which is why few seem to follow it in the social innovation lab space), but it’s how change — big change — happens. The key is not in thinking small, but thinking long-term by linking small changes together persistently. To illustrate, consider the weight gain conundrum as posed by obesity researcher Dr. Michael Rosenbaum in speaking to the Times:

“We eat about 900,000 to a million calories a year, and burn them all except those annoying 3,000 to 5,000 calories that result in an average annual weight gain of about one to two pounds,” he said. “These very small differences between intake and output average out to only about 10 to 20 calories per day — less than one Starburst candy — but the cumulative consequences over time can be devastating.”
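Rosenbaum’s arithmetic is easy to verify, and it illustrates the power of persistent small differences. A quick sketch (assuming the common, approximate rule of thumb of roughly 3,500 surplus calories per pound gained, which is my addition, not a figure from the article):

```python
# Back-of-the-envelope check of Rosenbaum's numbers.
# Assumption (mine, not the article's): the common rule of thumb of
# roughly 3500 surplus kilocalories per pound of body weight gained.
KCAL_PER_POUND = 3500

def annual_gain_lbs(daily_surplus_kcal: float) -> float:
    """Pounds gained per year from a small, persistent daily calorie surplus."""
    return daily_surplus_kcal * 365 / KCAL_PER_POUND

for surplus in (10, 20):
    print(f"{surplus} kcal/day -> ~{annual_gain_lbs(surplus):.1f} lb/year")
```

A 10 to 20 calorie daily difference lands right in the “about one to two pounds” per year he describes: tiny increments, compounded persistently, produce big change.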

Building a marathon laboratory

Marathoners are guided by a strange combination of urgency, persistence and patience. When you run 26 miles (42 km) there’s no sprinting if you want to finish the same day you started. The urgency is what pushes runners to give just a little more at specific times to improve their standing and win. Persistence is the repetition of a small number of key things (simple rules in a complex system) that keep the gains coming and the adaptations consistent. Patience is knowing that there are few radical changes that will positively impact the race, just a lot of modifications and hard work over time.

Real laboratories seek to learn a lot, simply and consistently and apply the lessons from one experiment to the next to extend knowledge, confirm findings, and explore new territory.

Marathons aren’t as fun to watch as the 100m sprint in competitive athletics, and lab work is far less sexy than the mythical ‘eureka’ moments of ‘discovery’ that get promoted, but that’s what changes the world. The key is to build organizations that support this. It means recognizing that learning comes from poor outcomes as well as positive ones. It encourages asking questions, being persistent, and not resting on laurels. It also means avoiding getting drawn into being ‘sexy’ and ‘newsworthy’ and instead focusing on the small but important things that make the news possible in the first place.

Doing that might not be as sweet as a Starburst candy, but it might avoid us having to eat it.


Decoding the change genome

[Image: Genome by Quinn Dombrowski]

Would we invest in something if we had little hard data to suggest what we could expect to gain from that investment? This is often the case with social programs, yet it’s a domain that has resisted the kind of data-driven approaches to investment we’ve seen in other sectors. One theory is that we can approach change the same way we code the genome, but: is that a good idea?

Jason Saul is a maverick in social impact work and dresses the part: he’s wearing a suit. That’s not typically the uniform of those working in the social sector railing against the system, but it’s one of the many things that gets people talking about what he and his colleagues at Mission Measurement are trying to do. That mission is clear: bring to social impact the same detailed, evidence-based analysis of the factors that contribute to real impact that we would apply to nearly any other area of investment.

The way to achieve this mission is to take the thinking behind the Music Genome Project, the algorithms that power the music service Pandora, and apply it to social impact. This is a big task, accomplished by coding the known literature on social impact from across a vast spectrum of research disciplines, methods, theories and modeling techniques. A short video from Mission Measurement nicely outlines the thinking behind this way of looking at evaluation, measurement, and social impact.

Saul presented his vision for measurement and evaluation to a rapt audience in Toronto at the MaRS Discovery District on April 11th as part of their Global Leaders series, en route to the Skoll World Forum; this is a synopsis of what came from that presentation and its implications for social impact measurement.

(Re) Producing change

Saul began his presentation by pointing to an uncomfortable truth in social impact: we spread money around with good intention and little insight into actual change. He claims (no reference provided) that 2,000 studies are published per day on behaviour change, yet there remains an absence of common metrics and measures within evaluation to detect change. One reason is that social scientists, program leaders, and community advocates resist standardization, claiming that context matters too much to allow aggregation.

Saul isn’t denying there is truth to the importance of context, but argues that it’s often used as an unreasonable barrier to leading evaluations with evidence. On this point, he’s right. For example, the data from psychology alone shows a poor track record of reproducibility, and thus offers much less to social change initiatives than is needed. As a professional evaluator and social scientist, I’m not often keen on being told how to do what I do (though sometimes I benefit from it). That can be a barrier, but it also points to a problem: if the data shows how poorly it replicates, then is following it a good idea in the first place?

Are we doing things righter than we think or wronger than we know?

To this end, Saul is advocating a meta-evaluative perspective: linking together the studies from across the field by breaking down its components into something akin to a genome. By looking at the combination of components (the thinking goes) like we do in genetics we can start to see certain expressions of particular behaviour and related outcomes. If we knew these things in advance, we could potentially invest our energy and funds into programs that were much more likely to succeed. We also could rapidly scale and replicate programs that are successful by understanding the features that contribute to their fundamental design for change.
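To make the idea concrete, here is a minimal sketch of what “genome-style” matching might look like: programs coded as vectors of components, then scored for similarity against known successes. The component names and scores below are entirely hypothetical; they are an illustration of the general technique, not Mission Measurement’s actual coding scheme.

```python
# Illustrative sketch only: programs represented as "genomes" of coded
# components, compared with cosine similarity to find close analogues.
# All component names and values here are invented for illustration.
from math import sqrt

def similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two component-coded programs."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A program with documented success vs. a newly proposed one.
proven = {"peer_mentoring": 1, "goal_setting": 1, "cash_incentive": 0, "follow_up": 1}
proposed = {"peer_mentoring": 1, "goal_setting": 1, "cash_incentive": 1, "follow_up": 0}
print(round(similarity(proven, proposed), 2))
```

In this framing, a funder could rank proposals by their resemblance to programs with evidence behind them; the hard part, as the rest of this post argues, is whether the components capture what actually matters.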

The epigenetic nature of change

Genetics is a complex thing. Even on matters where there is reasonably strong data connecting certain genetic traits to biological expression, there are few examples of genes as ‘destiny’ in the way they are too often portrayed. In other words, it almost always depends on a number of things. In recent years the concept of epigenetics has risen in prominence to explain how genes get expressed, and it has as much to do with what environmental conditions are present as with the gene combinations themselves. McGill scientist Moshe Szyf and his colleagues pioneered research into how genes are suppressed, expressed and transformed through engagement with the natural world, helping create the field of epigenetics. Where we once thought genes were prescriptions for certain outcomes, we now know that it’s not that simple.

By approaching change as a genome, there is a risk that the metaphor can lead to false conclusions about the complexity of change. This is not to dismiss the valid arguments being made around poor data standardization, sharing, and research replication, but it calls into question how far the genome model can go with respect to social programs without breaking down. For evaluators looking at social impact, the opportunity is that we can systematically look at the factors that consistently produce change if we have appropriate comparisons. (That is a big if.)

Saul outlined many of the challenges that beset evaluation of social impact research, including the ‘file-drawer effect’ and related publication bias, differences in measurement tools, and lack of (documented) fidelity of programs. Responding to Saul’s presentation, Cathy Taylor from the Ontario Non-Profit Network raised the challenge that comes when much of what is known about a program is not documented, but embodied in program staff and shared through exchanges. The matter of tacit knowledge and practice-based evidence is one that bedevils efforts to compare programs; many social programs are rich in context (people, places, things, interactions) that remains uncaptured in any systematic way, and it is that kind of data capture that is needed if we wish to understand the epigenetic nature of change.

Unlike Moshe Szyf and his fellow scientists working in labs, we can’t isolate, observe and track everything our participants do in the world in the service of – or support to – their programs, because they aren’t rats in a cage.

Systems thinking about change

One of the other criticisms of the model that Saul and his colleagues have developed is that it is rather reductionist in its expression. While there is ample consideration of contextual factors in his presentation of the model, the social impact genome is fundamentally based on reductionist approaches to understanding change. A reductionist approach to explaining social change has been derided by many working in social innovation and environmental science as outdated and inappropriate for understanding how change happens in complex social systems.

What is needed is synthesis and adaptation and a meta-model process, not a singular one.

Saul’s approach is not in opposition to this, but it does get a little foggy how the recombination of parts into wholes gets realized. This is where the practical implications of using the genome model start to break down. However, this isn’t a reason to give up on it, but an invitation to ask more questions and to start testing the model more fully. It’s also a call for systems scientists to get involved, just as they did with the Human Genome Project, which gave us great understanding of the influences acting on our genes and stressed the importance of the environment in how we create or design healthy systems for humans and the living world.

At present, the genomic approach to change is largely theoretical backed with ongoing development and experiments but little outcome data. There is great promise that bigger and better data, better coding, and a systemic approach to looking at social investment will lead to better outcomes, but there is little actual data on whether this approach works, for whom, and under what conditions. That is to come. In the meantime, we are left with questions and opportunities.

Among the most salient of the opportunities is to use this to inspire greater questions about the comparability and coordination of data. Evaluations as ‘one-off’ bespoke products are not efficient…unless they are the only thing that we have available. Wise, responsible evaluators know when to borrow or adapt from others and when to create something unique. Regardless of what design and tools we use however, this calls for evaluators to share what they learn and for programs to build the evaluative thinking and reflective capacity within their organizations.

The future of evaluation is going to include this kind of thinking and modeling. Evaluators, social change leaders, grant makers and the public alike ignore this at their peril, which includes losing opportunities to make evaluation and social impact development more accountable, more dynamic and impactful.

Photo credit (main): Genome by Quinn Dombrowski used under Creative Commons License via Flickr. Thanks for sharing Quinn!

About the author: Cameron Norman is the Principal of Cense Research + Design and assists organizations and networks in supporting learning and innovation in human services through design, program evaluation, behavioural science and system thinking. He is based in Toronto, Canada.

Benchmarking change

The quest for excellence within social programs relies on knowing what excellence means and how programs compare against others. Benchmarks can enable us to compare one program to another if we have quality comparators and an evaluation culture to generate them – something we currently lack. 

[Image: Redwood Benchmark by Hitchster]

In surveying, a benchmark is a fixed mark against which a levelling rod is held, providing consistency in measuring the elevation of a particular place over time. In other words, a benchmark is a fixed point of measurement that allows comparisons over time.

The term benchmark is often used in evaluation as a means of comparing programs or practices, often taking one well-understood, high-performing program as the ‘benchmark’ against which others are compared. Benchmarks in evaluation can be the standard against which other measures are compared.

In a 2010 article for the World Bank (PDF), evaluators Azevedo, Newman and Pungilupp articulate the value of benchmarking and provide examples of how it contributes to understanding both the absolute and relative performance of development programs. Writing about the need for benchmarking, the authors conclude:

In most benchmarking exercises, it is useful to consider not only the nature of the changes in the indicator of interest but also the level. Focusing only on the relative performance in the change can cause the researcher to be overly optimistic. A district, state or country may be advancing comparatively rapidly, but it may have very far to go. Focusing only on the relative performance on the level can cause the researcher to be overly pessimistic, as it may not be sufficiently sensitive to pick up recent changes in efforts to improve.
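A toy example (with invented district scores, purely for illustration) shows how the two lenses disagree: judged by change alone, the lagging district looks best; judged by level alone, the leading one does.

```python
# Invented data illustrating the World Bank authors' point: ranking by
# change alone vs. level alone gives opposite impressions of performance.
districts = {
    "A": {"level": 40, "change": 8},   # far behind, but improving fast
    "B": {"level": 85, "change": 1},   # near the top, improving slowly
}

best_by_change = max(districts, key=lambda d: districts[d]["change"])
best_by_level = max(districts, key=lambda d: districts[d]["level"])
print(best_by_change, best_by_level)
```

Only looking at both together avoids the over-optimism and over-pessimism the authors warn about.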

Compared to what?

One of the challenges with benchmarking exercises is finding a comparator. This is easier for programs operating in relatively simple program systems and structures, and less so for more complex ones. For example, in the service sector wait times are a common benchmark. In the province of Ontario in Canada, the government publishes regularly updated wait times for Emergency Room visits on a website. In healthcare, benchmarks are used in multiple ways. There is a target that serves as the benchmark, although, depending on the condition, this target might rest on a combination of aspiration and evidence, as well as what the health system believes is reasonable, what the public demands (or expects), and what the hospital desires.

Part of the problem with benchmarks set in this manner is that they are easy to manipulate and thus raise the question of whether they are true benchmarks in the first place or just goals.

If I want to set a personal benchmark for good dietary behaviour of eating three meals a day, I might find myself performing exceptionally well, as I’ve managed to do this nearly every day within the last three months. If the benchmark is consuming 2,790 calories, as is recommended for someone of my age, sex, activity level and fitness goals, that’s different. Add that, within that range of calories, the aim is to have about 50% come from carbohydrates, 30% from fat and 20% from protein, and we have a very different set of issues to consider when contemplating how performance relates to a standard.
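As a sketch of how those percentages translate into something trackable, here is the 2,790-calorie benchmark converted into grams per macronutrient using the standard 4/9/4 kilocalorie-per-gram values (the rounding to whole grams is my choice, not part of any official guideline):

```python
# Convert a daily calorie benchmark and macro split into gram targets,
# using the standard energy densities: carbohydrate 4, fat 9, protein 4
# kcal per gram. Rounding to whole grams is an illustrative choice.
KCAL_PER_GRAM = {"carbohydrate": 4, "fat": 9, "protein": 4}
SPLIT = {"carbohydrate": 0.50, "fat": 0.30, "protein": 0.20}

def macro_grams(total_kcal: float) -> dict:
    """Gram targets per macronutrient for a daily calorie benchmark."""
    return {m: round(total_kcal * SPLIT[m] / KCAL_PER_GRAM[m]) for m in SPLIT}

print(macro_grams(2790))
```

A single headline number (2,790 calories) thus unpacks into several interacting sub-benchmarks, which is exactly why meeting “the” benchmark is harder than it first appears.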

One reason we can benchmark diet targets is that the data set behind them is enormous. Tools like MyFitnessPal and others use benchmarks to provide personal data to their users for fitness tracking, benchmarks gleaned from tens of thousands of users and hundreds of scientific articles and reports on diet and exercise from the past 50 years. From this it’s possible to generate reasonably appropriate recommendations for a specific age group and sex.

These benchmarks are also possible because we have internationally standardized the term calorie. We have further internationally recognized, though slightly less precise, measures for what it means to be a certain age and sex. Activity level gets a little more fuzzy, but we still have benchmarks for it. As the activities that define fitness and diet goals get clustered together, we start to realize that the result is a jumble of highly precise and somewhat loosely defined benchmarks.

The bigger challenge comes when we don’t have a scientifically validated standard or even a clear sense of what is being compared and that is what we have with social innovation.

Creating an evaluation culture within social innovation

Social innovation has a variety of definitions; the common thread is that it’s about social programs aimed at addressing social problems using ideas, tools, policies and practices that differ from the status quo. Given the complexity of the environments in which many social programs operate, it’s safe to assume that social innovation** is happening all over the world because the contexts are so varied. The irony is that many in this sector are not learning from one another as much as they could, further complicating any initiative to build benchmarks for social programs.

Some groups, like the Social Innovation Exchange (SIX), are trying to change that. However, they and others like them face an uphill battle. Part of the reason is that social innovation has not established a culture of evaluation. There remains little in the way of common language, frameworks, or spaces to share and distribute knowledge about programs, both in description and evaluation, in a manner that is transparent and accessible to others.

Competition for funding, the desire to paint programs in a positive light, lack of expertise, insufficient resources for dissemination and translation, the absence of a dedicated space for sharing results, and distrust of or isolation from academia in certain sectors are some of the reasons that might contribute to this. For example, the Stanford Social Innovation Review is among the few venues dedicated to scholarship in social innovation aimed at a wide audience. It’s also a venue focused largely on international development and what I might call ‘big’ social innovation: the kind of work that attracts large philanthropic resources. There are lots of other types of social innovation, and they don’t all fit the model that SSIR promotes.

In my experience, many small organizations or initiatives struggle to fund evaluation efforts sufficiently, let alone the dissemination of the work once it’s finished. Without good quality evaluations and the means to share their results, whether or not they cast a program in a positive light, it’s difficult to build a culture where the sector can learn from itself. Without a culture of evaluation, we also don’t get the volume of data and access to comparators (appropriate comparators, not just the only things we can find) needed to develop true, useful benchmarks.

Culture’s feast on strategy

Building on the adage attributed to Peter Drucker that culture eats strategy for breakfast (or lunch), it might be time we use that feasting to generate some energy for change. If the strategy is to be more evidence-based, to learn more about what is happening in the social sector, and to compare across programs to aid that learning, there needs to be a culture shift.

This requires some acknowledgement that evaluation, a disciplined means of providing structured feedback and monitoring of programs, is not something adjunct to social innovation, but a key part of it. This is not just in the sense that evaluation provides some of the raw materials (data) to make informed choices that can shape strategy, but that it is as much a part of the raw material for social change as enthusiasm, creativity, focus, and dissatisfaction with the status quo on any particular condition.

We are seeing a culture of shared ownership and collective impact forming; now it’s time to take that further and shape a culture of evaluation that builds on it, so we can truly start sharing, building capacity and developing real benchmarks to show how well social innovation is performing. In doing so, we make social innovation more respectable, more transparent, more comparable and more impactful.

Only by knowing what we are doing and have done can we really sense just how far we can go.

** For this article, I’m using the term social innovation broadly, which might encompass many types of social service programs, government or policy initiatives, and social entrepreneurship ventures that might not always be considered social innovation.

Photo credit: Redwood Benchmark by Hitchster used under Creative Commons License from Flickr.


My troubled relationship with social media


Do you care about donuts? I did, once. I’m not so sure anymore.

I used to love donuts, was passionate about donuts, and spent the better part of my early career looking at the power of social media to transform our understanding of and engagement with donuts. Just this week, a paper I co-authored with colleagues was published that looked at how Twitter is being used to engage audiences on donuts, er, vaping, and its potential public health implications. I’m still into donuts, but the question is whether donuts are still serving the purpose they once did. It’s left me asking….

Is it still time to make the donuts?

Twitter turned 10 this past month. When it was founded, the idea of communicating short, 140-character chunks of content to the world by default (unlike Facebook, where you could restrict your posts to your ‘friends’ only) seemed absurd, particularly to me. Why would anyone want to use something that was the equivalent of a Facebook status update without anything else? (Keep in mind that link shorteners were not yet in wide use, and the embedded pictures and lists we have now were either not invented or highly cumbersome.)

However, social media is a ‘participation sport’, as I like to say, and by engaging with it I soon realized Twitter’s enormous potential. For the first time I could find people who had the same quirky collection of interests as I did (e.g., systems science, design, innovation, Star Wars, coffee, evaluation, soccer, politics, stationery and fine writing instruments, not necessarily in that order, but in that combination) and find answers to questions I didn’t think to ask from people I didn’t know existed.

It was a wonder, and I learned more about the cutting edge of research there than I ever did using traditional databases, conferences or books, much to the shock, horror and disbelief of my professional colleagues. I’ve often been considered an early adopter and this was no exception. I did research, consultation and training in this area and expanded my repertoire to Instagram, Pinterest, YouTube, LinkedIn and pretty much everything I could, including some platforms that no longer exist.

I developed relationships with people I’d never (and still have never) met from around the world, whose camaraderie and collegiality I valued as much as or more than that of people I’d known for years in the flesh. Those were heady times.

But as with donuts, it’s possible to have too much of a good thing. And also as with donuts, where I once loved and enjoyed them regularly, consuming them now starts to not sit so well, and that’s maybe for the better.

I’m left questioning whether it’s still time to make the donuts.

The river I stand in

This river I step in is not the river I stand in – Heraclitus

As with donuts, the experience of social media, the context of its use, has changed. As I age, eat better, exercise more wisely and am more mindful of how I feel and what I do, donuts have lost their appeal. They probably taste the same, but the experience has changed, not because the donuts are different, but because my dietary and lifestyle context is.

The same is true for social media.

I have never been a techno advocate or pessimist; rather, I’ve been a pragmatist. Social media does things that traditional media does not. It helps individuals and organizations communicate and, depending on how it’s used, engage an audience interactively in ways that ‘old media’ like billboards, radio, TV and pamphlets do not. But we still have the old media; we just recognize that it’s good at particular things and not others.

But the river, the moving and transforming media landscape, is much faster, bigger and bolder than it was before. Take the birthday girl or boy, Twitter: it has grown into a ubiquitous tool for journalists, celebrities and scholars, but saw a small decline in its overall use after a year of flatlined growth.

[Chart: Twitter monthly active users, via TechCrunch]

As for Facebook, it’s faring OK. While it is still growing, I’ve struggled to find anyone who speaks in glowing terms about their experience with the service, particularly anyone who wishes to change their privacy settings or stem the flow of ads. Over on Instagram, my feed has seen the rise of ‘brands’ following me. No longer is it the names of real people (even if it’s a nickname); it’s usually some variant of ‘getmorefollowers’ or brands or something like that. All this as I see more ads and less life.

Information overload and filter failure

Speaking to an audience in 2008, author and media scholar Clay Shirky addressed the problem of ‘information overload’, a term applied to the exponential rise in the information people were exposed to thanks to the Internet and World Wide Web. At the time, his argument was that the problem was less an overload of information than a failure of our filter systems to make sense of what was most useful.

But that was 2008. That was before the mobile Internet really took off. That was when Twitter was 2 and Facebook just a couple of years older. In the third quarter of 2008, Facebook had around 100 million users; now it has a population of more than 1.6 billion. The river has got bigger and more full. That might be nice if you’re into white-water rafting or building large hydroelectric dams, but it might be less enjoyable if you’re into fly fishing. I can’t imagine A River Runs Through It with a water feature akin to Niagara Falls.

As journalist Douglas Rushkoff has pointed out in many different fora, the Internet is changing the way we think.  Indeed, ‘smarter technologies’ are changing the way we live.

This all brings up a dilemma: what to do? As one who has studied and advised organizations on how to develop and implement social media strategies, I would be a hypocrite to suggest we abandon them. Engaging with an audience is better than not doing so. Humanizing communications, something social media can do far better than speaking ‘at’ people, is better than not. Being timely and relevant is also better than not. Yet the degree to which social media can answer these problems is masked by the volume of content out there and the manner in which people interact with it.

Walk through any major urban area, take public transit, or watch people in line for pretty much anything and you’ll find a substantial portion of humans looking at their devices. Even couples or friends at restaurants are left to concoct games to get people paying attention to each other, not their devices. We are living in the attention economy, and what is increasingly valuable is focus, not necessarily more information, and that requires filtration systems that are not overwhelmed by the volume of content.

Emotional pollution and the antisocial media

I recently wrote about how ‘the stream’ of social media has changed the way social activism and organizing are done. While social media was once an invaluable tool for organizing and communicating ideas, it has become a far more muddled set of resources in recent years. To be sure, movements like Black Lives Matter and others that promote more democratic, active social engagement on issues of justice and human dignity are fuelled and supported by social media. This is a fantastic thing for certain issues, but the question remains: for how long?

Not so long ago, my Facebook feed was filled with the flotsam, jetsam and substance of everyday life: pictures of children or vacations, an update on someone’s new job or their health, or perhaps a witty observation on human life. The substance of the content was the poster, the person. Now, it is increasingly about other people and ‘things’. It’s about injustices to others and the prejudices that come with them, it’s about politics (regardless of how informed people are), it’s about solidarity with some groups (at the willful ignorance of others) and about rallying people to some cause or another.

While none of these is problematic in itself – and in some measure they are actually quite healthy – they are almost all I see. On Twitter, people are sharing other things, but rarely their own thoughts. On Facebook, it’s about sharing what others have written and the poster’s emotional reaction to it.

Increasingly, it’s about social validation. Believe my idea. “Like” this post if you’re really my friend. Share if you’re with me and not with them. And so on.

What I am left with, increasingly, is a lost sense of who the ‘me’ and the ‘them’ are in my social media stream. It feels as though I am wading into a stream of emotional pollution rather than human interaction. When my filters are full, this gets harder to do. I’m not sure I want to be less sensitized to the world, but I also don’t want my interactions with others to be solely about reacting to their rage at the world or some referendum on their worldview. It seems that social media is becoming anti-social media.

In complex systems terms, we might see this as a series of weak – but growing stronger – signals of something else. Whether that’s collective outrage at the injustices of the world, the need for greater support, or the growing evidence that social media use can be correlated with a sense of loneliness, I’m not sure.

But something is going on and I’m now beginning to wonder about all those donuts we’ve created.

Photo credit: Chris Lott Social Media Explained (with Donuts) used under Creative Commons License via Flickr

About the author: Cameron Norman is the Principal of Cense Research + Design and works at assisting organizations and networks in creative learning through design, program evaluation, behavioural science and system thinking.

 

E-Valuing Design and Innovation


Design and innovation are often regarded as good things (when done well), even if a pause might find little to explain what those good things are. Without a sense of what design produces, what innovation looks like in practice, and an understanding of the journey to the destination, are we delivering false praise and hope, and failing to deliver real, sustainable change?

What is the value of design?

If we are claiming to produce new and valued things (innovation) then we need to be able to show what is new, how (and whether) it’s valued (and by whom), and potentially what prompted that valuation in the first place. If we acknowledge that design is the process of consciously, intentionally creating those valued things — the discipline of innovation — then understanding its value is paramount.

Given the prominence of design and innovation in the business and social sector landscape these days, one might guess that we have a pretty good sense of what the value of design is, given how many are interested in the topic. If you did guess that, you’d have guessed incorrectly.

‘Valuating’ design, evaluating innovation

On the topic of program design, the current president of the American Evaluation Association, John Gargani, writes:

Program design is both a verb and a noun.

It is the process that organizations use to develop a program.  Ideally, the process is collaborative, iterative, and tentative—stakeholders work together to repeat, review, and refine a program until they believe it will consistently achieve its purpose.

A program design is also the plan of action that results from that process.  Ideally, the plan is developed to the point that others can implement the program in the same way and consistently achieve its purpose.

One of the challenges with many social programs is that it isn’t clear what the purpose of the program is in the first place. Or rather, the purpose and the activities might not be well aligned. One example is the rise of ‘kindness meters’, the repurposing of old coin parking meters to collect money for certain causes. I love the idea of a pro-social means of getting small change out of my pocket and having it go to a good cause, yet some have taken the concept further and suggested it could be a way to redirect money to the homeless and thus reduce the number of panhandlers on the street. A recent article in Maclean’s magazine profiled this strategy, including its critics.

The biggest criticism of all is that there is a very weak theory of change to suggest that meters and their funds will get people out of homelessness. Further, there is much we don’t know about this strategy: 1) how was it developed? 2) was it prototyped, and where? 3) what iterations were performed – and is this just the first? 4) whose needs was it designed to address? and 5) what needs to happen next with this design? This is an innovative idea to be sure, but the question is whether it’s a beneficial one or not.

We don’t know. What evaluation can do is provide answers and help ensure that an innovative idea like this is supported in its development, so we can determine whether it ought to stay, go, or be transformed, and what we can learn from the entire process. Design without evaluation produces products; design with evaluation produces change.


A bigger perspective on value creation

The process of placing or determining the value* of a program involves looking at three things:

1. The plan (the program design);

2. The implementation of that plan (the realization of the design on paper, in prototype form and in the world);

3. The products resulting from the implementation of the plan (the lessons learned throughout the process; the products generated from the implementation of the plan; and the impact of the plan on matters of concern, both intended and otherwise).

Prominent areas of design such as industrial, interior, fashion, or software design are principally focused on an end product. Most people aren’t concerned about the various lamps their interior designer didn’t choose in planning their new living space if they are satisfied with the one they did.

A look at the process of design – the problem finding, framing and solving that comprise the heart of design practice – finds that the end product is actually the last of a long line of sub-products, and that, if designers are paying attention and reflecting on their work, they are learning a great deal along the way. That learning and those sub-products matter greatly for social programs innovating and operating in human systems. This may be the real impact of the programs themselves, not the products.

One reason this is important is that many of our program designs don’t actually work as expected, at least not at first. Indeed, a look at innovation in general finds that about 70% of attempts at institutional-level innovation fail to produce the desired outcome. So we ought to expect that things won’t work the first time. Yet many funders and leaders place extraordinary burdens on project teams to get it right the first time. Without an evaluative framework to operate from, and the means to make sense of the data an evaluation produces, these programs will not only fail to achieve desired outcomes, they will fail to learn, losing the very essence of what it means to (socially) innovate. It is in these lessons, and their integration into programs, that much of a program’s value is seen.

Designing opportunities to learn more

Design has a glorious track record of accountability for its products in terms of satisfying its clients’ desires, but not its process. Some might think that’s a good thing, but in the area of innovation that can be problematic, particularly where there is a need to draw on failure — unsuccessful designs — as part of the process.

In truly sustainable innovation, design and evaluation are intertwined. Creative development of a product or service requires evaluation to determine whether that product or service does what it claims to do. This is of particular importance in contexts where the product or service may not have a clear objective, or may have multiple possible objectives. Many social programs are true experiments: trying something to see what might happen rather than doing nothing. The ‘kindness meters’ might be such a program.

Further, there is an ethical obligation to look at the outcomes of a program lest it create more problems than it solves or simply exacerbate existing ones.

Evaluation without design can result in feedback that isn’t appropriate, isn’t integrated into future developments and iterations, or is decontextualized. Evaluation also ensures that the work that goes into a design is captured and understood in context – irrespective of whether the resulting product was a true ‘innovation’. Another reason is that, particularly in social settings, the resulting product or service is not an ‘either/or’ proposition. There may be many elements of a ‘failed design’ that can be useful and incorporated into the final successful product; if we view the result as a dichotomous ‘success’ or ‘failure’, we risk losing much useful knowledge.

Further, great discovery is predicated on incremental shifts in thinking, developed in a non-linear fashion. This means it is fundamentally problematic to ascribe a value of ‘success’ or ‘failure’ to something from the outset. In social settings, where ideas are integrated, interpreted and reworked the moment they are introduced, the true impact of an innovation may take a longer view to determine – and even then, only partly.

Much of this depends on what the purpose of innovation is. Is it the journey or is it the destination? In social innovation, it is fundamentally both. Indeed, it is also predicated on a level of praxis – knowing and doing – that shapes the ‘success’ of a social innovation.

When design and evaluation are excluded from each other, both are lesser for it. This year’s American Evaluation Association conference is focused boldly on the matter of design. While much of the conference will concentrate on program design, the emphasis is still on the relationship between what we create and the way we assess the value of that creation. The conference will provide perhaps the largest forum yet for discussing the value of evaluation for design, and that, in itself, provides much value.

*Evaluation is about determining the value, merit and worth of a program. I’ve only focused on the value aspects of this triad, although each aspect deserves consideration when assessing design.

Image credit: author

The hidden cost of learning & innovation


The costs of books, materials, tuition, or conference fees often distort our perception of how much learning costs, creating larger distortions in how we perceive knowledge to benefit us. By looking at the price we pay for integrating knowledge and experience, we might re-evaluate what we need, what we have and what we pay attention to in our quest for learning and innovation.

A quote paraphrased from, and attributed to, the German philosopher Arthur Schopenhauer points to one of the fundamental problems with books:

Buying books would be a good thing if one could also buy the time to read them in: but as a rule the purchase of books is mistaken for the appropriation of their contents.

Schopenhauer passed away in 1860, when the book was the dominant medium of codified knowledge and the availability of books was limited. This was before radio, television, the Internet and the confluence of it all in today’s modern mediascape, from Amazon to the iPhone and beyond.

Schopenhauer exposes the fallacy that equates access to information with knowledge. This fallacy underpins the major challenge facing our learning culture today: quantity of information vs. quality of integration.

Learning time

Consider something like a conference or seminar. How often have you attended a talk or workshop, been moved by what you heard and saw, taken furious notes, and walked out of the room vowing to make a big change based on what you just experienced? And then what happened? My guess is that the world outside that workshop or conference looked a lot different from how it appeared inside it. You had emails piled up, phone messages to return, colleagues to convince, resources to marshal, patterns to break and so on.

Among the simple reasons is that we do not protect the time and resources required to actually learn and to integrate that knowledge into what we do. As a result, we mistakenly treat the volume of ‘things’ we expose ourselves to as a proxy for learning outcomes.

One solution is to embrace what consultant, writer and blogger Sarah Van Bargen calls “intentional ignorance”. This approach involves turning away from the ongoing stream of data and accepting that there are things we won’t know and will simply miss. Van Bargen isn’t calling for a complete shutting of the door, but rather something akin to an information sabbatical, or what some might call a digital sabbath. Sabbath and sabbatical share the Latin root sabbatum, which means “to rest”.

Rebecca Rosen, who writes on work and business for The Atlantic, argues we don’t need a digital sabbath; we need more time. Rosen’s piece points to a number of trends suggesting that the way we now work has us producing more, more often, throughout the day. The problem is not about more, it’s about less. It’s also about different.

Time, by design

One of the challenges is our relationship to time in the first place and the forward orientation we bring to our work. We humans are designed to look forward, so it is no surprise that we engineer our lives and organizations to do the same. Sensemaking is a process that orients our gaze to the future by looking at both the past and the present, and by taking time to look at what we have before we consider what else we need. It helps us reduce, or at least manage, complex information, putting data into proper context to enable actionable understanding of what it is telling us. This can’t be done by automation.

It takes time.

It means….

….setting aside time to look at the data and discuss it with those who are affected by it, who helped generate it, and are close to the action;

….taking time to gather the right kind of information, that is context-rich, measures things that have meaning and does so with appropriate scope and precision;

….understanding your enterprise’s purpose(s) and designing programs to meet those purposes, perhaps dynamically through things like developmental evaluation models and developmental design;

….creating organizational incentives and protections for people to integrate what they know into their jobs and roles, and creating organizations adaptive enough to absorb, integrate and transform based on this learning – becoming true learning organizations.

By changing the practices within an organization we can start shifting the way we learn and increase the likelihood of learning taking place.

Buying time

Imagine buying both the book and the time to read the book and think about it. Imagine sending people on courses and then giving them the tools and opportunity to try the lessons (the good ones at least) in practice within the context of the organization. If learning is really a priority, what kind of time is given to people to share what they know, listen to others, and collectively make sense of what it means and how it influences strategy?

What we might find is that we do less. We buy less. We attend less. We subscribe to less. Yet, we absorb more and share more and do more as a result.

The cost of learning then shifts — maybe even to less than we spend now — but what it means is that we factor in time not just product in our learning and knowledge production activities.

This can happen and it happens through design.


Photo credit by Tim Sackton used under Creative Commons License via Flickr.

Abraham Lincoln quote image from TheQuotepedia.
