Results for: standing still

behaviour change, complexity, education & learning, psychology, public health

Standing Still

One of my favourite quotes is from Giuseppe Tomasi di Lampedusa's posthumously published novel The Leopard, the story of an aristocratic family and its fall from the ranks of society. In the book there is a marvellous line that reflects one of the most fundamental challenges of system dynamics: "If we want things to stay as they are, things will have to change."

I'm Forever Standing Still… Or Am I?

At its core, the message is that we cannot avoid change by standing still; only through change can we hope to achieve consistency, and even that is unlikely. We lose our position unless we move along with everyone else, even if in the process of moving it appears as if we are standing still. (Just think of cars on a highway: two cars driving side by side at the same speed appear to each other to be hardly moving at all, when in reality both may be travelling very fast.)

We are rarely aware of the speed at which we are travelling, that is, the rate of change taking place around us and within us. The human body renews itself many times over throughout the lifespan; our cells are continually replaced, yet our looks appear much the same from day to day. That is, until someone uncovers a picture of us as a child, a youth, a twenty-, thirty-, any-something far enough removed from our current state that we realize the profound change that has taken place.

Systems are enormously difficult to change for that very reason. There is not only constant movement, but lots of it, and the impact of each component on everything else is different, dynamic, and inconsistent. I am currently helping graduate students in public health learn about systems and, while the teaching is fun and the students are interested, communicating the language of systems in a manner that is easy to understand is difficult. Indeed, there is little reason why teaching complexity science should be simple, given that one of the principles of systems science is that complex problems require complex solutions.

But thankfully, one of the other features of complex systems is the presence of paradox. And one of the tools I've found works wonderfully is mindfulness-based reflection. Mindfulness is the process of 'standing still' by calming the mind and attending to the signals around us without trying to influence them. Remarkably, by keeping still and just paying attention to what is around us without ascribing feelings, thoughts, or attitudes to it, we can learn a great deal about what is going on around us. This strategy has been highly effective in addressing complex health conditions like chronic pain and addiction, and in training those who work in these areas.

The question I have is this: How do we get our social institutions and communities to do the equivalent of paying attention to their breath and relaxing their minds so they can see the systems they are a part of and initiate healthy change?

That is the challenge I am putting to my students and myself and to you too, dear reader.

evaluation, innovation

Understanding Value in Evaluation & Innovation

Value is literally at the root of the word evaluation, yet it is scarcely mentioned in conversations about innovation and evaluation. It's time to consider what value really means for innovation and how evaluation provides answers.

Design can be thought of as the discipline — the theory, science, and practice — of innovation. Thus, understanding the value of design is partly about understanding the valuation of innovation. At the root of evaluation is the concept of value. One of the most widely used definitions of evaluation (pdf) is that it is about merit, worth, and significance, with worth being a stand-in for value.

The connection between worth and value in design was discussed in a recent article by Jon Kolko of Modernist Studio. He starts from the premise that many designers conceive of value as the price people will pay for something, and points to the dominant orthodoxy in SaaS applications, "where customers can choose between a Good, Better, and Best pricing model. The archetypical columns with checkboxes shows that as you increase spending, you 'get more stuff.'"

Kolko goes on to take a systems perspective on the issue, noting that much of the value created through design is not piecemeal, but aggregated into the experience of whole products and services and not easily divisible into component parts. Value as a factor of cost or price breaks down when we treat our communities, customers, and clients as mere commodities that can be bought and sold.

Kolko ends his article with this comment on design value:

Design value is a new idea, and we’re still learning what it means. It’s all of these things described here: it’s cost, features, functions, problem solving, and self-expression. Without a framework for creating value in the context of these parameters, we’re shooting in the dark. It’s time for a multi-faceted strategy of strategy: a way to understand value from a multitude of perspectives, and to offer products and services that support emotions, not just utility, across the value chain.

Talking value

It's strange that the matter of value is so under-discussed in design given that creating value is one of its central tenets. What's equally perplexing is how little value is discussed as part of the process of creating things or in their final designed form. And since design is really the discipline of innovation, which is the intentional creation of value using something new, evaluation is an important concept in understanding design value.

One of the big questions professional designers wrestle with at the start of any engagement with a client is: “What are you hiring [your product, service, or experience] to do?”

What evaluators ask is: “Did your [product, service, or experience (PSE)] do what you hired it to do?”

“To what extent did your PSE do what you hired it to do?”

“Did your PSE operate as it was expected to?”

“What else did your PSE do that was unexpected?”

“What lessons can we learn from your PSE development that can inform other initiatives and build your capacity for innovation as an organization?”

In short, evaluation is about asking: “What value does your PSE provide and for whom and under what context?”

Value creation, redefined

Without asking the questions above, how do we know value was created at all? Without evaluation, there is no means of claiming that value was generated with a PSE, whether expectations were met, or whether what was designed was implemented at all.

By asking these questions about value and how we know more about it, innovators are better positioned to design PSEs that are value-generating for their users, customers, clients, and communities as well as their organizations, shareholders, funders, and leaders. This redefinition of value as an active concept gives us the opportunity to see value in new places and not waste it.

Image Credit: Value Unused = Waste by Kevin Krejci adapted under Creative Commons 2.0 License via Flickr

Note: If you're looking to bring in evaluation expertise to better your innovation capacity, contact us at Cense. That's what we do.

complexity, research, systems science, systems thinking

Mindful Systems

The benefits of standing still and looking around at the systems around us never cease to reveal themselves.

Mindfulness is most often associated with individuals. It is a pillar of Buddhist practice and is increasingly being used in clinical settings to help people deal with stress and pain.

Mindfulness sometimes gets unfairly linked to individuals, groups, and movements that, for lack of a better term, could be described as 'flaky'. Its association with many spiritual movements can also be problematic for those who are looking for something more aligned with science and less about religion or spirituality. Yet the spiritual and scientific benefits of mindfulness need not be incompatible. Google, while innovative and often unusual in the way it runs its business, is certainly not flaky. As a company, it understands the power of mindfulness and has hosted a few talks on its application to everyday life and its neuroscientific foundations and benefits. For companies like Google, promoting mindfulness yields health benefits not only for individual staff members but also for the bottom line, because being mindful as a company allows it to see trends and the emergence of new patterns in how people use the Internet and search for information. Indeed, one could say that Google, with its search engine and productivity tools, could be the ultimate mindfulness company, helping us become aware of the world around us (on the Internet, anyway).

We are often profoundly ignorant of the systems that we are a part of, and while the idea of having us all sit and meditate might sound appealing (particularly to those of us who could use a moment of peace!), it is not a reasonable proposition. One of the things that meditation does is enable the meditator to become aware of themselves and their surroundings, often through a type of mental visualization. Visualization allows the observer to see the relationships between entities in a system, their proximity, and the extended relationships beyond themselves. In systems research and evaluation, this might be done through the application of social network analysis or a system dynamics model. Tools like these, which enhance our ability to visualize systems, come close to being mindful systems thinking tools.
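
To make this concrete, here is a minimal sketch of how a small system's relationships might be mapped and visualized with Python's networkx library. The actors and ties are hypothetical, invented for illustration; they are not data from the studies mentioned below.

```python
# A minimal sketch of visualizing a small system's relationships
# with networkx. The actors and ties below are hypothetical.
import matplotlib.pyplot as plt
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("clinic", "public health unit"),
    ("clinic", "community group"),
    ("public health unit", "funder"),
    ("community group", "funder"),
    ("community group", "residents"),
])

# Degree centrality gives one simple view of who sits at the
# centre of the web of relationships.
print(nx.degree_centrality(G))

# Drawing the graph makes the invisible connections visible.
nx.draw(G, with_labels=True, node_color="lightblue")
plt.show()
```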

My colleague Tim Huerta and I have been developing methods and strategies to incorporate social network analysis into organizational decision making and published a paper in 2006 on how this could be done to support the development of communities of practice in tobacco control.  I’m also working on creating a system dynamics model of the relationships within the gambling system in Ontario with David Korn and Jennifer Reynolds.

By creating visuals of what the system looks like, consciousness raising takes place and invisible connections become visible. By making things visible, the impact, reach, scope, and potential opportunities for collaboration and action become apparent. And with awareness comes insight into the connections between actions and consequences (past, current, and potential), which allows us to strategize ways to minimize or amplify such effects as necessary.

strategic foresight

Foresight, Growth and the J-shaped Curve

The business of futures is to see what possibilities lie ahead in order to better anticipate how to meet them when or if they become reality. When the storyline follows a linear path, this is a lot easier; when it follows a more complex path, reality can bite.

Foresight models look at trends and curves in trajectories of things including those that might disrupt the status quo. Using tools and frameworks (PDF), foresight professionals and futurists seek to better understand the contributors (drivers) and patterns associated with decisions, activities, and circumstances to anticipate what might come and better prepare for it (strategic foresight). Foresight is being used in fields ranging from natural resource management to energy policy to healthcare planning.

A rational look at foresight finds many reasons for an organization to embrace it. Who wouldn't want a better sense of what is coming and to prepare for it? The problem foresight poses is that it can lead people to look for the right things in the wrong way, and that has everything to do with our human tendency to see narrative arcs in the stories we tell ourselves instead of seeing exponential or J-shaped curves.

Both of these models for data have enormous consequences for how we understand some of our greatest challenges as humans and as organizations as we shall see.

Exponential complication

A linear distribution or data structure is what humans see most easily. It's the maintenance of a status quo, gradual change, or the progressive rise and fall of something over time. It's what we see in most trends and patterns. This perspective has the tendency to view much of the system in which change takes place as relatively stable.

Stability is largely a matter of perspective. Everything is in motion to some degree; it's the rate of change that we notice. In linear systems, that rate of change is relatively consistent or at a pace we understand, while exponential change (or growth curves) is more challenging to see — and potentially more dangerous, as the video below illustrates.

Al Bartlett's lecture and other notes provide just one example of exponential growth, how our perception is challenged by these kinds of data structures in the world, and the systemic effects they can bring.

Without an understanding of the growth dynamics associated with a particular phenomenon, we are at risk of grossly under-estimating the potential implications of what might happen. In these cases we need fixes, but not just any fixes, as we shall see.
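
A small worked example shows how far apart a linear reading and a compounding reality drift. The numbers are toy values; the 'rule of 70' popularized in Bartlett's lecture says a quantity growing at 7% per year doubles roughly every 10 years (70 / 7 = 10).

```python
# Toy comparison of a linear extrapolation vs. exponential growth
# at 7% per period, starting from an arbitrary quantity of 100.
rate = 0.07
start = 100.0

for year in (10, 20, 30, 40):
    linear = start + start * rate * year        # what a linear reading predicts
    exponential = start * (1 + rate) ** year    # what compounding actually yields
    print(f"year {year}: linear = {linear:.0f}, exponential = {exponential:.0f}")
```

By year 40 the linear reading predicts 380 while compounding yields nearly 1,500, and that gap is precisely what our perception tends to miss.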

Deceptive Fixes

Another type of curve that can distort foresight models is the 'J-shaped curve'. This curve describes situations where a long-term trend is briefly countered in the short term. An example is the case of alcohol consumption and health. There is evidence that modest alcohol consumption (e.g., a glass of wine or beer) can have a beneficial effect on a person's health (at the population level; individual results might vary significantly). However, beyond a certain amount, which varies by person, alcohol becomes toxic and can substantially contribute to a variety of health problems, injuries, and premature death. The J-shaped curve forms from data showing a mild reduction in health risks associated with modest alcohol intake, as illustrated below.

For alcohol use, a single drink can lower your mortality risk before the risk starts rising again. Contrast this with cigarette use, where a linear pattern of risk is seen: the more you smoke, at any level, the higher your risk. Both patterns have linearity to them, but one is far more deceptive in its short- and long-term implications.
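
A toy calculation makes the difference visible. These relative-risk numbers are invented for illustration and are not epidemiological estimates; they simply contrast a J-shaped curve, which dips below baseline at low doses before climbing, with a linear curve that rises from the first unit of exposure.

```python
# Invented relative-risk (RR) curves for illustration only.
# Baseline RR is 1.0 (no exposure).
for d in [x * 0.5 for x in range(11)]:          # hypothetical doses 0.0 to 5.0
    j_shaped = 1.0 - 0.15 * d + 0.08 * d ** 2   # dips below 1.0, then climbs
    linear = 1.0 + 0.20 * d                     # rises from the first unit
    print(f"dose {d:3.1f}: J-shaped RR = {j_shaped:4.2f}, linear RR = {linear:4.2f}")
```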

Where this can fool foresight researchers is when a trend showing a certain set of properties is assumed to be on a trajectory like the left-hand side of the graph when it is really more like the right. Depending on the time horizon you use to inform your decisions based on these data, the implications could be markedly different and potentially catastrophic.

Our fixes or strategies to anticipate change, if based on the wrong model, could actually serve to amplify the very problem we sought to solve. A possible example of this is the move to ban single-use plastic bags. While the evidence of the environmental impact of plastic is considerable, a shift away from plastic bags has its own negative implications, including the increased manufacturing of reusable tote bags (with the resulting waste and potential increase in consumerism) and the increased use of forest products to support paper bag production.

The loss of plastic shopping bags, which are often re-used as garbage liners (despite being called single-use), is now resulting in more purchases of plastic-intensive garbage bags. If the systemic implications are not considered in the design of such policies, these well-meaning fixes can profoundly fail. What is needed is a change in the way we consume, store, and buy goods, not just carry them home.

Systems change changes systems

The idea that you could be surrounded by literally thousands of people, connected to most of the planet through a device that fits in the palm of your hand, and still experience profound loneliness would once have been considered the most profound oxymoron by anyone born before 1980.

Yet, here we are in a state where the very fixes for connection are failing us. The benefits of social media, social connection, artificial intelligence, and new production methods (e.g., 3D printing) are now starting to show some negative effects on our social and economic systems. Are these linear progressions of technological advancement that are simply generating a few of the inevitable bumps along the way? Are they exponential trends about to explode and profoundly transform the way we live? Or are they J-shaped trends that once provided us the benefits of finding connection in the modern world, only to entrench our social systems into being online, not off?

We are creating systems that are changing themselves and having profound effects on the fundamentals around us. The retail conveniences created by online shopping change the relationship we have with our local merchants, and that changes their viability. Handheld computers like the iPhone are engineered to hold our attention; what happens when we stop paying attention to the world around us?

These are systems questions, and ones that foresight — when applied well — holds some promise in allowing us to anticipate and maybe deal with before it's too late.

We can’t see these things coming if we hold models of the future that are based on a linear framing of what is happening now and what is to come. We also can’t adapt if we assume that even non-linear change will take place and persist within the same system it started in. Systems change changes systems.

Data models are fundamental to foresight, and understanding them is the key to knowing whether you're ahead of the curve, behind the curve, or sitting in the middle of the letter J.

Photo Credits: Ricardo Gomez Angel on Unsplash and Cameron Norman

design thinking

Leadership & Design Thinking: Missed Opportunities

A recent article titled ‘The Right Way to Lead Design Thinking’ gets a lot of things wrong not because of what it says, but because of the way it says it. If we are to see better outcomes from what we create we need to begin with talking about design and design thinking differently.

I cringed when I first saw it in my LinkedIn feed. There it was: The Right Way to Lead Design Thinking. I tend to bristle when I see broad-based claims about the 'right' or 'wrong' way to do something, particularly with something as scientifically bereft as design thinking. Like others, I've called out much of what is discussed as design thinking for what I see as simple bullshit.

To my (pleasant) surprise, this article was based on data, not just opinion, which already puts it in a different class than most other articles on design thinking, but that doesn't earn it a free pass. In some fairness to the authors, the title may not be theirs (it could be an editor's choice), but what comes afterward still bears some discussion, less about what they say than about how they say it and what they don't say. This post reflects some thoughts on this work.

How we talk about what we do shapes what we know and the questions we ask, and design thinking is at a stage where we need to be asking bigger and better questions of it.

Right and Wrong

The most glaring critique I have of the article is the aforementioned title, for several reasons. Firstly, the term 'right' assumes that we know above all how to do something. We could claim this if we had a body of work that systematically evaluated the outcomes associated with leadership and design thinking, or research examining the process of doing design thinking. The issue is: we don't.

There isn't a definition of design thinking that can be held up for scrutiny to test or evaluate, so how can we claim the 'right' way to do it? The authors link to a 2008 HBR article by Tim Brown that outlines design thinking as their reference source; however, that article provides scant concrete direction for measurement or evaluation. Rather, it emphasizes thinking and personality approaches to addressing design problems and a three-factor process model of how it is done in practice. These might be useful as tools, but they are not something from which you can derive indicators (quantitative or qualitative) to inform a comparison.

The other citation is a 2015 HBR article from Jon Kolko. Kolko is one of design's most prolific scholars and one of the few who actively and critically writes about the thinking, doing, craft, teaching, and impact of design on the people, places, and systems around us. While his HBR article is useful in painting the complexity that besets the challenge of designers doing 'design thinking', it provides little to go on in developing the kind of comparative metrics that can inform a claim that something is 'right' or 'wrong'. It's not fit for that purpose (and I suspect was never designed for it in the first place).

Both of these reference sources are useful for those looking to understand a little about what design thinking might be and how it could be used, and few are more qualified to speak on such things than Tim Brown and Jon Kolko. But if we are to start taking design thinking seriously, we need to go beyond describing what it is and show what it does (and doesn't do) and under what conditions. This is what serves as the foundation for a real science of practice.

The authors do provide a description of design thinking later in the article and anchor that description in the language of empathy, something that has its own problems.

Designers seek a deep understanding of users’ conditions, situations, and needs by endeavoring to see the world through their eyes and capture the essence of their experiences. The focus is on achieving connection, even intimacy, with users.

False Empathy?

Connecting to ideas and people

It's fair to say that Apple and the Ford Motor Company have created a lot of products that people love (and hate) and rely on every day. They also weren't always what people asked for. Many of those products were not designed for where people were, but they did shape where people went afterward. Empathizing with their market might not have produced breakthroughs like the iPod or the automobile.

Empathy is a poor end in itself, and the language used in this article treats it as one. Seeing the world through others' eyes helps you gain perspective, maybe intimacy, but that's all it does. Unless you are willing to take this into a systems perspective and recognize that many of our experiences are shared, collective, connected, and also disconnected, you get only one small part of the story. There is a risk that we over-emphasize the role that empathy plays in design. We can still achieve remarkable outcomes that create enormous benefit without being empathic, although I think most people would agree that's not the way we would prefer it. We risk confusing the means and the ends.

One of the examples of how empathy is used in design thinking leadership takes place at a Danish hospital heart clinic, where the leaders asked: "What if the patient's time were viewed as more important than the doctor's?" Asking this question upended the way that many health professionals saw the patient journey and led to improvements such as a reduction in overnight stays. My question is: what did this produce?

What did this mean for the healthcare system as a whole? How about the professionals themselves? Are patients healthier because of the more efficient service they received? Who is deriving the benefits of this decision and who is bearing the risk and cost? What do we get from being empathic?

Failure Failings

Failure is among the most problematic of the words used in this article. Like empathy, failure is a commonly used term within popular writing on innovation and design thinking. The critique of this term in the article is less about how the authors use it explicitly than that it is used at all. This may be as much a matter of the data itself (i.e., if your participants speak of it, it is included in the dataset); however, its profile in the article is what is worth noting.

The issue is a framing problem. As the authors report from their research: “Design-thinking approaches call on employees to repeatedly experience failure”. Failure is a binary concept, which is not useful when dealing with complexity — something that Jon Kolko writes about in his article. If much of what we deal with in designing for human systems is about complexity, why are we anchoring our discussion to binary concepts such as ‘success’ and ‘failure’?

Failure exists only when we know what success looks like. If we are really being innovative, reframing the situation, getting to know our users (and discarding our preconceptions about them), how is it that we can fail? I have argued that the only thing we can steadfastly fail at in these conditions is learning. We can fail to build in mechanisms for data gathering, sensemaking, sharing, and reflecting that are associated with learning, but otherwise what we learn is valuable.

Reframing Our Models

The very fact that this article is in the Harvard Business Review says much about the intended audiences for this piece. I am sympathetic to the authors, and my critique has focused on the details within the expression of the work, not necessarily the intent or capacity of those who created it. However, choices have consequences, and the outcome of this article is that design thinking is framed as a means of generating business improvements. Those are worthy goals, but not the only ones possible.

One of the reasons concepts like ‘failure’ apply to so much of the business literature is that the outcomes are framed in binary or simple terms. It is about improvement, efficiency, profit, and productivity. Business outcomes might also include customer satisfaction, purchase actions, or brand recognition. All of these benefit the company, not necessarily the customer, client, patient, person, or citizen.

If we were truly tackling human-centred problems, we might approach them differently and ask different questions. Terms like failure actually do apply within the business context, not because they support innovation per se, but because the outcomes are pre-set.

Leadership Roles

Bason and Austin's research has merit for many reasons. Firstly, it is evidence-based. They have done the work of interviewing, synthesizing, commenting on, and publishing the research. That in itself makes it a worthy contribution to the field.

It also provides commentary and insight on some practical areas of design leadership, highlighting roles for leaders that readers can apply right away.

One of these roles is managing the tension between divergent and convergent thought and development processes in design work. This includes managing the insecurities that many design teams may express in dealing with the design process and the volume of disorganized content it can generate.

The exemplary leaders we observed ensured that their design-thinking project teams made the space and time for diverse new ideas to emerge and also maintained an overall sense of direction and purpose. 

Bason & Austin, HBR 2019

Another key role of the design leader is to support future thinking. By encouraging design teams to explore and test their work in the context of what could be, not just what is, leaders reframe the goals of the work and the outcomes in ways that support creativity.

Lastly, a key strength of the piece was the encouragement of multi-media forms of engagement and feedback. The authors chose to illustrate how leaders supported their teams in thinking differently about not only the design process but the products for communicating that process (and resulting products) to each other and the outside world. Too often the work of design is lost in translation because the means of communication have not been designed for the outcomes that are needed — something akin to design-driven evaluation.

Language, Learning, Outcomes

By improving how we talk about what we do, we become better at framing the questions we ask about what we do and what impact it has. Doing the right thing means knowing what the wrong thing is. Without evaluation, we run the risk in design of doing what Russell Ackoff cautioned against: doing the wrong things righter.

Reading between the lines of the data — the stories and examples — presented in the article by Bason and Austin reveals the role of managing fear: fear of 'failure', fear from confusion, fear of not doing good work. Design, if it is anything, is optimistic in that it is about making an effort to solve problems, taking action, and generating something that makes a difference. Design leadership is about supporting that work, bringing it into our organizations, and making it accessible.

That is an outcome worth striving for. While there are missed opportunities here, there is also much to build on and lead from.

Lead Photo by Quino Al on Unsplash

Inset Photo by R Mo on Unsplash

evaluation, social systems

Baby, It’s Cold Outside (and Other Evaluation Lessons)

Competing desires or imposing demands?

The recent decision by many radio stations to remove the song “Baby, It’s Cold Outside” from their rotation this holiday season provides lessons on culture, time, perspective, and ethics beyond the musical score for those interested in evaluation. The implications of these lessons extend far beyond any wintery musical playlist. 

As the holiday season approaches, the airwaves, content streams, and in-store music playlists get filled with their annual turn toward songs of Christmas, the New Year, Hanukkah, and the romance of cozy nights inside and snowfall. One of those songs has recently been given the ‘bah humbug’ treatment and voluntarily removed from playlists, initiating a fresh round of debates (which have been around for years) about the song and its place within pop culture art. The song, “Baby, It’s Cold Outside” was written in 1944 and has been performed and recorded by dozens of duets ever since. 

It’s not hard for anyone sensitive to gender relations to find some problematic issues with the song and the defense of it on the surface, but it’s once we get beneath that surface that the arguments become more interesting and complicated. 

One Song, Many Meanings

One of these arguments has come from jazz vocalist Sophie Millman, whose take on the song on the CBC morning radio show Metro Morning was that the lyrics are actually about competing desires within the times, not about predatory advances.

Others, like feminist author Cammila Collar, have gone so far as to describe the opposition to the song as 'slut shaming'.

Despite those points (and acknowledging some of them), others suggest that the manipulative nature of the dialogue attributed to the male singer is a problem no matter what year the song was written. For some, the idea that this was just harmless banter overlooks the enormous power imbalance between genders, both then, when men could impose demands on women with fewer repercussions, and now.

Lacking a certain DeLorean to go back in time to fully understand the intent and context of the song when it was written and released, I came to appreciate that this is a great example of the many challenges evaluators encounter in their work. Is "Baby, It's Cold Outside" good or bad for us? As with many situations evaluators encounter: it depends (and it depends on what questions we ask).

Take (and Use) the Fork

Yogi Berra famously suggested (or didn’t) that “when you come across a fork in the road, take it.” For evaluators, we often have to take the fork in our work and the case of this song provides us with a means to consider why.

A close read of the lyrics and a cursory knowledge of the social context of the 1940s suggest that the arguments put forth by Sophie Millman and Cammila Collar have some merit and at least warrant plausible consideration. This might just be a period piece highlighting playful, slightly romantic banter between a man and a woman on a cold winter night.

At the same time, what we can say with much more certainty is that the song agitates many people now. Lydia Liza and Josiah Lemanski revised the lyrics to create a modern, consensual take on the song, which has a feel far more in keeping with the times. This doesn't negate the original intent and interpretation of the lyrics; rather, it places the song in the current context (not a historical one), and that is important from an evaluative standpoint.

If the intent of the song is to delight and entertain, then what once worked well now might not. In evaluation terms, we might say that while the original merit of the song may hold based on historical context, its worth has changed considerably within the current context.

We may, as Berra might have said, have to take the fork and accept two very different understandings within the same context. We can do this by asking some specific questions. 

Understanding Contexts

Evaluators typically ask (at least) three questions of programs: What is going on? What's new? And what does it mean? In the case of "Baby, It's Cold Outside", we can see that the context has shifted over the years, meaning that no matter how benign the original intent, the potential for misinterpretation or re-visioning of that intent in light of current times is worth considering.

What is going on is that we are seeing a lot of discussion about the subject matter of a song and what it means in our modern society. This issue is an attractor for a bigger discussion of historical treatment, inequalities, and the language and lived experience of gender.

The fact that the song is still being re-recorded and re-imagined by artists illustrates the tension between a historical version and a modern interpretation. It hasn't disappeared, and it may be better known now than ever, given the press it receives.

What’s new is that society is far more aware of the scope and implications of gender-based discrimination, violence, and misogyny in our world than before. It’s hard to look at many historical works of art or expression without referencing the current situation in the world. 

When we ask what it means, that's a different story. The myriad versions of the song are out there on records, CDs, and through a variety of streaming sources. While it might not be included in a few major outlets, it is still available. It is also possible to be a feminist who challenges gender-based violence and discrimination and still love (or leave) the song.

The two perspectives may not be aligned explicitly, but they can be within a larger, higher-level purpose of seeking empowerment and respect for women. It is in this context of tension that we can best understand where works like this live.

This is the tension in which many evaluations live when dealing with human services and systems. There are many contexts, and we can see competing visions and accept them both, yet still work to create a greater understanding of a program, service, or product. Like technology, evaluations aren't good or bad, but neither are they neutral.

Image credit MGM/YouTube via CBC.ca

Note: The writing of this article happened to coincide with the anniversary of the horrific murder of 14 women at L'Ecole Polytechnique de Montreal. It shows that, no matter how we interpret works of art, we all need to be concerned with misogyny and gender-based violence. It's not going away.

evaluation

Meaning and metrics for innovation

Metrics are at the heart of evaluating impact and value in products and services, although they are rarely straightforward. Knowing what makes a good metric requires some thinking about what a metric means in the first place.

I recently read a story on what makes a good metric from Chris Moran, Editor of Strategic Projects at The Guardian. Chris’s work is about building, engaging, and retaining audiences online so he spends a lot of time thinking about metrics and what they mean.

Chris, with support from many others, outlines the five characteristics of a good metric as being:

  1. Relevant
  2. Measurable
  3. Actionable
  4. Reliable
  5. Readable (less likely to be misunderstood)

(What I liked was that he also pointed to additional criteria that didn’t quite make the cut but, as he suggests, could).

This list was developed in the context of communications initiatives, which is exactly the point we need to consider: context matters when it comes to metrics. Context is also holistic, so we need to consider these five (plus the others?) criteria as a whole if we're to develop, deploy, and interpret data from these metrics.

As John Hagel puts it: we are moving from the industrial age, where standardized metrics and scale dominated, to the contextual age.

Sensemaking and metrics

Innovation is entirely context-dependent. A new iPhone might not mean much to someone who has had one, but could be transformative to someone who has never had that computing power in their hand. Home visits by a doctor or healer were once the only way people were treated for sickness (and still are in some parts of the world); now home visits are novel and represent an innovation in many areas of Western healthcare.

Demographic characteristics are one area where sensemaking is critical when it comes to metrics and measures. Sensemaking is a process of literally making sense of something within a specific context. It's used when there are no standard or obvious means to understand the meaning of something at the outset; rather, meaning is made through investigation, reflection, and other data. It is a process that involves asking questions about value — and value is at the core of innovation.

For example, identity questions on race, sexual orientation, gender, and place of origin all require intense sensemaking before, during, and after use. Asking these questions gets us to consider: what is the value of knowing any of this?

How is a metric useful without an understanding of the value it is meant to reflect?

What we've seen from population research is that failure to ask these questions has left many at the margins without a voice: their experience isn't captured in the data used to make policy decisions. We've seen the opposite when we do ask these questions, but unwisely: strange claims about associations, over-generalizations, and stereotypes formed from data that somehow 'links' certain characteristics to behaviours without critical thought. We create policies that exclude because we have data.

The lesson we learn from behavioural science is that, if you have enough data, you can pretty much connect anything to anything. Therefore, we need to be very careful about what we collect data on and what metrics we use.
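
A small simulation with made-up random data illustrates the point: given enough candidate variables, some will correlate with any outcome purely by chance.

```python
import numpy as np

# 1,000 random "predictors" that have nothing to do with the outcome.
rng = np.random.default_rng(0)
outcome = rng.normal(size=50)
candidates = rng.normal(size=(1000, 50))

# The best chance correlation among entirely unrelated variables
# is still sizeable, which is how spurious "links" get made.
strongest = max(abs(np.corrcoef(outcome, c)[0, 1]) for c in candidates)
print(f"strongest spurious correlation: {strongest:.2f}")
```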

The role of theory of change and theory of stage

One reason for these strange associations (or their absence) is the lack of a theory of change to explain why any of these variables ought to play a role in explaining what happens. A good, proper theory of change provides a rationale for why something should lead to something else and what might come from it all. It is anchored in data, evidence, theory, and design (which ties it all together).

Metrics are the means by which we can assess the fit of a theory of change. What often gets missed is that fit also depends on time: some metrics fit better at different points in an innovation's development.

For example, a particular metric might be more useful in later-stage research where there is an established base of knowledge (e.g., when an innovation is mature) than when we are looking at the early formation of an idea. The proof-of-concept stage (i.e., 'can this idea work?') is very different from the 'can this scale?' stage. To that end, metrics need to be fit with something akin to a theory of stage. This would help explain how an innovation might develop at the early stages versus later ones.

Metrics are useful. Blindly using metrics — or using the wrong ones — can be harmful in ways that might be unmeasurable without the proper thinking about what they do, what they represent, and which ones to use.

Choose wisely.

Photo by Miguel A. Amutio on Unsplash

complexity, evaluation, social innovation

Developmental Evaluation’s Traps

Developmental evaluation holds promise for product and service designers looking to understand the process, outcomes, and strategies of innovation and link them to effects. The great promise of DE is also the reason to be most wary of it and to watch for the traps set for those unaware.

Developmental evaluation (DE), when used to support innovation, is about weaving design with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making process that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

This evidence might include the kind of information that helps an organization know when to scale up effort, change direction (“pivot”), or abandon a strategy altogether.

Powerful stuff.

Except, it can also be a trap.

It’s a Trap!

Star Wars fans will recognize the phrase “It’s a Trap!” as one of special — and much parodied — significance. Much like the Rebel fleet’s jeopardized quest to destroy the Death Star in Return of the Jedi, embarking on a DE is no easy or simple task.

DE was developed by Michael Quinn Patton and others working in the social innovation sector in response to the needs of programs operating in areas of high volatility, uncertainty, complexity, and ambiguity, to help them function better within this environment through evaluation. This meant providing the kind of useful data that recognized the context and allowed for strategic decision-making with rigorous evaluation, rather than using tools ill-suited for complexity to simply do the 'wrong thing righter'.

The following are some of the 'traps' I've seen organizations fall into when approaching DE. A parallel set of posts exploring the practicalities of these traps is going up on the Cense site, along with tips and tools to avoid and navigate them.

A trap is something that is usually camouflaged and employs some type of lure to draw people into it. It is, by its nature, deceptive and intended to ensnare those that come into it. By knowing what the traps are and what to look for, you might just avoid falling into them.

A different approach, same resourcing

A major trap in going into a DE is thinking that it is just another type of evaluation and thus requires the same resources as one might put toward a standard evaluation. Wrong.

DE most often requires more resources to design and manage than a standard program evaluation, for many reasons. One of the most important is that DE is about evaluation + strategy + design (the emphasis is on the '+'s). In a DE budget, one needs to account for the fact that three activities that were normally treated separately are now coming together. It may not mean that the costs are necessarily higher (they often are), but the work required will span multiple budget lines.

This also means that operationally one cannot simply have an evaluator, a strategist, and a program designer work separately. There must be some collaboration and time spent interacting for DE to be useful. That requires coordination costs.

Another big issue is that DE data can be ‘fuzzy’ or ambiguous — even if collected with a strong design and method — because the innovation activity usually has to be contextualized. Further complicating things is that the DE datastream is bidirectional. DE data comes from the program products and process as well as the strategic decision-making and design choices. This mutually influencing process generates more data, but also requires sensemaking to sort through and understand what the data means in the context of its use.

The biggest resource that gets missed? Time. This means not giving enough time to have the conversations about the data to make sense of its meaning. Setting aside regular time, at intervals appropriate to the problem context, is a must, and too often organizations don't budget it in.

The second? Focus. While a DE approach can capture an enormous wealth of data about the process, outcomes, strategic choices, and design innovations, there is a need to temper the amount collected. More is not always better. More can be a sign of a lack of focus and can lead organizations to collect data for data's sake, not for a strategic purpose. If you don't have a strategic intent, more data isn't going to help.

The pivot problem

The term pivot comes from the Lean Startup approach and is found in Agile and other product development systems that rely on short-burst, iterative cycles with accompanying feedback. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. Sounds good, right?

It is, except when the results aren't well-grounded in data. DE has given cover to organizations for making arbitrary decisions based on the idea of pivoting when they really haven't executed well or given things enough time to determine whether a change of direction is warranted. I once heard an educator explain how good his team was at pivoting their strategy for training their clients and students. They were taking a developmental approach to the course (because it was on complexity and social innovation). Yet I knew that the team — a group of highly skilled educators — hadn't spent nearly enough time coordinating and planning the course.

There are times when a presenter adds something to a presentation at the last minute to capitalize on what has emerged from the situation and improve its quality, and there are times when someone has not put the time and thought into what they are doing and is rushing at the last minute. One is a pivot in service of excellence; the other is a failure to execute properly. The trap is confusing the two.

Fearing success

“If you can’t get over your fear of the stuff that’s working, then I think you need to give up and do something else” – Seth Godin

A truly successful innovation changes things — mindsets, workflows, systems, and outcomes. Innovation affects the things it touches in ways that might not be foreseen. It also means recognizing that things will have to change in order to accommodate the success of whatever innovation you develop. But change can be hard to adjust to even when it is what you wanted.

It's a strange truth that many non-profits are designed to put themselves out of business. If there were no more political injustices or human rights violations around the world, there would be no Amnesty International. The World Wildlife Fund or Greenpeace wouldn't exist if the natural world were deemed safe and protected. Conversely, there are no prominent NGOs devoted to eradicating polio anymore because we pretty much have… or did we?

Self-sabotage exists for many reasons, including a discomfort with change (staying the same is easier than changing), preservation of status, and a variety of inter-personal, relational reasons, as psychologist Ellen Hendrikson explains.

Seth Godin suggests you need to find something else if you’re afraid of success and that might work. I’d prefer that organizations do the kind of innovation therapy with themselves, engage in organizational mindfulness, and do the emotional, strategic, and reflective work to ensure they are prepared for success — as well as failure, which is a big part of the innovation journey.

DE is a strong tool for capturing success (in whatever form that takes) within the complexity of a situation, and the trap is when the focus is on too many parts, or on ones that aren't providing useful information. It's not always possible to know this at the start, but there are things that can be done to hone the focus over time. As the saying goes: when everything is in focus, nothing is in focus.

Keeping the parking brake on

And you may win this war that’s coming
But would you tolerate the peace? – “This War” by Sting

You can't drive far or well with your parking brake on. If innovation is meant to change systems, you can't keep the same thinking and structures in place and still expect to move forward. Developmental evaluation is not just for understanding your product or service; it's also meant to inform the ways in which that entire process influences your organization. They are symbiotic: one affects the other.

Just as we might fear success, we may also fail to prepare for (or tolerate) it when it comes. Success with one goal means having to set new goals. It moves the goal posts. It also means that one needs to reframe what success means going forward. Sports teams face this problem in reframing their mission after winning a championship. The same is true for organizations.

This is why building a culture of innovation is so important with DE embedded within that culture. Innovation can’t be considered a ‘one-off’, rather it needs to be part of the fabric of the organization. If you set yourself up for change, real change, as a developmental organization, you’re more likely to be ready for the peace after the war is over as the lyric above asks.

Sealing the trap door

Learning — which is at the heart of DE — fails in bad systems. Avoiding the traps discussed above requires building a developmental mindset within an organization along with doing a DE. Without that mindset, it's unlikely anyone will avoid falling into them. Change your mind, and you can change the world.

It's a reminder of the need to put in the work to make change real, and that DE is not just plug-and-play. To quote Martin Luther King Jr.:

“Change does not roll in on the wheels of inevitability, but comes through continuous struggle. And so we must straighten our backs and work for our freedom. A man can’t ride you unless your back is bent.”

For more on how Developmental Evaluation can help you to innovate, contact Cense Ltd and let them show you what’s possible.  

Image credit: Author

behaviour change, business, public health, social media, systems science

Genetic engineering for your brand

DNA doesn't predetermine our future as biological beings, but it does powerfully influence it. Some have applied the concept of 'DNA' to a company or organization in the same way it's applied to biological organisms. Firms like PWC have been at the forefront of this approach, developing organizational DNA assessments and outlining the principles that shape the DNA of an organization. A good brand is an identity that you communicate to yourself and to the world around you. A healthy brand is built on healthy DNA.

Tech entrepreneur and writer Om Malik sees a company's DNA as comprising the people who form the organization:

DNA contains the genetic instructions used to build out the cells that make up an organism. I have often argued that companies are very much like living organisms, comprised of the people who work there. What companies make, how they sell and how they invent are merely an outcome of the people who work there. They define the company.

The analogy between a company's DNA and the people who make it up is apt because, as he points out, organizations reflect the values, habits, mindsets, and focus of those who run them. For that reason, understanding your organization's DNA structure might be critical to shaping its corporate direction and brand and to promoting any type of change, as we see from the case of Facebook.

DNA dilemma: The case of Facebook

Facebook is under fire these days. To anyone paying enough attention to the social media giant, the question isn't why this is happening now, but why it didn't happen sooner. Back when the site was first opened up to allow non-university students to have accounts (signaling what would become the global brand it is today), privacy was a big concern. I still recall listening to a Facebook VP interviewed on a popular tech podcast who basically sloughed off any concerns the interviewer had about privacy, saying the usual "we take this seriously" stuff but offering no example of how that was true, just as the world was about to jump on the platform. I've heard that same kind of interview repeated dozens of times since the mid-2000s, including just nine months before Mark Zuckerberg's recent 'mea culpa' tour.

Facebook has never been one to show much (real) attention to privacy because its business model is all about ensuring that users are as open as possible, so it can collect as much data as possible from them to sell as many services to them, through them, and about them, and for others to manipulate. The Cambridge Analytica story simply exposed to the world what has been happening for years.

Anyone who's tried to change their privacy settings knows that you need more than a Ph.D. to navigate them* and, even then, you're unlikely to be successful. Just look at the case of Bobbi Duncan and Katie McCormick, who were outed as gay to their families through Facebook even though they had locked down their own individual privacy settings. This is all part of what CEO Mark Zuckerberg and the folks at Facebook refer to as "connecting the social graph."

The corporate biology of addiction

In a prescient post, Om Malik wrote about Facebook’s addiction to its business model based on sharing, openness, and exploitation of its users’ information mere weeks before the Cambridge Analytica story came out.

Facebook’s DNA is that of a social platform addicted to growth and engagement. At its very core, every policy, every decision, every strategy is based on growth (at any cost) and engagement (at any cost). More growth and more engagement means more data — which means the company can make more advertising dollars, which gives it a nosebleed valuation on the stock market, which in turn allows it to remain competitive and stay ahead of its rivals.

Whether he knew it or not, Malik was describing an epigenetic model of addiction. Much emerging research on addiction has pointed to a relationship between genes and addictive behaviour. This is a two-way street where genes influence behaviour and behaviour influences a person’s genes (something called epigenetics). The more Facebook seeks to connect through its model, the more it reinforces the behaviour, the more it feels a ‘need’ to do it and therefore repeats it.

In systems terms, this is called a reinforcing loop, and it is part of a larger field of systems science called system dynamics. System dynamics has been applied to public health, showing how we can get caught in traps and the means we use to get out of them. By applying an addiction model and system dynamics to the organization, we might better understand how some organizations change and why some don't.
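
For readers who want to see the loop mechanics, here is a minimal reinforcing-loop sketch in code. The stocks, rates, and names are invented for illustration; this is not a model of Facebook or any real company.

```python
# A toy reinforcing loop: engagement -> data -> revenue -> engagement.
# All values and rates are illustrative.
engagement = 100.0           # arbitrary starting stock
DATA_PER_ENGAGEMENT = 0.5    # data yielded per unit of engagement
REVENUE_PER_DATA = 0.2       # revenue earned per unit of data
REINVESTMENT = 0.5           # share of revenue reinvested in growth

for step in range(1, 11):
    data = engagement * DATA_PER_ENGAGEMENT
    revenue = data * REVENUE_PER_DATA
    engagement += revenue * REINVESTMENT   # the loop closes: growth feeds growth
    print(f"step {step:2d}: engagement = {engagement:6.1f}")
```

Each pass through the loop compounds on the last, which is why reinforcing loops produce the exponential patterns discussed earlier.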

Innovation therapy

The first step toward any behaviour change for an addiction is to recognize the addiction in the first place. Without acknowledgment of a problem, there can’t be much in the way of self-support. This acknowledgment has to be authentic, which is why there is still reason to question whether Facebook will change.

There are many paths to addiction treatment, but the lessons from treating some of the most pernicious behaviours, like cigarette smoking and alcohol use, suggest that change is most likely to succeed when a series of small, continuous, persistent changes are made in a supportive environment. One needs to learn from each step taken (i.e., evaluate the progress and outcomes of each step), integrate that learning, and continue through the inevitable cycling through stages (non-linear change) that sometimes involves moving backward or not knowing where along the change journey you are.

Having regulations or external pressures to change can help, but too much can paralyze action and stymie creativity. And while being motivated to change is important, sometimes it helps to just take action and let the motivation follow.

If this sounds a lot like the process of innovation, you’re right.

Principled for change

Inspiring change in an organization, particularly one with a clear addiction to a business model (a way of doing things, seeing things, and acting), requires the kind of therapy we might see in addiction support programs. Like those programs, there isn't one way to do it, but there are common principles. These include:

  1. Recognize the emotional triggers involved. Most people suffering from addictions can rationalize the reasons to change, but the emotional reasons are much harder. Fear, attraction, and the risk of doing things differently can bubble up when you least expect it. You need to understand these triggers and deal with their emotional aspects, the baggage we all bring.
  2. Change your mindset. Successful innovation involves a change of practice and a change of mindset. The innovator’s mindset shifts from a linear focus on problems, success, and failure to a non-linear focus on opportunities, learning, and developmental design. This allows you to spot the reinforcing-loop behaviour and addiction pathways, as well as the other pathways open to you.
  3. Create better systems, not just different behaviour. Complex systems have path dependencies, those ruts that shape our actions, often unconsciously and out of habit. Consider the ways you organize yourself, your organization’s jobs and roles, its income streams, its system of rewards and recognition, the feedback and learning you engage with, and the composition of your team. This rethinking and reorganization is what changes the DNA; otherwise, it will continue to express itself through your organization in the same way.
  4. Make change visible. Use evaluation to document what you do and what it produces, and keep structuring your work to serve the learning from this. Inertia comes from having no direction and nothing to work toward. We are beings geared toward constant motion and making things; it’s what makes us human. Make a change, by design. Make it visible through evaluation and visual thinking, including the ups, the downs, and the sideways moves. A journey involves knowing where you are (even if that’s lost) and where you’re going (even if that changes).

Change is far more difficult than people often think. Change initiatives rooted solely in motivation are unlikely to produce anything sustainable. You need to get to the root, the DNA, of your organization and build the infrastructure around it so that it works with you, not against you. That, in Facebook terms, is something your brand and its champions will truly ‘Like’.

 

* Seriously. I have a Ph.D., am reasonably tech literate, and have sat down with others of similar educational backgrounds (Ph.D.s, master’s degrees, tech startup founders), and as a group we still couldn’t figure out the privacy settings.

References: For those interested in system dynamics or causal loop modeling, check out this great primer from Nate Osgood at the University of Saskatchewan. His work is top-notch. Daniel Kim has also written excellent, practical material on applying system dynamics to a variety of issues.

Image credit: Shutterstock used under license.

design thinkinginnovation

Design thinking is BS (and other harsh truths)

Ideas&Stairs

Design thinking continues to gain popularity as a means for creative problem-solving and innovation across business and social sectors. It’s time to take stock and consider what ‘design thinking’ is and whether it’s a real option for addressing complex problems, over-hyped BS, or both.

Design thinking has pushed its way from the outside to the front and centre of discussions on innovation development and creative problem-solving. Books, seminars, certificate programs, and even films are being produced to showcase design thinking and inspire those who seek to become more creative in their approach to problem framing, finding, and solving.

A look through the Censemaking archives will turn up considerable work on design thinking and its application to a variety of issues. While I’ve always been enthusiastic about design thinking’s promise, I’ve also been wary of the hype, preferring the term design over design thinking when possible.

What’s been most attractive about design thinking has been that it’s introduced the creative benefits of design to non-designers. Design thinking has made ‘making things’ more tangible to people who may have distanced themselves from making or stopped seeing themselves as creative. Design thinking has also introduced a new language that can help people think more concretely about the process of innovation.

Design thinking: success or BS?

We now see designers elevated to the C-suite — including the role of university president in the case of leading designer John Maeda — and as thought leaders in technology, education, non-profit work and business in large part because of design thinking. So it might have surprised many to see Natasha Jen, a partner at the prestigious design firm Pentagram, do the unthinkable in a recent public talk: trash design thinking.

Speaking at the 99u Conference in New York this past summer, Jen called out what she sees as the ‘bullshit’ of design thinking and how it betrays many of the fundamentals of good design.

One of Jen’s criticisms of design thinking is that it omits what designers call the ‘crit’: the process of having peers, other skilled designers, critique design work early and often. While design thinking models typically include some form of ‘evaluation’, it is hardly a rigorous process. There are few guidelines on how to do it or how to deliver feedback, and little recognition of who is best placed to deliver a crit (there are even guides for those unfamiliar with the critique process in design). It’s not even clear who the right ‘peers’ are for such a thing.

The design thinking movement has emphasized that ‘everyone is a designer.’ This has the positive consequence of encouraging creative engagement in innovation from everyone, increasing the pool of diverse perspectives that can be brought to bear on a topic. What it ignores is that the craft of design involves real skill: just as everyone can dance or sing but not everyone can do it well, not everyone can design well. What has been lost in much of the hype around design thinking is respect for craft and its implications, particularly for evaluation.

Evaluating design thinking’s impact

When I was doing my professional design training, I once got into an argument* with a professor who said: “We know design thinking works.” I challenged back: “Do we? How?” To which he responded: “Of course we do, it just does. Look around,” pointing to the room of my fellow students presumably using ‘design thinking’ in our studio course.

End of discussion.

Needless to say, the argument was, in his eyes, about him being right and me being a fool for not seeing the obvious. For me, it was about the fact that, while I believed the approach loosely called ‘design thinking’ offered something better than traditional methods for addressing many complex challenges, I couldn’t say for sure that it ‘works’ or does ‘better’ than the alternatives. It felt like he was saying hockey is better than knitting.

One of the reasons we don’t know is that solid evaluation isn’t typically done in design. The criterion designers typically use is client satisfaction with the product, given the constraints (e.g., time, budget, style, user expectations). If a client says “I love it!”, that’s about all that matters.

Another problem is that design thinking is often used to tackle complex challenges for which there may be no adequate comparison. We cannot run a randomized controlled trial, the ‘gold standard’ research approach, to test whether design thinking beats ‘non-design thinking.’ The result is that we don’t really know what design thinking’s impact is on the products, services, and processes it is used to create, or at least not enough to compare it to other ways of working.

Showing the work

In grade school math class, it wasn’t sufficient to arrive at an answer and simply declare it without showing your work. The broad field of design (and the practice of design thinking) emphasizes developing and testing prototypes, but ultimately it is the final product that gets assessed. What is done on the way to the final product is rarely given much, if any, attention. Little evaluation is done on the process used to create a design, whether through design thinking or another approach.

The result is that when someone says they used design thinking, we have little idea of the fidelity with which they implemented the ‘model’ or approach. There is hardly any understanding of the dosage (amount), the techniques, the situations, and the human factors (e.g., skill level, cooperation, openness to ideas, personality) that contribute to the designed product, and design reports rarely discuss such things.

Some might argue that such rigorous attention to these aspects of design takes away from the ‘art’ of design, or that design is not amenable to such scrutiny. While the creative process is not a science, that doesn’t mean it can’t be observed and documented. It may be that comparative studies are impractical, but how do we know if we don’t try? What processes like the crit do is open creators, whether teams or individuals, to feedback, alternative perspectives, and new ideas that can prevent poor or weak ideas from moving forward.

Bringing evaluation into the design process is a way to do this.

Going past the hype cycle

Gartner has popularized the concept of the hype cycle, which illustrates how ‘hot’ ideas, technologies, and other innovations get over-sold, then under-appreciated, and are eventually adopted at a level more in line with their actual impact over time.

 

1200px-Gartner_Hype_Cycle.svg.png

Gartner Hype Cycle (source: Wikimedia Commons)
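
One informal way to reproduce the curve’s shape is to treat visibility as the sum of a short-lived hype spike and a slower S-curve of real adoption. The sketch below does this in Python; the functional forms and all parameters are assumptions chosen for illustration, not Gartner’s actual methodology:

```python
# An illustrative sketch of the hype cycle's shape: a short-lived
# "hype" spike (bell curve) plus a slower S-curve of real adoption.
# All functional forms and parameters are assumed for illustration.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)
hype = 1.6 * np.exp(-((t - 2.0) ** 2) / 0.8)       # inflated expectations
adoption = 1.0 / (1.0 + np.exp(-1.5 * (t - 6.0)))  # slope of enlightenment -> plateau
visibility = hype + adoption

plt.plot(t, visibility)
plt.xlabel("Time")
plt.ylabel("Expectations / visibility")
plt.title("A sketch of the hype cycle (illustrative)")
plt.show()
```

The peak comes early and overshoots the plateau, which only emerges once real adoption catches up; the gap between the two is where disillusionment lives.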

 

Design thinking is most likely somewhere past the Peak of Inflated Expectations, but still near the top of the curve. For designers like Natasha Jen, design thinking is well into the Trough of Disillusionment (and may never escape). Design thinking is currently stuck in its ‘bullshit’ phase, and until it embraces more openness in the processes used under its banner, more attention to the skill required to design well, and evaluation of the outcomes it generates, outspoken designers like Jen will continue to be dissatisfied.

We need people like Jen involved in design thinking. The world could benefit from approaches to critical design that produce better, more humane, and more impactful products and services, benefiting more people while doing less harm to the world. We could benefit greatly from having more people inspired to create and open to sharing their experience, expertise, and diverse perspectives on problems. Design thinking has this promise if it is open to applying some of its methods to itself.

* ‘Argument’ implies that the other person was open to hearing my perspective, engaging in dialogue, and providing counter-points to mine. This was not the case.

If you’re interested in learning more about what an evaluation-supported, critical, and impactful approach to design and design thinking could look like for your organization or problem, contact Cense and see how they can help you out. 

Image Credit: Author