All posts by Cameron D. Norman

I am a designer, psychologist, educator, and strategist focused on innovation in human systems. I'm curious about the world around me and use my role as Principal and President of Cense Ltd. as a means of channeling that curiosity into ideas, questions, and projects that contribute to a better world.

evaluation, social systems

Baby, It’s Cold Outside (and Other Evaluation Lessons)

Competing desires or imposing demands?

The recent decision by many radio stations to remove the song “Baby, It’s Cold Outside” from their rotation this holiday season provides lessons on culture, time, perspective, and ethics beyond the musical score for those interested in evaluation. The implications of these lessons extend far beyond any wintery musical playlist. 

As the holiday season approaches, the airwaves, content streams, and in-store music playlists get filled with their annual turn toward songs of Christmas, the New Year, Hanukkah, and the romance of cozy nights inside and snowfall. One of those songs has recently been given the ‘bah humbug’ treatment and voluntarily removed from playlists, initiating a fresh round of debates (which have been around for years) about the song and its place within pop culture. The song, “Baby, It’s Cold Outside”, was written in 1944 and has been recorded as a duet by dozens of artists ever since.

It’s not hard for anyone sensitive to gender relations to find problematic issues with the song (and the defense of it) on the surface, but it’s once we get beneath that surface that the arguments become more interesting and complicated.

One Song, Many Meanings

One of these arguments has come from jazz vocalist Sophie Millman, whose take on the song on the CBC morning radio show Metro Morning was that the lyrics are actually about competing desires within the times, not a work about predatory advances.

Others, like feminist author Cammila Collar, have gone so far as to describe the opposition to the song as ‘slut shaming’.

Despite those points (and acknowledging some of them), others suggest that the manipulative nature of the dialogue attributed to the male singer is a problem no matter what year the song was written. For some, the idea that this was just harmless banter overlooks the enormous power imbalance between genders, then and now, in which men could impose demands on women with fewer consequences.

Lacking a certain DeLorean to go back in time and fully understand the intent and context of the song when it was written and released, I came to appreciate that this is a great example of the many challenges that evaluators encounter in their work. Is “Baby, It’s Cold Outside” good or bad for us? As with many situations evaluators encounter: it depends (and depends on what questions we ask).

Take (and Use) the Fork

Yogi Berra famously suggested (or didn’t) that “when you come to a fork in the road, take it.” Evaluators often have to take the fork in their work, and the case of this song provides a means to consider why.

A close read of the lyrics and a cursory knowledge of the social context of the 1940s suggests that the arguments put forth by Sophie Millman and Cammila Collar have some merit and at least warrant plausible consideration. This might just be a period piece highlighting playful, slightly romantic banter between a man and woman on a cold winter night. 

At the same time, what we can say with much more certainty is that the song agitates many people now. Lydia Liza and Josiah Lemanski revised the lyrics to create a modern, consensual take on the song, which has a feel that is far more in keeping with the times. This doesn’t negate the original intent and interpretation of the lyrics, rather it places the song in the current context (not a historical one) and that is important from an evaluative standpoint.

If the intent of the song is to delight and entertain, then what once worked well now might not. In evaluation terms, we might say that while the original merit of the song may hold based on historical context, its worth has changed considerably within the current context.

We may, as Berra might have said, have to take the fork and accept two very different understandings within the same context. We can do this by asking some specific questions. 

Understanding Contexts

Evaluators typically ask (at least) three questions of programs: What is going on? What’s new? and What does it mean? In the case of “Baby, It’s Cold Outside”, we can see that the context has shifted over the years, meaning that no matter how benign the original intent, the potential for misinterpretation or re-visioning of that intent in light of current times is worth considering.

What is going on is that we are seeing a lot of discussion about the subject matter of a song and what it means in our modern society. This issue is an attractor for a bigger discussion of historical treatment, inequalities, and the language and lived experience of gender.

The fact that the song is still being re-recorded and re-imagined by artists illustrates the tension between a historical version and a modern interpretation. It hasn’t disappeared and it may be more known now than ever given the press it receives.

What’s new is that society is far more aware of the scope and implications of gender-based discrimination, violence, and misogyny in our world than before. It’s hard to look at many historical works of art or expression without referencing the current situation in the world. 

When we ask about what it means, that’s a different story. The myriad versions of the song are out there on records, CDs, and through a variety of streaming sources. While it might not be included in a few major outlets, it is still available. It is also possible to be a feminist and challenge gender-based violence and discrimination and love or leave the song.

The two perspectives may not be aligned explicitly, but they can be aligned with a larger, higher-level purpose of seeking empowerment and respect for women. It is within this context of tension that we can best understand where works like this live.

This is the tension in which many evaluations live when dealing with human services and systems. There are many contexts, and we can see competing visions and accept them both, yet still work to create a greater understanding of a program, service, or product. Like technology, evaluations aren’t good or bad, but neither are they neutral.

Image credit MGM/YouTube via CBC.ca

Note: The writing of this article happened to coincide with the anniversary of the horrific murder of 14 women at L’École Polytechnique de Montréal. It shows that, no matter how we interpret works of art, we all need to be concerned with misogyny and gender-based violence. It’s not going away.

education & learning, evaluation

Learning: The Innovators’ Guaranteed Outcome

Innovation involves bringing something new into the world and that often means a lot of uncertainty with respect to outcomes. Learning is the one outcome that any innovation initiative can promise if the right conditions are put into place. 

Innovation — the act of doing something new to produce value — in human systems is fraught with complications from the standpoint of evaluation, given that the outcomes are not always certain, the processes aren’t standardized (or even set), and the relationship between the two is often in an ongoing state of flux. And yet, evaluation is of enormous importance to innovators looking to maximize benefit, minimize harm, and seek solutions that can potentially scale beyond their local implementation.

Non-profits and social innovators are particularly vexed by evaluation because there is an often unfair expectation that their products, services, and programs make a substantial change to social issues such as poverty, hunger, employment, chronic disease, and the environment (to name a few). These are issues that are large, complex, and over which no actor has complete ownership or control, yet they require some form of action, individually and collectively.

What is an organization to do or expect? What can they promise to funders, partners, and their stakeholders? Apart from what might be behavioural or organizational outcomes, the one outcome that an innovator can guarantee — if they manage themselves right — is learning.

Learning as an Outcome

For learning to take place, a few things need to be included in any innovation plan. The first is some form of data capture of the activities undertaken in the design of the innovation. This is often the first hurdle that many organizations face because designers are notoriously bad at showing their work. Innovators (designers) need to capture what they do and what they produce along the way. This might include false starts, stops, ‘failures’, and half-successes, which are all part of the innovation process. Documenting what happens between idea and creation is critical.

Secondly, there needs to be some mechanism to attribute activities and actions to indicators of progress. Change can only be detected in relation to something else, so, in the process of innovation, we need to be able to compare events, processes, activities, and products at different stages. The selection of some of these indicators might be arbitrary at first, but as time moves along it becomes easier to know whether a stop or start is really just a ‘pause’ or a genuine pivot or change in direction.
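These two ideas (capturing activities, then comparing them across stages) can be sketched in code. This is a minimal, hypothetical illustration, not a real framework; all the names and categories here are assumptions for the sake of example.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: record innovation activities (starts, stops,
# pivots, half-successes) so they can later be compared across stages.

@dataclass
class Activity:
    day: date
    description: str
    kind: str  # e.g., "start", "stop", "pivot", "half-success"

@dataclass
class InnovationLog:
    activities: list = field(default_factory=list)

    def record(self, day, description, kind):
        self.activities.append(Activity(day, description, kind))

    def between(self, start, end):
        """Slice the log so one stage can be compared to another."""
        return [a for a in self.activities if start <= a.day <= end]

log = InnovationLog()
log.record(date(2018, 1, 5), "Prototype sketch abandoned", "stop")
log.record(date(2018, 2, 1), "New direction after user feedback", "pivot")
early = log.between(date(2018, 1, 1), date(2018, 1, 31))
```

Even something this simple makes the "was that a pause or a pivot?" question answerable later, because the record exists to look back on.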

Learning as organization

Andrew Taylor and Ben Liadsky from Taylor Newberry Consulting recently wrote a great piece on the American Evaluation Association’s AEA 365 blog outlining a simple approach to asking questions about learning outcomes. Writing about their experience working with non-profits and grantmakers, they comment on how evaluation and learning require creating a culture that supports the two in tandem:

Given that organizational culture is the soil into which evaluators hope to plant seeds, it may be important for us to develop a deeper understanding of how learning culture works and what can be done to cultivate it.

What Andrew and Ben speak of is the need to create, at the start, the environment in which learning can occur. Some of that is stirred by asking the critical questions they point out in their article. These include identifying whether there are goals for learning in the organization and what kind of time and resources are invested in regularly gathering people together to talk about the work that is done. This is the third big part of evaluating for learning: create the culture for it to thrive.

Creating Consciousness

It’s often said that learning is as natural as breathing, but if that were true much more would be gained from innovation than there is. Just like breathing, learning can take place passively or be manipulated and controlled. In both cases, there is a need to create a consciousness around what ‘lessons’ abound.

Evaluation serves to make the unconscious, conscious. By paying attention — being mindful — of what is taking place and linking that to innovation at the level of the organization (not just the individual) evaluation can be a powerful tool to aid the process of taking new ideas forward. While we cannot always guarantee that a new idea will transform a problem into a solution, we can ensure that we learn in our effort to make change happen. 

The benefit of learning is that it can scale. Many innovations can’t, but learning is something that can readily be added to and built on, and that transforms the learner. In many ways, learning is the ultimate outcome. So next time you look to undertake an innovation, make sure to evaluate it and build in the kind of questions that help ensure that, no matter what the risks are, you can assure yourself a positive outcome.

Image Credit: Rachel on Unsplash

education & learning, evaluation

The Quality Conundrum in Evaluation


One of the central pillars of evaluation is assessing the quality of something, often described as its merit. Along with worth (value) and significance (importance), assessing the merit of a program, product or service is one of the principal areas that evaluators focus their energy.

However, if you think that would be something that’s relatively simple to do, you would be wrong.

This was brought home clearly in a discussion I took part in during a session on quality and evaluation at the recent conference of the American Evaluation Association entitled: Who decides if it’s good? How? Balancing rigor, relevance, and power when measuring program quality. The conversation session was hosted by Madeline Brandt and Kim Leonard from the Oregon Community Foundation, who presented on some of their work in evaluating quality within the school system in that state.

In describing the context of their work in schools, I was struck by some of the situational variables that came into play, such as high staff turnover (and a resulting shortage among the staff who remain) and the decision to operate some schools on a four-day week instead of five as a means of addressing shortfalls in funding. I’ve since learned that Oregon is not alone in adopting the four-day school week; many states have begun experimenting with it to curb costs. The argument is, presumably, that schools can and must do more with less time.

This means that students are receiving up to one-fifth less classroom time each week, yet are expected to perform at the same level as those with five days. What does that mean for quality? Like much of evaluation work, it all depends on the context.

Quality in context

The United States has a long history of standardized testing, which was instituted partly as a means of ensuring quality in education. The thinking was that, with such diversity in schools, school types, and populations there needed to be some means to compare the capabilities and achievement across these contexts. A standardized test was presumed to serve as a means of assessing these attributes by creating a benchmark (standard) to which student performance could be measured and compared.

While there is a certain logic to this, standardized testing has a series of flaws embedded in its core assumptions about how education works. For starters, it assumes a standard curriculum and model of instruction that is largely one-size-fits-all. Anyone who has been in a classroom knows this is simply not realistic or appropriate. Teachers may teach the same material, but the manner in which it is introduced and engaged with is meant to reflect the state of the classroom — its students, physical space, availability of materials, and place within the curriculum (among others).

If we put aside, for a minute, the ridiculous assumption that all students are alike in their ability and preparedness to learn each day and just focus on the classroom itself, we can already see the problem with evaluating quality by looking back at the four-day school week. Four-day weeks mean either that teachers are taking short-cuts in how they introduce subjects and are not teaching all of the material they have, or that they are teaching the same material in a compressed amount of time, giving students less opportunity to ask questions and engage with the content. This means the intervention (i.e., classroom instruction) is not consistent across settings, so how could one expect something like a standardized test to reflect a common attribute? What quality education means in this context is different than in others.

And that’s just the variable of time. Consider the teachers themselves. If we have high staff turnover, it is likely an indicator that there are some fundamental problems with the job. It may be low pay, poor working conditions, unreasonable demands, insufficient support or recognition, or little opportunity for advancement to name a few. How motivated, supported, or prepared do you think these teachers are?

With all due respect to those teachers, they may be incompetent to facilitate high-quality education in this kind of classroom environment. By incompetent, I mean not being prepared to manage compressed schedules, lack of classroom resources, demands from standardized tests (and parents), high student-teacher ratios, individual student learning needs, plus fitting in the other social activities that teachers participate in around school such as clubs, sports, and the arts. Probably no teachers have the competency for that. Those teachers — at least the ones that don’t quit their job — do what they can with what they have.

Context in Quality

This situation then demands new thinking about what quality means in the context of teaching. Is a high-quality teaching performance one where teachers are better able to adapt, respond to the changes, and manage to simply get through the material without losing their students? It might be.

Exemplary teaching in the context of depleted or scarce resources (time, funding, materials, attention) might look far different than if conducted under conditions of plenty. The learning outcomes might also be considerably different, too. So the link between the quality of teaching and learning outcomes is highly dependent on many contextual variables that, if we fail to account for them, will misattribute causes and effects.

What does this mean for quality? Is it an objective standard or a negotiated, relative one? Can it be both?

This is the conundrum that we face when evaluating something like the education system and its outcomes. Are we ‘lowering the bar’ for our students and society by recognizing outstanding effort in the face of unreasonable constraints or showing quality can exist in even the most challenging of conditions? We risk accepting something that under many conditions is unacceptable with one definition and blaming others for outcomes they can’t possibly achieve with the other.

From the perspective of standardized tests, the entire system is flawed to the point where the measurement is designed to capture outcomes that schools aren’t equipped to generate (even if one assumes that standardized tests measure the ‘right’ things in the ‘right’ way, which is another argument for another day).

Speaking truth to power

This year’s AEA conference theme was speaking truth to power, and this situation provides a strong illustration of that. While evaluators may not be able to resolve this conundrum, what they can do is illuminate the issue through their work. By drawing attention to the standards of quality, their application, and the conditions that are associated with their realization in practice, not just theory, evaluation can serve to point to areas where there are injustices, unreasonable demands, and areas for improvement.

Rather than assert blame or unfairly label something as good or bad, evaluation, when done with an eye to speaking truth to power, can play a role in fostering quality and promoting the kind of outcomes we desire, not just the ones we get. In this way, perhaps the real measure of quality is the degree to which our evaluations do this. That is a standard that, as a profession, we can live up to and that our clients — students, teachers, parents, and society — deserve.

Image credit:  Lex Sirikiat

evaluation

Meaning and metrics for innovation


Metrics are at the heart of evaluating impact and value in products and services, although they are rarely straightforward. Deciding what makes a good metric first requires some thinking about what a metric means.

I recently read a story on what makes a good metric from Chris Moran, Editor of Strategic Projects at The Guardian. Chris’s work is about building, engaging, and retaining audiences online so he spends a lot of time thinking about metrics and what they mean.

Chris — with support from many others — outlines the five characteristics of a good metric as being:

  1. Relevant
  2. Measurable
  3. Actionable
  4. Reliable
  5. Readable (less likely to be misunderstood)

(What I liked was that he also pointed to additional criteria that didn’t quite make the cut but, as he suggests, could).

This list was developed in the context of communications initiatives, which is exactly the point we need to consider: context matters when it comes to metrics. Context is also holistic, so we need to consider these five (plus the others?) criteria as a whole if we’re to develop, deploy, and interpret data from these metrics.
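One way to see what "considering the criteria as a whole" means in practice is to treat them as an all-or-nothing screen, where the judgments are made afresh in each context. The sketch below is purely illustrative; the function name and the example judgments are assumptions, not anything from Chris Moran's piece.

```python
# Illustrative only: Chris Moran's five criteria as a whole-or-nothing
# screen. The judgments (True/False) must be made per context.

CRITERIA = ["relevant", "measurable", "actionable", "reliable", "readable"]

def screen_metric(name, assessment):
    """Return which criteria a candidate metric fails, given a dict of
    criterion -> bool judgments made in a specific context."""
    failed = [c for c in CRITERIA if not assessment.get(c, False)]
    return {"metric": name, "passes": not failed, "failed": failed}

result = screen_metric(
    "page views",
    {"relevant": True, "measurable": True, "actionable": False,
     "reliable": True, "readable": True},
)
```

The point of the whole-or-nothing design is that a metric that is measurable and reliable but not actionable still fails the screen; partial credit would undercut the "as a whole" requirement.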

As John Hagel puts it: we are moving from the industrial age where standardized metrics and scale dominated to the contextual age.

Sensemaking and metrics

Innovation is entirely context-dependent. A new iPhone might not mean much to someone who has had one but could be transformative to someone who’s never had that computing power in their hand. Home visits by a doctor or healer were once the only way people were treated for sickness (and is still the case in some parts of the world) and now home visits are novel and represent an innovation in many areas of Western healthcare.

Demographic characteristics are one area where sensemaking is critical when it comes to metrics and measures. Sensemaking is a process of literally making sense of something within a specific context. It’s used when there are no standard or obvious means to understand the meaning of something at the outset; rather, meaning is made through investigation, reflection, and other data. It is a process that involves asking questions about value — and value is at the core of innovation.

For example, identity questions on race, sexual orientation, gender, and place of origin all require intense sensemaking before, during, and after use. Asking these questions gets us to consider: what value is it to know any of this?

How is a metric useful without an understanding of the value in which it is meant to reflect?

What we’ve seen from population research is that failure to ask these questions has left many at the margins without a voice — their experience isn’t captured in the data used to make policy decisions. We’ve also seen what happens when we ask these questions unwisely: strange claims about associations, over-generalizations, and stereotypes formed from data that somehow ‘links’ certain characteristics to behaviours without critical thought. We create policies that exclude because we have data.

The lesson we learn from behavioural science is that, if you have enough data, you can pretty much connect anything to anything. Therefore, we need to be very careful about what we collect data on and what metrics we use.

The role of theory of change and theory of stage

One reason for these strange associations (or absence) is the lack of a theory of change to explain why any of these variables ought to play a role in explaining what happens. A good, proper theory of change provides a rationale for why something should lead to something else and what might come from it all. It is anchored in data, evidence, theory, and design (which ties it together).

Metrics are the means by which we can assess the fit of a theory of change. What often gets missed is that fit is also bound to context by time. Some metrics have a better fit at different times during an innovation’s development.

For example, a particular metric might be more useful in later-stage research where there is an established base of knowledge (e.g., when an innovation is mature) than when we are looking at the early formation of an idea. The proof-of-concept stage (i.e., ‘can this idea work?’) is very different from the ‘can this scale?’ stage. To that end, metrics need to be fit with something akin to a theory of stage. This would help explain how an innovation might develop at the early stages versus later ones.
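A "theory of stage" could be as simple as an explicit mapping from development stage to the metrics appropriate at that stage. The stage names and metrics below are entirely assumed for illustration; the value is in making the stage-to-metric pairing explicit rather than using one fixed set of metrics throughout.

```python
# Hypothetical sketch of a 'theory of stage': which metrics fit
# depends on where the innovation is in its development.
# Stage names and metric lists are illustrative assumptions.

STAGE_METRICS = {
    "proof-of-concept": ["does it work at all", "user interest"],
    "pilot": ["usability", "early outcomes", "cost per user"],
    "scale": ["reach", "reliability", "outcomes across contexts"],
}

def metrics_for(stage):
    """Look up the metrics appropriate to a given development stage."""
    if stage not in STAGE_METRICS:
        raise ValueError(f"Unknown stage: {stage}")
    return STAGE_METRICS[stage]
```

Making the mapping explicit also forces the conversation the post calls for: someone has to decide, and defend, which metrics belong at which stage.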

Metrics are useful. Blindly using metrics — or using the wrong ones — can be harmful in ways that might be unmeasurable without the proper thinking about what they do, what they represent, and which ones to use.

Choose wisely.

Photo by Miguel A. Amutio on Unsplash

evaluation, innovation

Understanding Value in Evaluation & Innovation


Value is literally at the root of the word evaluation yet is scarcely mentioned in the conversation about innovation and evaluation. It’s time to consider what value really means for innovation and how evaluation provides answers.

Design can be thought of as the discipline — the theory, science, and practice — of innovation. Thus, understanding the value of design is partly about the understanding of valuation of innovation. At the root of evaluation is the concept of value. One of the most widely used definitions of evaluation (pdf) is that it is about merit, worth, and significance — with worth being a stand-in for value.

The connection between worth and value in design was discussed in a recent article by Jon Kolko from Modernist Studio. He starts from the premise that many designers conceive of value as the price people will pay for something and points to the dominant orthodoxy in SaaS applications “where customers can choose between a Good, Better, and Best pricing model. The archetypical columns with checkboxes shows that as you increase spending, you “get more stuff.””

Kolko goes on to take a systems perspective of the issue, noting that much of the value created through design is not piecemeal but aggregated into the experience of whole products and services, and not easily divisible into component parts. Value as a factor of cost or price breaks down when it treats our communities, customers, and clients as mere commodities that can be bought and sold.

Kolko ends his article with this comment on design value:

Design value is a new idea, and we’re still learning what it means. It’s all of these things described here: it’s cost, features, functions, problem solving, and self-expression. Without a framework for creating value in the context of these parameters, we’re shooting in the dark. It’s time for a multi-faceted strategy of strategy: a way to understand value from a multitude of perspectives, and to offer products and services that support emotions, not just utility, across the value chain.

Talking value

It’s strange that the matter of value is so under-discussed in design given that creating value is one of its central tenets. Equally perplexing is how little value is discussed in the process of creating things or in their final designed form. And since design is really the discipline of innovation, which is the intentional creation of value using something new, evaluation is an important concept in understanding design value.

One of the big questions professional designers wrestle with at the start of any engagement with a client is: “What are you hiring [your product, service, or experience] to do?”

What evaluators ask is: “Did your [product, service, or experience (PSE)] do what you hired it to do?”

“To what extent did your PSE do what you hired it to do?”

“Did your PSE operate as it was expected to?”

“What else did your PSE do that was unexpected?”

“What lessons can we learn from your PSE development that can inform other initiatives and build your capacity for innovation as an organization?”

In short, evaluation is about asking: “What value does your PSE provide and for whom and under what context?”

Value creation, redefined

Without asking the questions above, how do we know value was created at all? Without evaluation, there is no means of claiming that value was generated with a PSE, whether expectations were met, or whether what was designed was implemented at all.

By asking questions about value and how we know more about it, innovators are better positioned to design PSEs that are value-generating for their users, customers, clients, and communities as well as their organizations, shareholders, funders, and leaders. This redefinition of value as an active concept gives us the opportunity to see value in new places and not waste it.

Image Credit: Value Unused = Waste by Kevin Krejci adapted under Creative Commons 2.0 License via Flickr

Note: If you’re looking to hire evaluation to better your innovation capacity, contact us at Cense. That’s what we do.

business

Strategy: Myths, fantasies, and reality


A defining feature of sustained excellence in any enterprise is a good strategy — a vision and plan linked to the delivery of something of value, consistently. One of the big reasons many organizations fail to thrive is not just that they have the wrong strategy, but that they don’t have one at all (and think they do).

Strategy is all about perception.

Whether you think you have one or not is partly a matter of perception. Whether you are delivering a strategy in practice or not is also a matter of perception. Why? Because strategy is what links what you build your organization for, what you drive it toward, and what you actually achieve. Lots of organizations achieve positive results by happenstance (being at the right place at the right time). That kind of luck can happen to anyone, but it hardly constitutes a strategy.

Also, statements of intent are great for creating the perception of strategy because one can always say they are working toward something in the abstract, but without a clear sense of how intentions are connected to actions and those actions connected to outcomes, there really isn’t a strategy.

Do you have a strategy?

The best example of this is the entertaining and instructive illustrated book ‘I Have a Strategy (No You Don’t)’, in which Howell J. Malham Jr literally illustrates the problems that beset conversations about strategy as it chronicles two characters (Larry and Gary) talking about the subject and busting the myths associated with what strategy is and is not. One exchange between the two goes like this:

Larry: “Hey Gary, I was working a strategy to put a cookie back in a cookie jar but I tripped and fell and the cookie flew into my mouth instead. Good strategy, huh?”

Gary: “That’s not a strategy. That’s a happy accident, Larry.”

The entire book is like this. One misconception after another is clarified through one character using the term strategy to mean something other than what it really is. These misconceptions, misuses, and mistakes with the concept of strategy may be why it is so poorly done in practice.

Malham’s work is my favourite on strategy because it encapsulates so many of the real-world conversations I witness (and have been a part of) for years with colleagues and clients alike. Too much conversation on strategy is about things that are not really about strategy at all like wishes, needs, or opportunities.

This isn’t to suggest that all outcomes are planned or connected to a strategy, but the absence of a strategy means you’re operating at the whim of chance, circumstance, and opportunism. This is hardly the stuff of inspiration and isn’t sustainable. Strategy is about connecting purpose, plans, execution, and delivery. Malham defines a strategy as having the following properties:

1. It has an intended purpose;
2. There is a plan;
3. There is a sequence of actions (interdependent events);
4. It leads toward a distinct, measurable goal.

When combined with evaluation, organizations build a narrative and understanding of not only whether a strategy leads toward a goal, but what actions make a difference (and to what degree), what aspects of a plan fit and didn’t fit, and what outcomes emerge from the efforts (including those that were unintended).
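Malham's four properties can be read as a completeness check: missing any one of them means what you have is not yet a strategy. The sketch below is a hedged illustration of that reading; the field names are assumptions, not anything from the book.

```python
# Hypothetical sketch: Malham's four properties as a simple
# completeness check on a described strategy. Field names are
# illustrative assumptions.

REQUIRED = ["purpose", "plan", "action_sequence", "measurable_goal"]

def is_strategy(candidate: dict) -> bool:
    """A strategy must have all four properties present and non-empty;
    anything less is a wish, a hope, or a happy accident."""
    return all(candidate.get(k) for k in REQUIRED)

# Larry's cookie, roughly: a purpose with no plan, sequence, or goal.
happy_accident = {"purpose": "put the cookie back in the jar"}
assert not is_strategy(happy_accident)
```

The check is deliberately all-or-nothing: a purpose without a plan, or a plan without a measurable goal, fails, which mirrors the book's point that most things called "strategy" are something else.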

A look at much of the discourse on strategy finds that many organizations not only don’t have strategic plans, they don’t even have plans.

Words and action

One of the biggest problems with “capital ‘S’ Strategy” (the kind espoused in management science) is that it is filled with jargon and, ironically, contributes greatly to the very lack of strategic thinking that it seeks to inspire. It’s one of the reasons I like Malham’s book: it cuts through the jargon. I used to work with a senior leader who used all the language of strategy in talks, presentations, and writing but was wholly incapable or unwilling to commit to a strategic direction when it came to discussing plans and actions for their organization.

Furthermore, a strategy is only marginally useful if you develop it and then don’t bother to evaluate it to see what happened, how, and to what effect. Without action tied to strategy, it is no better than a wish list and probably no more useful than a New Year’s resolution.

Linking those plans to action is why design is such an important — and sadly, highly neglected — part of strategy development. Design is the process of shifting how we see problems, explore possibilities, and create pathways that lead to solutions. Design is not theoretical; it is practical, and without design doing, design thinking is impotent.

Two A’s of Strategy: Adaptation vs Arbitrary

The mistake for organizations working in zones of high complexity (which is increasingly most of those working with human services) is assuming that strategy needs to be locked in place and executed blindly to be effective. Strategy is developed in and for a context and if that situation changes, the strategy needs to change, too. This isn’t about throwing it out but adapting.

Adaptive strategy is a means of innovating responsibly, but it can become a trap: adaptations need to be built on data and experience, not spurious conclusions. Arbitrary decisions are often at the root of bad (or no) strategy.

Roger Martin, one of the brightest minds on strategy, has called out what he sees as the sloppy use of “adaptive strategy” as a stand-in for arbitrary decision-making, going so far as to call it a ‘cop-out’. One of the biggest problems is that strategy is often not viewed in systems terms: as part of an interconnected set of plans, actions, and evaluations made simultaneously, not sequentially.

Good strategy is not a set of steps, but a set of cascading choices that influence the operations and outcomes simultaneously. Strategy is also about being active, not passive, about what it means to design and create an organization.

Grasping strategy for what it is, not what we imagine it to be, can be a key factor in shaping not only what you do but how well you do it. Having conversations like those in Howell J. Malham’s book is one means of getting things moving. Taking action on those things is another.

 

Image credit: Photo by Paul Skorupskas on Unsplash

behaviour change, business, design thinking

How do we sit with time?


Organizational transformation efforts from culture change to developmental evaluation all depend on one ingredient that is rarely discussed: time. How do we sit with this and avoid the trap of aspiring for greatness while failing to give it the time necessary to make change a reality? 

Toolkits are a big hit with those looking to create change. In my years of work with organizations large and small supporting behaviour change, innovation, and community development, few terms light up people’s faces more than “toolkit”. Usually, that term is mentioned by someone other than me, but that doesn’t stop the palpable excitement at the prospect of having a set of tools that will solve a complex problem.

Toolkits work with simple problems. A hammer works well with nails. Drills are good at making holes. With enough tools and some expertise, you can build a house. Organizational development or social change is a complex challenge where tools don’t have the same linear effect. A tool — a facilitation technique, an assessment instrument, a visualization method — can support change-making, but the application and potential outcome of these tools will always be contextual.

Tools and time

My experience has been that people will go to great lengths to acquire tools yet put comparatively little effort into using them. A body of psychological research shows there are differences between goals, the implementation intentions behind them, and the actual achievement of those goals. In other words: desiring change, planning and intending to make a change, and actually doing something are all different things.

Tools are proxies for this issue in many ways: having tools doesn’t mean they get used or that they actually produce change. Anyone in the fitness industry knows that the numbers of those who try a workout, those who buy a club membership, and those who regularly show up to work out are quite different.

Or consider the Japanese term Tsundoku, which loosely translates into the act of acquiring reading materials and letting them pile up in one’s home without reading them.

But tools are stand-ins for something far more important and powerful: time.

The pursuit of tools and their use is often hampered because organizations do not invest in the time to learn, appropriately apply, refine, and sense-make the products that come through these tools.

A (false) artifact of progress


Consider the book buying or borrowing example above: we calculate the cost of the book when we really ought to price out the time required to read it. Or, in the case of practical non-fiction, the cost to read it and apply its lessons.

Yet a shelf filled with books provides the appearance of holding the knowledge contained within, without any evidence that its contents have been read. The same is true of tools: once acquired, it’s easy to assume the work is largely done. I’ve seen this firsthand with people doing exactly what the Buddhist phrase warns against:

“Do not confuse the finger pointing to the moon for the moon itself”

It’s the same confusion we see between having data or models and the reality they represent.

These things all represent artifacts of progress and a false equation. More books or data or better models do not equal more knowledge. But showing that you have more of something tangible is a seductive proxy. Time has no proxy; that’s the biggest problem.

Time just disappears, is spent, is used, or whatever metaphor you choose to express it. Time is Chronos or Kairos, the sequence of moments or the opportune moment itself, but in either case, it bears no clear markers.

Creating time markers

There are some simple tricks to create the same accumulation effect in time-focused work — tools often used to support developmental evaluation and design. Innovation is as much about the process as it is the outcome when it comes to marking effort. The temptation is to focus on the products — the innovations themselves — and lose what was generated to get there. Here are some ways to change that.

  1. Timelines. Creating live (regular) recordings of what key activities are being engaged and connecting them together in a timeline is one way to show the journey from idea to innovation. It also provides a sober reminder of the effort and time required to go through the various design cycles toward generating a viable prototype.
  2. Evolutionary Staging. Document the prototypes created through photographs, video, or even showcasing versions (in the case of a service or policy where the visual element isn’t as prominent). This is akin to the March of Progress image used to show human evolution. By capturing these things and noting the time and timing of what is generated, you create an artifact that shows the time that was invested and what was produced from that investment. It’s a way to honour the effort put toward innovation.
  3. Quotas & Time Targets. I’m usually reluctant to prescribe a specific amount of time one should spend on reflection and innovation-related sensemaking, but it’s evident from the literature that goals, targets, and quotas are effective motivators for some people. If you generate a realistic set of targets for thoughtful work, these can be something to aspire to and a way to drive activity. By tracking the time invested in sensemaking, reflection, and design, you can better account for what was done and also create a marker that makes time seem more tangible.

These are three ways to make time visible, although it’s important to remember that the purpose isn’t just to accumulate time but to actually sit with it.

All the tricks and tools won’t bring the benefit of what time can offer to an organization unwilling to invest in it, mindfully. Except, perhaps, a clock.

Try these out with some simple tasks. Another approach is to treat time like any other resource: budget it. Set aside time in a calendar by booking key reflective activities in just as you would anything else. Doing this, and keeping to it, requires leadership and the organizational supports necessary to ensure that learning can take place. Consider what is keeping you from taking or making the time to learn, share those thoughts with your peers, and then consider how you might redesign what you do and how you do it to support that learning.

Take time for that, and you’re on your way to something better.

 

If you’re interested in learning more about how to do this practically, using data, and designing the conditions to support innovation, contact me. This is the kind of stuff that I do.