design thinking

Leadership & Design Thinking: Missed Opportunities

A recent article titled ‘The Right Way to Lead Design Thinking’ gets a lot of things wrong, not because of what it says, but because of the way it says it. If we are to see better outcomes from what we create, we need to begin by talking about design and design thinking differently.

I cringed when I first saw it in my LinkedIn feed. There it was: The Right Way to Lead Design Thinking. I tend to bristle when I see broad-based claims about the ‘right’ or ‘wrong’ way to do something, particularly with something as scientifically bereft as design thinking. Like others, I’ve called out much of what is discussed as design thinking for what I see as simple bullshit.

To my (pleasant) surprise, this article was based on data, not just opinion, which already puts it in a different class than most other articles on design thinking, but that doesn’t earn it a free pass. In some fairness to the authors, the title may not be theirs (it could be an editor’s choice), but what comes afterward still bears some discussion, less about what they say than about how they say it and what they don’t say. This post reflects some thoughts on this work.

How we talk about what we do shapes what we know and the questions we ask, and design thinking is at a stage where we need to be asking bigger and better questions of it.

Right and Wrong

The most glaring critique I have of the article is the aforementioned title, for several reasons. Firstly, the term ‘right’ assumes that we know, above all else, how to do something. We could claim this if we had a body of work that systematically evaluated the outcomes associated with leadership and design thinking, or research examining the process of doing design thinking. The issue is: we don’t.

There isn’t a definition of design thinking that can be held up for scrutiny to test or evaluate, so how can we claim to know the ‘right’ way to do it? The authors link to a 2008 HBR article by Tim Brown as their reference source for design thinking; however, that article provides scant concrete direction for measurement or evaluation. Rather, it emphasizes thinking and personality approaches to addressing design problems and a three-factor process model of how it is done in practice. These might be useful as tools, but they are not something from which you can derive indicators (quantitative or qualitative) to inform a comparison.

The other citation is a 2015 HBR article from Jon Kolko. Kolko is one of design’s most prolific scholars and one of the few who actively and critically writes about the thinking, doing, craft, teaching, and impact of design on the people, places, and systems around us. While his HBR article is useful in painting the complexity that besets the challenge of designers doing ‘design thinking’, it provides little to go on in developing the kind of comparative metrics that can inform a claim that something is ‘right’ or ‘wrong’. It’s not fit for that purpose (and I suspect was never designed for it in the first place).

Both of these reference sources are useful for those looking to understand a little about what design thinking might be and how it could be used, and few are more qualified to speak on such things than Tim Brown and Jon Kolko. But if we are to start taking design thinking seriously, we need to go beyond describing what it is and show what it does (and doesn’t do) and under what conditions. This is what serves as the foundation for a real science of practice.

The authors do provide a description of design thinking later in the article and anchor that description in the language of empathy, something that has its own problems.

Designers seek a deep understanding of users’ conditions, situations, and needs by endeavoring to see the world through their eyes and capture the essence of their experiences. The focus is on achieving connection, even intimacy, with users.

False Empathy?

Connecting to ideas and people

It’s fair to say that Apple and the Ford Motor Company have created a lot of products that people love (and hate) and rely on every day. They also weren’t always what people asked for. Many of those products were not designed for where people were, but they did shape where people went afterward. Empathizing with their market might not have produced breakthroughs like the iPod or the automobile.

Empathy is a poor end in itself, and the language used in this article treats it as such. Seeing the world through others’ eyes helps you gain perspective, maybe intimacy, but that’s all it does. Unless you are willing to bring this into a systems perspective and recognize that many of our experiences are shared, collective, connected, and also disconnected, you only get one small part of the story. There is a risk that we over-emphasize the role that empathy plays in design. We can still achieve remarkable outcomes that create enormous benefit without being empathic, although I think most people would agree that’s not the way we would prefer it. We risk confusing the means and the ends.

One of the examples of how empathy is used in design thinking leadership takes place at a Danish hospital heart clinic, where the leaders asked: “What if the patient’s time were viewed as more important than the doctor’s?” Asking this question upended the way that many health professionals saw the patient journey and led to improvements, including a reduction in overnight stays. My question is: what did this produce?

What did this mean for the healthcare system as a whole? How about the professionals themselves? Are patients healthier because of the more efficient service they received? Who is deriving the benefits of this decision and who is bearing the risk and cost? What do we get from being empathic?

Failure Failings

Failure is among the most problematic of the words used in this article. Like empathy, failure is a commonly used term within popular writing on innovation and design thinking. The critique of this term in the article is less about how the authors use it explicitly than that it is used at all. This may be as much a matter of the data itself (i.e., if your participants speak of it, it is included in the dataset); however, its profile in the article is what is worth noting.

The issue is a framing problem. As the authors report from their research: “Design-thinking approaches call on employees to repeatedly experience failure”. Failure is a binary concept, which is not useful when dealing with complexity — something that Jon Kolko writes about in his article. If much of what we deal with in designing for human systems is about complexity, why are we anchoring our discussion to binary concepts such as ‘success’ and ‘failure’?

Failure exists only when we know what success looks like. If we are really being innovative, reframing the situation, getting to know our users (and discarding our preconceptions about them), how is it that we can fail? I have argued that the only thing we can steadfastly fail at in these conditions is learning. We can fail to build in mechanisms for data gathering, sensemaking, sharing, and reflecting that are associated with learning, but otherwise what we learn is valuable.

Reframing Our Models

The very fact that this article is in the Harvard Business Review suggests much about the intended audiences for this piece. I am sympathetic to the authors, and my critique has focused on the details within the expression of the work, not necessarily the intent or capacity of those who created it. However, choices have consequences attached, and the outcome of this article is a framing of design thinking as a means of generating business improvements. Those are worthy goals, but not the only ones possible.

One of the reasons concepts like ‘failure’ apply to so much of the business literature is that the outcomes are framed in binary or simple terms. It is about improvement, efficiency, profit, and productivity. Business outcomes might also include customer satisfaction, purchase actions, or brand recognition. All of these benefit the company, not necessarily the customer, client, patient, person, or citizen.

If we were truly tackling human-centred problems, we might approach them differently and ask different questions. Terms like failure actually do apply within the business context, not because they support innovation per se, but because the outcomes are pre-set.

Leadership Roles

Bason and Austin’s research is not without merit, for many reasons. Firstly, it is evidence-based. They have done the work of interviewing, synthesizing, commenting on, and publishing the research. That in itself makes it a worthy contribution to the field.

It also provides commentary and insight on some practical areas of design leadership that readers can apply right away by highlighting roles for leaders.

One of these roles is in managing the tension between divergent and convergent thought and development processes in design work. This includes managing the insecurities that many design teams may express in dealing with the design process and the volume of disorganized content it can generate.

The exemplary leaders we observed ensured that their design-thinking project teams made the space and time for diverse new ideas to emerge and also maintained an overall sense of direction and purpose. 

Bason & Austin, HBR 2019

Another key role of the design leader is to support future thinking. By encouraging design teams to explore and test their work in the context of what could be, not just what is, leaders reframe the goals of the work and the outcomes in ways that support creativity.

Lastly, a key strength of the piece was the encouragement of multi-media forms of engagement and feedback. The authors chose to illustrate how leaders supported their teams in thinking differently about not only the design process but the products for communicating that process (and resulting products) to each other and the outside world. Too often the work of design is lost in translation because the means of communication have not been designed for the outcomes that are needed — something akin to design-driven evaluation.

Language, Learning, Outcomes

By improving how we talk about what we do, we are better at framing how to ask questions about what we do and what impact it has. Doing the right thing means knowing what the wrong thing is. Without evaluation, we run the risk in design of doing what Russell Ackoff cautioned against: doing the wrong things righter.

Reading between the lines of the data — the stories and examples — presented in the article by Bason and Austin reveals the role of managing fear — fear of ‘failure’, fear from confusion, fear of not doing good work. Design, if it is anything, is optimistic in that it is about making an effort to solve problems, taking action, and generating something that makes a difference. Design leadership is about supporting that work, bringing it into our organizations, and making it accessible.

That is an outcome worth striving for. While there are missed opportunities here, there is also much to build on and lead from.

Lead Photo by Quino Al on Unsplash

Inset Photo by R Mo on Unsplash

design thinking, evaluation

Design-driven Evaluation

Fun Translates to Impact

A greater push to include evaluation data in decision-making and innovation does not generate value if the evaluations have little usefulness in the first place. A design-driven approach to evaluation is the means to transform utilization into both present and future utility.

I admit to being puzzled the first time I heard the term utilization-focused evaluation. What good is an evaluation if it isn’t utilized, I thought? Why do an evaluation in the first place if not to have it inform some decisions, even if just to assess how past decisions turned out? Experience has taught me that this happens more often than I ever imagined and that evaluation can be simply an exercise in ‘faux’ accountability: a checking off of a box to say that something was done.

This is why utilization-focused evaluation (U-FE) is another invaluable contribution to the field of practice by Michael Quinn Patton.

U-FE is an approach to evaluation, not a method. Its central focus is engaging the intended users in the development of the evaluation and ensuring that users are involved in decision-making about the evaluation as it moves forward. It is based on the idea (and research) that an evaluation is far more likely to be used if grounded in the expressed desires of the users and if those users are involved in the evaluation process throughout.

This approach generates a participatory activity chain that can be adapted for different purposes, as we’ve seen in evaluation approaches and methods such as developmental evaluation, contribution analysis, and principles-focused evaluation.

Beyond Utilization

Design is the craft, production, and thinking associated with creating products, services, systems, or policies that have a purpose. In service of this purpose, designers will explore multiple issues associated with the ‘user’ and the ‘use’ of something — what are the needs, wants, and uses of similar products. Good designers go beyond simply asking about these things; they measure, observe, and conduct design research ahead of the actual creation of something rather than taking things at face value. They also attempt to see beyond what is right in front of them to possible uses, strategies, and futures.

Design work is both an approach to a problem (a thinking & perceptual difference) and a set of techniques, tools, and strategies.

Utilization can run into problems when we take the present as an example of the future. Steve Jobs didn’t ask users for ‘1000 songs in their pockets‘, nor was Henry Ford told he needed to invent the automobile rather than give people faster horses (even if the oft-quoted line about this was a lie). The impact of their work came from being able to see possibilities and orchestrate what was needed to make those possibilities real.

Utilization of evaluation is about making what is fit better for use by taking into consideration the user’s perspective. A design-driven evaluation looks beyond this to what could be. It also considers how what we create today shapes what decisions and norms come tomorrow.

Designing for Humans

Among the false statements attributed to Henry Ford about people wanting faster horses is a more universal false statement said by innovators and students alike: “I love learning.” Many humans love the idea of learning or the promise of learning, but I would argue that very few love learning with the sense of absoluteness that the phrase above conveys. Much of our learning comes from painful, frustrating, prolonged experiences and is sometimes boring, covert, and confusing. It might be delayed in how it manifests itself, with its true effects not felt until long after the ‘lesson’ is taught. Learning is, however, useful.

A design-driven approach seeks to work with human qualities to design for them. For example, a utilization-focused evaluation approach might yield a process that involves regular gatherings to discuss an evaluation or reports that use a particular language, style, and layout to convey the findings. These are what the users, in this case, are asking for and what they see as making evaluation findings appealing, and thus they have been built into the process.

Except, what if the regular gatherings don’t involve the right people, are difficult to set up and thus ignored, or when those people show up they are distracted with other things to do (because this process adds another layer of activity into a schedule that is already full)? What if the reports that are generated are beautiful, but then sit on a shelf because the organization doesn’t have a track record of actually drawing on reports to inform decisions despite wanting such a beautiful report? (We see this with so many organizations that claim to be ‘evidence-based’ yet use evidence haphazardly, arbitrarily, or don’t actually have the time to review the evidence).

What we get are things that have been created with the best intentions for use, but that are not based on the actual behaviour of those involved. Asking about this and designing for it is not just an approach, it’s a way of doing an evaluation.

Building Design into Evaluation

There are a couple of approaches to introducing design for evaluation. The first is to develop certain design skills — such as design thinking and applied creativity. This work is being done as part of the Design Loft Experience workshop held at the annual American Evaluation Association conference. The second is more substantive: incorporating design methods into the evaluation process from the start.

Design thinking has become popular as a means of expressing aspects of design in ways that have been taken up by evaluators. Design thinking is often characterized by a playful approach to generating new ideas and then prototyping those ideas to find the best fit. Lego, play dough, markers, and sticky notes (as shown above) are some of the tools of the trade. Design thinking can be a powerful way to expand perspectives and generate something new.

Specific techniques, such as those taught at the AEA Design Loft, can provide valuable ways to re-imagine what an evaluation could look like and support design thinking. However, as I’ve written here, there is a lot of hype, over-selling, and general bullshit being spouted in this realm, so proceed with some caution. Evaluation can help design thinking just as much as design thinking can help evaluation.

What Design-Driven Evaluation Looks Like

A design-driven evaluation takes as its premise a few key things:

  • Holistic. Design-driven evaluation is a holistic approach to evaluation that extends thinking about utility to everything from the consultation process, engagement strategy, and instrumentation through to dissemination and discussions on use. Good design isn’t applied to only one part of the evaluation, but to the entire thing, from process to products to presentations.
  • Systems thinking. It also utilizes systems thinking in that it expands the conversation about evaluation use beyond the immediate stakeholders to consider other potential users and their positions within the program’s system of influence. Thus, a design-driven evaluation might ask: who else might use or benefit from this evaluation? How do they see the world? What would use mean to them?
  • Outcome and process oriented. Design-driven evaluations are directed toward an outcome (although that may be altered along the way if used in a developmental manner), but designers are agnostic about the route to that outcome. An evaluation must maintain integrity in its methods, but it must also be open to adaptation as needed to ensure that the design is optimal for use. Attending to the process of designing and implementing the evaluation is an important part of this kind of evaluation.
  • Aesthetics matter. This is not about making things pretty, but it is about making things attractive. This means creating evaluations that are not ignored. This isn’t about gimmicks, tricks, or misrepresenting data; it’s about considering what will draw and hold attention from the outset, in form and function. One of the best ways is to create a meaningful engagement strategy for participants from the outset and to involve people in the process in ways that fit with their preferences, availability, skill set, and desires rather than as tokens or simply as ‘role players.’ It’s about being creative in generating products that fit with what people actually use, not just what they want or think a good evaluation is. This might mean doing a short video or producing a series of blog posts rather than writing a report. Kylie Hutchinson has a great book on innovative reporting for evaluation that can expand your thinking about how to do this.
  • Inform Evaluation with Research. Research is not just meant to support the evaluation, but to guide the evaluation itself. Design research is about looking at what environments, markets, and contexts a product or service is entering. Design-driven evaluation means doing research on the evaluation itself, not just for the evaluation.
  • Future-focused. Design-driven evaluation draws data from social trends and drivers associated with the problem, situation, and organization involved in the evaluation to not only design an evaluation that can work today but one that anticipates use needs and situations to come. Most of what constitutes use for evaluation will happen in the future, not today. By designing the entire process with that in mind, the evaluation can be set up to be used in a future context. Methods of strategic foresight can support this aspect of design research and help strategically plan for how to manage possible challenges and opportunities ahead.

Principles

Design-driven evaluation also works well with principles-focused evaluation. Good design is often grounded in key principles that drive its work. One of the most salient of these is accessibility — making what we do accessible to those who can benefit from it. This extends to considering what it means to create things that are physically accessible to those with visual, hearing, or cognitive impairments (or, when doing things in physical spaces, making them available to those who have mobility issues).

Accessibility is also about making information understandable: avoiding unnecessary jargon (using the appropriate language for each audience), using plain language when possible, and accounting for literacy levels. It’s also about designing systems of use for inclusiveness. This means going beyond doing things like creating an executive summary for a busy CEO when that over-simplifies certain findings, to designing space within that leader’s schedule and work environment to make time to engage with the material in the manner that makes sense for them. This might be a different format of document, a podcast, a short interactive video, or even a walking-meeting presentation.

There are also many principles of graphic design and presentation that can be drawn on (and that will be expanded on in future posts). Principles for service design, presentations, and interactive use are all available and widely discussed. What a design-driven evaluation does is consider what these might be and build them into the process. While a design-driven evaluation is not necessarily a principles-focused one, it can be, and the two are very close.

This is the first in a series of posts on design-driven evaluation. It’s a starting point and far from the end. By taking into account how we create not only our programs but also their evaluation from the perspective of a designer, we can change the way we think about what utilization means for evaluation and think even more about its overall experience.

public health, strategic foresight

Futuring the Past

Flat Earth to Measles: Did We See That Coming?
In the first month of 2019 the United States saw more measles cases than it did in all of 2010. This disease of the past, once on its way to extinction (or at least deep hibernation), is now a current public health threat, which prompts us to think: how can our futuring better consider where we came from, not just where it might lead?

Measles was something that my parents worried about for me and my brothers more than forty years ago. Measles is one of those diseases that causes enormous problems, both obvious ones and ones that are difficult to see until they manifest themselves down the road. Encephalitis and diarrhea are two possible short-term effects, while a compromised immune system is one of the longer-term effects. It’s a horrible condition, one of the most infectious diseases we know of, and also one that was once considered ‘eliminated’ from the United States, Canada, and most of the Americas (which means existing in such small numbers as to not warrant large-scale monitoring).

In the first month of 2019, more measles cases were tracked than in all of 2010. The causes of this are many, but they are largely attributable to a change in vaccination rates among the public. The fewer people who get vaccinated, the more likely the disease will find a way to take hold in the population — first among those who aren’t protected, but over time including some of those who are, because of the ‘herd protection’ nature of how vaccination works.
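
For readers who want to see the arithmetic behind ‘herd protection’, here is a minimal sketch of my own (not drawn from any source cited here) using the standard herd-immunity threshold rule of thumb, 1 - 1/R0, and the commonly cited range of basic reproduction numbers for measles of roughly 12 to 18:

# A rough illustration of why small drops in measles vaccination coverage matter.
# Herd-immunity threshold is approximately 1 - 1/R0, a standard epidemiological rule of thumb.

def herd_immunity_threshold(r0: float) -> float:
    """Share of a population that must be immune to prevent sustained spread."""
    return 1 - 1 / r0

for r0 in (12, 15, 18):  # commonly cited range of R0 estimates for measles
    print(f"R0 = {r0}: about {herd_immunity_threshold(r0):.0%} of people need to be immune")

# Prints roughly 92%, 93%, and 94% -- so even a few percentage points of decline
# in coverage can reopen the door to outbreaks.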

Did We See That Coming?

Measles hasn’t featured prominently in any of the foresight models of the health system that I’ve seen over the course of my career. Then again, twenty-five years ago, it would have been unlikely that any foresight model of urban planning would have emphasized scooters or bicycles — old technologies — over the automobile as modes of transportation likely to shape our cities. Yet, here we are.

Today, those interested in the future of transportation are focused on autonomous cars, yet there is some speculation that the car — or at least the one we know now — will disappear altogether. Manufacturers like Ford — the company that invented the mass-market automobile — have already decided to abandon most of their passenger-car production in North America in the next few years.

The hottest TV show (or rather, streamed media production) among those under the age of 20? Friends (circa 1994).

Are you seeing a trend here?

What we are seeing is a resurgence of the past in pockets all throughout our society. The implications of this are many for those who develop or rely on futurist-oriented models to shape their work.

One might argue that a good model of the future always assumes this, and that it therefore isn’t a flaw of the model, but rather that, as William Gibson is often quoted as saying: the future is already here, it’s just not evenly distributed yet. The Three Horizons framework popularized by McKinsey has this assumption built into it from the beginning. But it’s not just the model that might be problematic; it’s the thinking behind it.

Self-Fulfilling Futures

Foresight is useful for a number of things, but I would argue very little of that benefit is what many futurists claim. The argument for investing in foresight is that, by thinking about what the future could bring, we can better prepare ourselves for that reality in our organizations. This might mean identifying different product lines, keeping an eye out for trends that match our predictions, improving our innovation systems, and improving “the impact of decision-making“.

Why is this the case? The answer — as I’ve been told by foresight and futurist colleagues — is that by seeing what is coming we can prepare for it, much like a weather forecast allows us to dress appropriately for the day to account for the possibility of rain or snow.

The critique I have of this line of thinking is this: do we ever go back and see where our models fit and where they didn’t? Are foresight models open to evaluation? I would argue: no. There is no systematic evaluation of foresight initiatives. This is not to suggest that evaluation needs to concern itself with whether a model gets everything right — that the future turns out just as we anticipated — but with whether it was actually useful.

Did we make a better decision because we saw a possible future? Did we restructure our organization to achieve something that would have been impossible had we not had strategic foresight to guide us? These are the claims, and yet we do not have evidence to support them. Such limited evaluation of these models has left us clinging to myths and without critical reflection on what use these models have (it is also a wasted opportunity to consider what use they could have).

Yes, Royal Dutch Shell’s ability to envision problems with the global oil supply chain in the late 1960s and early 70s through adopting a foresight approach gave it a step up on its competitors. But how many other cases of this nature are there? Where is the evidence that this approach does what its proponents claim it does? With foresight being adopted across industries, we should have many examples of its impact, but we do not.

Layering Influence and Impact

Let’s bring it back to public health. There is enormous evidence pointing to the role of tobacco use in a litany of health problems like cancer and cardiovascular disease, yet there are still millions who use tobacco daily. A lack of retirement savings is a clear pathway to significant problems for health, wellbeing, and lifestyle down the road. The effects of human behaviour on the environment and our health have been known for decades (or millennia, depending on your perspective), to the point where we now refer to this stage of planetary evolution as the Anthropocene (the age in which humans influence the planet).

We can see things coming in various degrees of focus, and yet the influence on our behaviour is not certain. Indeed, the anticipation of future consequences is only one element in a large array of factors that influence our behaviour. Psychologists, the group that studies and supplies the evidence on behaviour change, have shown that we are actually pretty bad at predicting what will happen, how we will react to something, and what will influence change.

Many of these factors are systemic — that is, tied to the systems we are a part of: our team, family, organization, community, and society, as well as time — the various spheres outlined in Bronfenbrenner’s social-ecological model. This model outlines the various ‘rings’ or spheres that influence us, including time (which encompasses them all). It’s this last ring that we often forget. The model can be useful because it showcases layers of impact and influence, including those from our past.

Decision Making in the Past

By anchoring ourselves to the future and not considering our past, we limit our models for prediction, forecasting, and foresight. We are equally limited when we use the same form of thinking (about the future) to make our models about the past. In this case, I think of Andrew Yang, who recently spoke on the Freakonomics podcast and pointed out how our economic thinking is rooted in past models that we would never accept today. He’s wrong — sort of. We do accept this, and it is alive and well in many of our ways of thinking about the future.

In speaking about how we’ve been through economic patterns of disruption before, he points out that we are using an old fact pattern to inform what we do now, as if the economy — which we invented just a few hundred years ago — had immutable laws.


The fantasists — and they are so lazy and it makes me so angry, because people who are otherwise educated literally wave their hands and are like, “Industrial Revolution, 120 years ago. Been through it before,” and, man, if someone came into your office and pitched you an investment in a company based on a fact pattern from 120 years ago, you’d freakin’ throw them out of your office so fast.

Andrew Yang, speaking on Freakonomics

Foresight would benefit from the same kind of critical examination of itself that Yang applies to the economy and our ways of thinking about it. That critical examination includes using real evidence to make decisions where we have it, and where we don’t have it, establishing it.

Maybe then we might anticipate that measles is not gone. Let’s keep an eye on polio, too. And as for a flat earth? Don’t sail too far into the sunset, as you might fall off if we don’t factor that into our models of the future.

Image Credit: “Flat Earth | Conspiracy Theory VOL.1” by Daniel Beintner is licensed under CC BY-NC-ND 4.0. To view a copy of this license, visit: https://creativecommons.org/licenses/by-nc-nd/4.0

psychology

The Developmental Psychology of Organizations

Organizations start change somewhere

Every living thing has a journey that starts somewhere and ends eventually. Our ability to see this, understand it, and apply what we know about how humans grow and develop (as individuals and organizations) is what helps us determine how this journey unfolds and where it ends up.

The psychology of individuals is a complicated affair that involves understanding a variety of matters, from personal and family history, genetics, and cultural context to education and social situation. While all of these contribute to who we are as people, the degree of influence and the mix differ from person to person. It means that we are all products of a collection of forces that combine in various ways, which makes understanding how we change a challenge because of this holistic complexity.

For example, some of us might have behaviours and preferences associated with a certain personality type (extroverted or introverted) and find that quality to be relatively stable across the lifespan. While there are times we might exhibit qualities of another type, those are more situational than stable. For those who are more of an ambivert, identification with a particular preference might be more challenging. Whatever investment you place in this kind of personality assessment, what is important is that the stability and consistency of certain characteristics are what largely shape our identity to others (and ourselves). It’s what makes us ‘us’.

From Individuals to Organizations

It has been argued that organizations exhibit much the same kind of characteristic habits of their own, while also aggregating, to various degrees, the characteristics of those within them and those who lead them. Personality theory has been applied to organizational behaviour as a means of understanding how certain actions, activities, habits, and patterns form within organizations, and what their implications are. This involves taking ideas developed for individuals and applying them to groups, and the implications of this are considerable.

If we take seriously the idea that organizations are similar to humans, it can have significant implications for the way in which we engage in organizational change efforts. Much of the research on organizational change is tied to the development and implementation of a strategy. Strategy, in most conventional applications, is an expression of intent manifest through specific choices of focus and action. This approach rests largely on a cognitive rational model of change (pdf), where information (e.g., data, ‘facts’, perceptions, beliefs, and opinion) guides an assessment of the situation that forms the basis for a plan of action. The idea is that we see and learn things and then plan and act according to that knowledge.

Most individual behaviour change models are founded on this approach, which has thinking preceding action in a relatively rational, logical manner based on an objective assessment of the facts and evidence (with some emotional contributions here and there to make life interesting). So if we tie organizational change to the same kinds of mechanisms and models that we use to understand individuals, should we not apply similar modes of change facilitation? We do — but it’s how we do it that might be the problem.

Change Theory to Change Reality

One of the most vexing (and little discussed) issues for behavioural scientists is that the application of the cognitive rational model to personal, organizational, and social change has a rather unimpressive track record. A look at how people change finds that relatively little change comes from rationally reviewing a threat or opportunity and planning out a strategy (never mind executing the planned strategy as envisioned). Even when the effects are modest, factors such as the match between the person, the technique or intervention approach, and the problem being addressed continue to mediate the outcomes.

What happens when our theories and our practices don’t really work? Or at least don’t work as well as we think they do?

The answer — using the very argument that we are looking to disprove — is that we will address the matter as many individuals might: with disagreement, resistance, and denial.

The field of organizational decision-making and innovation is littered with case studies that show how, in the face of overwhelming evidence to the contrary, organizations (like many individuals) resist change. Whether it is the speed at which those on the Titanic accepted the fact that their ship would sink after hitting the iceberg (never mind the perception that the ship was invulnerable to begin with) or companies that persist with a strategy that doesn’t match changing times (e.g., Kodak and its photographic film business, Sears and its retail model), the inability to see, or the unwillingness to perceive or accept, changing situations has led to major problems.

These problems are a matter of failing to change or adapt. To quote from The Leopard:

If we want things to stay as they are, things will have to change

Change is something we need to do even if that is simply to maintain the status quo.

Person-Centred Organizational Change

Erik Erikson, the German-American psychoanalyst whose work focused on identity formation and development, was among the few to challenge the belief that people’s essential character was immutable and resistant to change. (The dominant view was that thinking and behaviour could change, but not ‘how one was’ as a person.) He did, however, acknowledge that changing who we are is not easy and takes a lifetime. This flies in the face of the dominant thinking in Western societies that we can make dramatic changes in an instant.

While talk shows and popular self-help books are filled with stories of dramatic transformation and inspiration about how you can change everything in an instant, the truth is that these cases are outliers (and often exaggerations) or misrepresentations. Much like the artist who ‘breaks out’ and becomes an ‘overnight sensation’, the journey to stardom is usually a long one that follows a Pareto distribution (that is, a long, slow climb over time followed by a very quick punctuation at the end). What is misread into these success stories is that the rapid change is the product of a long, protracted build-up.

While there are some things that do follow this pattern, much change is also linear and progressive. We see this in the work of another Ericsson: Anders Ericsson. His work is widely cited (and mis-cited) as being behind the ‘10,000 Hour Rule’, which suggests that expertise — a change from an unskilled novice to a skilled expert — is developed over that much time of practice. While the time itself is important, what is often missed in citations of this work is that the key is deliberate practice (pdf), which makes all the difference.

If we extrapolate from the work of both Erikson and Ericsson, we might develop a model of behaviour change that looks quite different from what we have at present. Instead of trying 5-year plans, strategic goals, and inspirational visions of the future, we might be better off delving into an organization’s past, its formation, and its core beliefs and personality, and spending more time looking at what it is already doing than at what it seeks to do.

Developmental Organizations

We might then find what it deliberates on day in and day out and emphasize ways to amplify the feedback that helps people learn deliberately and consistently. We might take these lessons — much like those small, tiny adjustments that expert violinists, athletes, and surgeons make to hone their craft — and make them visible and build on them. We would look upon organizations as developing organizations, using approaches that fit with them developmentally (e.g., developmental evaluation). We would treat organizations like we would treat people.

Which is kind of funny because organizations are made of people. That’s some change.

Photo by Stanislav Kondratiev on Unsplash

evaluation, social systems

Baby, It’s Cold Outside (and Other Evaluation Lessons)

Competing desires or imposing demands?

The recent decision by many radio stations to remove the song “Baby, It’s Cold Outside” from their rotation this holiday season provides lessons on culture, time, perspective, and ethics beyond the musical score for those interested in evaluation. The implications of these lessons extend far beyond any wintery musical playlist. 

As the holiday season approaches, the airwaves, content streams, and in-store music playlists get filled with their annual turn toward songs of Christmas, the New Year, Hanukkah, and the romance of cozy nights inside and snowfall. One of those songs has recently been given the ‘bah humbug’ treatment and voluntarily removed from playlists, initiating a fresh round of debates (which have been around for years) about the song and its place within popular culture. The song, “Baby, It’s Cold Outside”, was written in 1944 and has been performed and recorded by dozens of duets ever since.

It’s not hard for anyone sensitive to gender relations to find some problematic issues with the song and the defense of it on the surface, but it’s once we get beneath that surface that the arguments become more interesting and complicated. 

One Song, Many Meanings

One of these arguments has come from jazz vocalist Sophie Millman, whose take on the song on the CBC morning radio show Metro Morning was that the lyrics are actually about competing desires within the times, not about predatory advances.

Others, like feminist author Cammila Collar, have gone so far as to describe the opposition to the song as ‘slut shaming‘.

Despite those points (and acknowledging some of them), others suggest that the manipulative nature of the dialogue attributed to the male singer is a problem no matter what year the song was written. For some, the idea that this was just harmless banter overlooks the enormous power imbalance between genders, then and now, under which men could impose demands on women with fewer repercussions.

Lacking a certain DeLorean to go back in time to fully understand the intent and context of the song when it was written and released, I came to appreciate that this is a great example of some of the many challenges that evaluators encounter in their work. Is “Baby, It’s Cold Outside” good or bad for us? As with many situations evaluators encounter: it depends (and depends on what questions we ask).

Take (and Use) the Fork

Yogi Berra famously suggested (or didn’t) that “when you come across a fork in the road, take it.” For evaluators, we often have to take the fork in our work and the case of this song provides us with a means to consider why.

A close read of the lyrics and a cursory knowledge of the social context of the 1940s suggests that the arguments put forth by Sophie Millman and Cammila Collar have some merit and at least warrant plausible consideration. This might just be a period piece highlighting playful, slightly romantic banter between a man and woman on a cold winter night. 

At the same time, what we can say with much more certainty is that the song agitates many people now. Lydia Liza and Josiah Lemanski revised the lyrics to create a modern, consensual take on the song, which has a feel far more in keeping with the times. This doesn’t negate the original intent and interpretation of the lyrics; rather, it places the song in the current context (not a historical one), and that is important from an evaluative standpoint.

If the intent of the song is to delight and entertain, then what once worked well might not work now. In evaluation terms, we might say that while the original merit of the song may hold based on historical context, its worth has changed considerably within the current context.

We may, as Berra might have said, have to take the fork and accept two very different understandings within the same context. We can do this by asking some specific questions. 

Understanding Contexts

Evaluators typically ask of programs (at least) three questions: What is going on? What’s new? and What does it mean? In the case of Baby, It’s Cold Outside, we can see that the context has shifted over the years, meaning that no matter how benign the original intent, the potential for misinterpretation or re-visioning of the intent in light of current times is worth considering.

What is going on is that we are seeing a lot of discussion about the subject matter of a song and what it means in our modern society. This issue is an attractor for a bigger discussion of historical treatment, inequalities, and the language and lived experience of gender.

The fact that the song is still being re-recorded and re-imagined by artists illustrates the tension between a historical version and a modern interpretation. It hasn’t disappeared and it may be more known now than ever given the press it receives.

What’s new is that society is far more aware of the scope and implications of gender-based discrimination, violence, and misogyny in our world than before. It’s hard to look at many historical works of art or expression without referencing the current situation in the world. 

When we ask what it means, that’s a different story. The myriad versions of the song are out there on records, CDs, and through a variety of streaming sources. While it might not be included in a few major outlets, it is still available. It is also possible to be a feminist who challenges gender-based violence and discrimination and to either love or leave the song.

The two perspectives may not be aligned explicitly, but they can be aligned with a larger, higher-level purpose of seeking empowerment and respect for women. It is within this context of tension that we can best understand where works like this live.

This is the tension in which many evaluations live when dealing with human services and systems. There are many contexts, and we can see competing visions and accept them both, yet still work to create a greater understanding of a program, service, or product. Like technology, evaluations aren’t good or bad, but neither are they neutral.

Image credit MGM/YouTube via CBC.ca

Note: The writing of this article happened to coincide with the anniversary of the horrific murder of 14 women at L’Ecole Polytechnique de Montreal. It shows that, no matter how we interpret works of art, we all need to be concerned with misogyny and gender-based violence. It’s not going away.

education & learning, evaluation

Learning: The Innovators’ Guaranteed Outcome

Innovation involves bringing something new into the world and that often means a lot of uncertainty with respect to outcomes. Learning is the one outcome that any innovation initiative can promise if the right conditions are put into place. 

Innovation — the act of doing something new to produce value — in human systems is fraught with complications from the standpoint of evaluation, given that the outcomes are not always certain, the processes aren’t standardized (or even set), and the relationship between the two is often in an ongoing state of flux. And yet, evaluation is of enormous importance to innovators looking to maximize benefit, minimize harm, and seek solutions that can potentially scale beyond their local implementation.

Non-profits and social innovators are particularly vexed by evaluation because there is an often unfair expectation that their products, services, and programs make a substantial change to social issues such as poverty, hunger, employment, chronic disease, and the environment (to name a few). These are issues that are large and complex, that no single actor has complete ownership of or control over, and that nonetheless require some form of action, individually and collectively.

What is an organization to do or expect? What can they promise to funders, partners, and their stakeholders? Apart from what might be behavioural or organizational outcomes, the one outcome that an innovator can guarantee — if they manage themselves right — is learning.

Learning as an Outcome

For learning to take place, there need to be a few things included in any innovation plan. The first is that there needs to be some form of data capture of the activities that are undertaken in the design of the innovation. This is often the first hurdle that many organizations face because designers are notoriously bad at showing their work. Innovators (designers) need to capture what they do and what they produce along the way. This might include false starts, stops, ‘failures’, and half-successes, which are all part of the innovation process. Documenting what happens between idea and creation is critical.

Secondly, there needs to be some mechanism to attribute activities and actions to indicators of progress. Change can only be detected in relation to something else, so in the process of innovation we need to be able to compare events, processes, activities, and products at different stages. The selection of some of these indicators might be arbitrary at first, but as time moves along it becomes easier to know whether things like a stop or a start are really just ‘pauses’ or whether they are pivots or changes in direction.
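
As a purely illustrative sketch of what this kind of data capture could look like (the structure and field names here are hypothetical, not something proposed in the post), a lightweight activity log kept over the life of a project is one way to record what happens between idea and creation so that comparisons across stages become possible:

from dataclasses import dataclass
from datetime import date

@dataclass
class ActivityRecord:
    """One entry in an innovation team's learning log (illustrative only)."""
    when: date
    activity: str    # what was done, e.g. "prototype test with 5 users"
    status: str      # e.g. "start", "stop", "pause", "pivot"
    indicator: str   # the progress indicator this activity speaks to
    lesson: str      # what was learned, even from a false start or 'failure'

log = [
    ActivityRecord(date(2019, 3, 1), "sketched service concept", "start",
                   "concept clarity", "users confused 'booking' with 'billing'"),
    ActivityRecord(date(2019, 4, 12), "shelved concept A", "pivot",
                   "concept clarity", "the constraint was policy, not the interface"),
]

# Comparing entries that share an indicator over time is what lets a team judge
# whether a stop was really a pause or a genuine change in direction.
for entry in sorted(log, key=lambda e: e.when):
    print(entry.when, entry.status, "-", entry.lesson)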

Learning as organization

Andrew Taylor and Ben Liadsky from Taylor Newberry Consulting recently wrote a great piece on the American Evaluation Association’s AEA 365 blog outlining a simple approach to asking questions about learning outcomes. Writing about their experience working with non-profits and grantmakers, they comment on how evaluation and learning require creating a culture that supports the two in tandem:

Given that organizational culture is the soil into which evaluators hope to plant seeds, it may be important for us to develop a deeper understanding of how learning culture works and what can be done to cultivate it.

What Andrew and Ben speak of is the need to create the environment in which learning can occur from the start. Some of that is stirred by asking critical questions, as they point out in their article. These include identifying whether there are goals for learning in the organization and what kind of time and resources are invested in regularly gathering people together to talk about the work that is done. This is the third big part of evaluating for learning: creating the culture for it to thrive.

Creating Consciousness

It’s often said that learning is as natural as breathing, but if that were true, much more would be gained from innovation than there is. Just like breathing, learning can take place passively, and it can be manipulated or controlled. In both cases, there is a need to create a consciousness around what ‘lessons’ abound.

Evaluation serves to make the unconscious conscious. By paying attention to — being mindful of — what is taking place and linking that to innovation at the level of the organization (not just the individual), evaluation can be a powerful tool to aid the process of taking new ideas forward. While we cannot always guarantee that a new idea will transform a problem into a solution, we can ensure that we learn in our effort to make change happen.

The benefit of learning is that it can scale. Many innovations can’t, but learning is something that can readily be added to and built on, and it transforms the learner. In many ways, learning is the ultimate outcome. So next time you look to undertake an innovation, make sure to evaluate it and build in the kinds of questions that help ensure that, no matter what the risks are, you can assure yourself a positive outcome.

Image Credit: Rachel on Unsplash

education & learning, evaluation

The Quality Conundrum in Evaluation


One of the central pillars of evaluation is assessing the quality of something, often described as its merit. Along with worth (value) and significance (importance), assessing the merit of a program, product, or service is one of the principal areas on which evaluators focus their energy.

However, if you think that would be something that’s relatively simple to do, you would be wrong.

This was brought home clearly in a discussion I took part in during a session on quality and evaluation at the recent conference of the American Evaluation Association entitled: Who decides if it’s good? How? Balancing rigor, relevance, and power when measuring program quality. The conversation session was hosted by Madeline Brandt and Kim Leonard from the Oregon Community Foundation, who presented on some of their work in evaluating quality within the school system in that state.

As they described the context of their work in schools, I was struck by some of the situational variables that came into play, such as high staff turnover (and a resulting shortage among the staff who remain) and the decision to operate some schools on a four-day week instead of five as a means of addressing shortfalls in funding. I’ve since learned that Oregon is not alone in adopting the 4-day school week; many states have begun experimenting with it to curb costs. The argument is, presumably, that schools can and must do more with less time.

This means that students are receiving up to one-fifth less classroom time each week, yet are expected to perform at the same level as those with five days. What does that mean for quality? Like much of evaluation work, it all depends on the context.

Quality in context

The United States has a long history of standardized testing, which was instituted partly as a means of ensuring quality in education. The thinking was that, with such diversity in schools, school types, and populations, there needed to be some means to compare capabilities and achievement across these contexts. A standardized test was presumed to serve as a means of assessing these attributes by creating a benchmark (standard) against which student performance could be measured and compared.

While there is a certain logic to this, standardized testing has a series of flaws embedded in its core assumptions about how education works. For starters, it assumes a standard curriculum and model of instruction that is largely one-size-fits-all. Anyone who has been in a classroom knows this is simply not realistic or appropriate. Teachers may teach the same material, but the manner in which it is introduced and engaged with is meant to reflect the state of the classroom — its students, physical space, availability of materials, and place within the curriculum (among others).

If we put aside for a minute the ridiculous assumption that all students are alike in their ability and preparedness to learn each day and just focus on the classroom itself, we can already see the problem with evaluating quality by looking back at the 4-day school week. Four-day weeks mean either that teachers are taking short-cuts in how they introduce subjects and are not teaching all of the material they have, or that they are teaching the same material in a compressed amount of time, giving students less opportunity to ask questions and engage with the content. This means the intervention (i.e., classroom instruction) is not consistent across settings, so how could one expect things like standardized tests to reflect a common attribute? What quality education means in this context is different than in others.

And that’s just the variable of time. Consider the teachers themselves. If we have high staff turnover, it is likely an indicator that there are some fundamental problems with the job. It may be low pay, poor working conditions, unreasonable demands, insufficient support or recognition, or little opportunity for advancement to name a few. How motivated, supported, or prepared do you think these teachers are?

With all due respect to those teachers, they may be incompetent to facilitate high-quality education in this kind of classroom environment. By incompetent, I mean not being prepared to manage compressed schedules, lack of classroom resources, demands from standardized tests (and parents), high student-teacher ratios, individual student learning needs, plus fitting in the other social activities that teachers participate in around school such as clubs, sports, and the arts. Probably no teachers have the competency for that. Those teachers — at least the ones that don’t quit their job — do what they can with what they have.

Context in Quality

This situation then demands new thinking about what quality means in the context of teaching. Is a high-quality teaching performance one where teachers are better able to adapt, respond to the changes, and manage to simply get through the material without losing their students? It might be.

Exemplary teaching in the context of depleted or scarce resources (time, funding, materials, attention) might look far different than if it were conducted under conditions of plenty. The learning outcomes might be considerably different, too. So the link between the quality of teaching and learning outcomes is highly dependent on many contextual variables that, if we fail to account for them, will lead us to misattribute causes and effects.

What does this mean for quality? Is it an objective standard or a negotiated, relative one? Can it be both?

This is the conundrum that we face when evaluating something like the education system and its outcomes. Are we ‘lowering the bar’ for our students and society by recognizing outstanding effort in the face of unreasonable constraints, or showing that quality can exist in even the most challenging of conditions? With one definition we risk accepting something that under many conditions is unacceptable; with the other, we risk blaming people for outcomes they can’t possibly achieve.

From the perspective of standardized tests, the entire system is flawed to the point where the measurement is designed to capture outcomes that schools aren’t equipped to generate (even if one assumes that standardized tests measure the ‘right’ things in the ‘right’ way, which is another argument for another day).

Speaking truth to power

This year’s AEA conference theme was speaking truth to power, and this situation provides a strong illustration of that. While evaluators may not be able to resolve this conundrum, what they can do is illuminate the issue through their work. By drawing attention to the standards of quality, their application, and the conditions that are associated with their realization in practice, not just theory, evaluation can serve to point to areas where there are injustices, unreasonable demands, and room for improvement.

Rather than assert blame or unfairly label something as good or bad, evaluation, when done with an eye to speaking truth to power, can play a role in fostering quality and promoting the kind of outcomes we desire, not just the ones we get. In this way, perhaps the real measure of quality is the degree to which our evaluations do this. That is a standard that, as a profession, we can live up to and that our clients — students, teachers, parents, and society — deserve.

Image credit:  Lex Sirikiat