Innovating is about doing something new to produce value, and this introduces substantial challenges for strategy and evaluation. Without knowing what you can expect, it’s hard to know what you have…or where you’re going.
Innovation is one of those terms that gets used a lot without much attention to what it specifically means. That lack of specificity might be fine for casual conversation, but the risks (and benefits) of innovation are great and demand a level of detail that is taken seriously.
When you’re evaluating a new product or service, the risk of developing the wrong thing or leaving something out is substantial. False confidence in success could have detrimental effects on an expected return on investment. Lack of understanding of the effects of a product or service could lead to harm.
Evaluation and strategy in the context of innovation take on new meaning because the implications are so great.
The investment in innovation requires considerable tolerance for risk, which is why many countries provide incentives to support it. In the case of many community-serving organizations, innovation is often seen as imperative given the complexity of many of our social issues.
Although Research & Development (R & D) is commonly associated with innovation, often the ‘R’ part of this work is focused on the development of the product or service, not the evaluation. Evaluation focuses our attention on the product or service as it is used, even if that is in an early, restricted setting. Evaluation provides insight into the actual use characteristics, and can be useful in supporting program design, but only if there is investment in it.
While it is difficult enough to balance all of the demands and costs associated with innovating, investing in evaluation early on can yield substantial benefits later, as has been highlighted elsewhere. However, it is precisely because many of the benefits come later in the development cycle that it is so easy to put off investing in evaluation early.
Yet it’s in the future where the biggest payoff comes. The glamour and glory that come with being an innovator are often based on a simple set of outcomes (e.g., sales, profit, etc.). Those kinds of metrics are easy to celebrate, but they often disguise the real value of something.
Whether it’s in the form of social benefits, additional discoveries, or the role of your product or service as a catalyst for other change, a focus on the end product alone loses the story told along the way. By bringing evaluation into the process early, we have the opportunity to tell that story differently and better, making for a more interesting and beneficial read. That is an innovation story worth writing.
What distinguishes design-driven evaluation from other types of utilization-focused evaluation or innovation development is that it views the evaluative act as part of a service offering. It’s not a shift in method, but in mindset.
Evaluation is the innovator’s secret advantage. Any sustained attempt to innovate is driven by good data and systems to make sense of that data. Some systems are better than others, and sometimes the data collected is not particularly great, but look at any organization that consistently develops new products and services that are useful and attractive and you’ll see some commitment to evaluation.
It’s not a new way of evaluating things; it’s a new mindset for how we understand the utility of evaluation and its role in supporting sustained innovation and its culture within an organization. It does this by viewing evaluation as a product in its own right and as a service to the organization.
In both cases, the way we approach this kind of evaluation is the way we would approach designing a product and a service. It’s both.
Evaluation as a product
What does an evaluation of something produce? What is the product?
Design-driven evaluation can be both product and service, but it also extends our understanding of what the product is. The evaluation itself — the process of paying attention to what is happening in the development and use of a product or service, selecting what is most useful and meaningful, and collecting data on those activities and outcomes — has distinctive value on its own.
Viewed as a product, an evaluation can serve as a part of the innovation itself. Consider the tools that we use to generate many of our innovations, from Sharpie markers and Post-it Notes to whiteboards and wheely chairs, to Figma or Adobe Illustrator, to the MacBook Pro or HP Envy PC that we use to type on. The best tools are designed to serve the creative process. There are many markers, computers, software packages, and platforms, but the ones we choose are the ones that serve a purpose well for what we need and what we enjoy (and that includes factoring in constraints) — they are well-designed. Why should an evaluation — a tool in the service of innovation — be any different?
Just like the reams of sticky notes that we generate with ideas serve as a product of the process of designing something new (innovating), so can an evaluation serve this same function.
These products are not just functional; they are stable and often carry a positive emotional appeal (e.g., they look good, feel good, help you to feel good). Exceptional products do this while being sustainable, accessible, (usually) affordable, and culturally and environmentally sensitive to the environments in which they are deployed. The best products combine it all.
Evaluations can do this too. A design-driven evaluation generates something that is not only useful and used, but attractive. It invites conversation and use, and showcases what is done in the service of creating an innovation by design.
The principles of good product design — designing for use, attraction, interaction, and satisfaction — are applied to an evaluation using this approach. This means selecting methods and tools that fit this function and aesthetic (and don’t divorce the two). It means treating the evaluation design and what it generates (e.g., data) as a product.
Evaluation as a service
The other role of a design-driven evaluation is to treat it as a service and thus design it as such. Principles of good service design describe a service as:
User-centered, through understanding the user by doing qualitative research;
Co-creative, by involving all relevant stakeholders in the design process;
Sequencing, by partitioning a complex service into separate processes;
Evidencing, by visualizing service experiences and making them tangible;
Holistic, by considering touchpoints in a network of interactions and users.
If we consider these principles in the scope of an evaluation, what we’ll see is something very different from just a report or presentation. Designing evaluation as a service means making a more concerted effort to identify present and potential future uses of an evaluation, understanding the user at the outset, and designing for their needs, abilities, and preferences.
It also involves considering how evaluation can integrate into or complement existing service offerings. For innovators, it positions evaluation as a means of making innovation happen as part of the process and making that process better and more useful.
This goes beyond A/B testing or other forms of ‘testing’ innovations to positioning evaluation as a service to those who are innovating. In developmental evaluations, this means designing evaluation activities — from data collection through to the synthesis, sensemaking, application, and re-design efforts of a program — as a service to the innovation itself.
Designing a mindset
Design-driven evaluation requires the mindset of an innovator and designer with the discipline of an evaluator. It is a way of approaching evaluation differently that goes beyond simple use to true service. Nor is this an approach that is needed for every evaluation. But if you want to generate better use of evaluation results, contribute to better innovations and decision making, and generate real learning (not learning artifacts), then a mindset that gives evaluation the same care and attention that goes into all the other products and services we engage with matters a great deal.
If we want better, more useful evaluations (and their designs) we need to think and act like designers.
Developmental evaluation is a powerful tool to support innovation, engage communities, and foster deep learning. While it might be growing in popularity, increasingly in demand, and a key difference-maker for social and technological innovators, it might also not be for you.
Developmental evaluation (DE) is an approach to evaluation that is designed to support innovation and gather data to make sense of things in a complex environment. It is a powerful tool full of promise and many traps and has become increasingly popular in the social, finance, and health sectors. Maybe it’s for you. Maybe it’s not.
Chances are, it’s not.
If you are looking to force an outcome, DE is not for you.
DE might be for you if you are confused, nervous, a little excited, and curious about what it is that you’re doing, how you can make it more sustainable and useful, and interested in working with complexity, not fighting against it.
If you are not interested in learning — really, truly learning — skip the DE and try something else. DE is only good for those individuals and organizations that are serious about learning. This might mean struggling with uncertainty, honestly reflecting on past actions (including all the false-starts, non-starts, rough starts, and bad finishes), envisioning the future, and challenging what you believe (and sometimes affirming beliefs, too). A DE prompts you to do all of this, and if that’s not your thing, don’t get into DE.
If you know the end of the story with your innovation before you begin, DE is not for you either.
If the status quo is your thing, DE is not.
Therapists see this all the time. They encounter people who say: “I want to change” and then witness them fight, struggle, deny, and abandon efforts to do the work to make the change happen, because it’s far easier to ask for change than it is to do it. This is OK — this struggle is part of being human. But if you are unwilling to do the work, struggle with it, and truly learn from your efforts, DE is not for you.
If you have the best idea in the world and a plan to change the world with it, DE is probably not for you. DE might get you to re-think parts of your plan or the whole thing. It’s going to make your expected outcomes less expected and gum up the nice, simple, but wrong picture.
If practice makes perfect, DE is not for you. If practice is more of a vocation, like medicine or meditation — a way of doing the work — then that’s a different story. For a DE practitioner, it’s not about becoming great at something, an improved version of yourself or your organization, or the best in the world. It’s about learning, growing, and evolving (see above).
If you want something fast, efficient, outcomes-driven, and evidence-based from top to bottom, don’t even think about DE.
Want to be trendy? Do DE. It’s what the cool kids are doing in evaluation, and if being cool is important to you, definitely get into DE. (Unless you don’t like putting in a lot of work to become proficient in areas of complexity, social and organizational behaviour, many different aspects of evaluation, and even design).
Lazy? Uncommitted? Allergic to creativity? Undisciplined? Low energy? Have a low tolerance for ambiguity? Then DE is not for you.
If you’re looking for a direct plan, a clear pathway to improvement and betterment, and quantifiable outcomes, DE is not for you.
If innovation has a specific look, feel, ROI, and outcome then you need tools and strategies that will assess all of that – which means you should not engage in DE. DE will only disappoint you. You will be exposed to many things, including possibilities you’d never considered, but they very likely won’t fit your model because, if what you are doing is truly innovative, it’s never really been tried before.
If you are changing the game while playing it, the rules you started with won’t apply to what happens when you finish. You can’t start playing chess, wind up playing volleyball, and still seek to measure the movements of the Rook, Bishop, or Queen. If you’re not really into game-changing — the kind that’s not about hyperbole and catchphrases — DE is not for you.
If you are short on time, commitment, and resources to bring people together, take time to pause and truly reflect, sit with uncertainty, delight in surprise, exceed your expectations, and sometimes end up disappointed, DE isn’t for you.
If strategy is a plan that you stick to no matter what, then DE is not for you.
If you view relationships as transactions, rather than as opportunities to grow and transform, DE is most certainly not for you.
Innovation is about discovery. If you wish to work in ways that are aligned with natural development — the kind we see in our children, pets, gardens, communities, and ourselves – you might find yourself discovering a lot and DE can be a big help. If ‘discovery’ is a code-word for re-packaging what you already have or doing what you’ve always done (be honest with yourself), then DE is a big waste of time.
Can’t handle surprises? Run away from DE and use something else.
If you’re looking to just check off a box because you committed to doing that in your corporate plan, then make your life easy and give DE a pass. If you see organizations as living beings and wish to create value for others in a manner that is consistent with this perspective, then DE could be a powerful ally in that process.
DE is becoming popular, but it is most certainly not for everyone. Maybe not for you, either. Now you have lots of reasons to show why you should try something else.
If you still think DE is for you after all this, let’s connect — because DE seems to suit us at Cense just fine and we can help it to suit you, too.
Design-driven evaluation focuses attention on what organizations use in their decision-making and learning to support the creation of their products, the delivery of their services, and the overall quality of their work. While ultimately practical, this focus on utility also introduces some difficult conversations about what organizations truly value, not just what they say they value.
To say something is design-driven is to imply that its emphasis is on the process of creating something of value for someone. In this case, the value comes through evaluation and what it means. Value here is framed as utility, asking: what is it for? (And for whom, and in what context?)
These are the questions that designers ask of any service, product, or structure to which they apply their attention. Designers might ask: “What are you hiring [the thing being designed] to do?” These are simple questions that may provoke much discussion and can transform the way we approach the creation and maintenance of something moving forward.
A different take is to ask people to describe what they already do (and what they want) to frame the discussion of how to approach the design. This can lead us into a trap of the present moment. It keeps people framing their work in the context of language that supports their present identity and the conceptions (and misconceptions) associated with it, not necessarily where they want to go.
You show me an evidence-based human service organization and I’ll show you one that is lying to you (and maybe to themselves), is in deep denial, or is focused on a narrow, established scope of practice. Very few fit the last category (but they do exist), which leaves us with the unsettling reality that we are likely dealing in some level of bullshit — a deliberate misrepresentation of the facts to impress others and themselves (see Harry Frankfurt’s work on the subject).
This is not to say that these organizations don’t use evidence at all or care about its application, but that there are so many areas within that scope of work that are not based on solid or even superficial evidence that to describe something as ‘evidence-based’ is an over-reach at best, a lie at worst. The reasons for this deception are many, but among them is simply that there is not enough evidence available to inform many aspects of the work. It’s impossible to truly be evidence-based when dealing with areas of complexity, social innovation, or complex innovation.
Consider this: an organization seeks to develop an evidence-based program and spends weeks or months gathering and reviewing research. They may even collect some data on their own and synthesize the findings together to inform a recommendation based on evidence. At play is the evidence for the program, the evidence to support the design of the program (converting evidence developed in one context into actionable structures, procedures, and plans into another), the evidence to support the implementation of the designed program in a new context, and the evidence generated through evaluation of the design, delivery, and outcomes associated with the program.
That is a lot of evidence to consider. I’ve never seen a program come even close to having evidence that reasonably fits all of these contexts, let alone strong evidence. Why? Because there are so many variables at play in the program context (e.g., design, delivery, fidelity) and in the process of evidence generation itself (e.g., design, data availability, analysis).
Utility means looking at what people actually use, not just what they say they use. To illustrate: I worked with an organization that proudly claimed to be both evidence-based and a learning organization. When I asked what evidence they used and how they learned, I was told with much more modest confidence that staff typically read “one or two” research articles per month (and that was it — and this was in a highly volatile, transnational, multidisciplinary field of practice). They also said that they engaged in reflective practice by writing up case reports, which (if completed at all) usually took up to four months to prepare after a visit to a particular site or event, due to the other activities they had to do as part of the day-to-day work of the organization.
This organization did their best, but that best wasn’t anywhere near enough if they truly wished to be a learning organization or evidence-based. Yet, because they insisted they were these things, they also insisted on an evaluation design that fit that narrative. They had not designed their organization to be evidence-based or a real learning organization. A design-driven approach would have developed things that suited that context and perhaps pushed them a little further toward being the organization they saw themselves to be.
Another Way: Fit for Purpose
Why bring up evidence-based decision-making? The reason has much to do with defining what makes design-driven evaluation different from other forms of evaluation (and even research). Design-driven evaluation is about generating evidence for use within specific contexts. It involves using design principles and strategies to uncover and understand those contexts ahead of the evaluation being designed. It means designing not only the evaluation itself, but the manner in which it produces products and the means by which decisions are made based on those products.
Most published evidence is developed independent of the context in which it is to be used. That’s the traditional model for science: we learn things in one setting (maybe a lab), then move them out into other settings (e.g., a clinic), do more trials, and eventually develop a body of evidence that is used to generalize to other settings. This works reasonably well for problems and issues that are simple or complicated in their structure.
As a situation involves ever greater complexity, that ability to translate from one setting or context to another breaks down. This complexity might also influence what the purpose and expected outcomes of a program are within that context. For example, a community-based health promotion program may have a theory, even a program logic model, and goals, but it will need to consider neighbourhood design, differences in resident needs, local history, and the availability of other programs and resources. The purpose in one neighbourhood might be to provide a backstop to a local organization that is having financial problems, whereas in another neighbourhood it might be to provide a vehicle for local leaders to take action where there are no other alternatives.
Not Leaving Things to Chance
Developing a fit-for-purpose program is not something that should be left to chance, because chances are it won’t happen. Good design improves the use, usability, and overall translation of knowledge, and a look at how real evidence-based practice emerges shows that it comes down to the ways in which the design — intended or not — of the knowledge, the exchange opportunities, the relationships, and the systems come together.
Design-driven evaluation seeks to remedy one of the fundamental problems within the evidence translation process: the poor fit of the evaluation (data, process, focus) for the implementation of its findings. It’s about not leaving it to chance with the hope that maybe someone will figure out how to use things, overcome poor usability, persist through confusion, and still make good use of an evaluation.
Is this the system we want? Or could we do better? My answer is ‘no’ to the first and ‘yes’ to the second. Design-driven evaluations can be the means to get us to that ‘yes’ because, as things get more complicated and complex and the need for better data, improved decisions, and decisive action rises, we need to make sure we don’t leave doing better to chance.
A recent article titled ‘The Right Way to Lead Design Thinking’ gets a lot of things wrong not because of what it says, but because of the way it says it. If we are to see better outcomes from what we create we need to begin with talking about design and design thinking differently.
To my (pleasant) surprise, this article was based on data, not just opinion, which already puts it in a different class than most other articles on design thinking, but that doesn’t earn it a free pass. In fairness to the authors, the title may not be theirs (it could be an editor’s choice), but what comes afterward still bears some discussion, less about what they say than about how they say it and what they don’t say. This post reflects some thoughts on this work.
The most glaring critique I have of the article is the aforementioned title for many reasons. Firstly, the term ‘right’ assumes that we know above all how to do something. We could claim this if we had a body of work that systematically evaluated the outcomes associated with leadership and design thinking or research examining the process of doing design thinking. The issue is: we don’t.
There isn’t a definition of design thinking that can be held up for scrutiny to test or evaluate, so how can we claim the ‘right’ way to do it? The authors link to a 2008 HBR article by Tim Brown that outlines design thinking as their reference source; however, that article provides scant concrete direction for measurement or evaluation. Rather, it emphasizes thinking and personality approaches to addressing design problems and a three-factor process model of how it is done in practice. These might be useful as tools, but they are not something from which you can derive indicators (quantitative or qualitative) to inform a comparison.
The other citation is a 2015 HBR article from Jon Kolko. Kolko is one of design’s most prolific scholars and one of the few who actively and critically writes about the thinking, doing, craft, teaching, and impact of design on the people, places, and systems around us. While his HBR article is useful in painting the complexity that besets the challenge of designers doing ‘design thinking’, it provides little to go on in developing the kind of comparative metrics that could inform a statement that something is ‘right’ or ‘wrong’. It’s not fit for that purpose (and I suspect was never designed for it in the first place).
Both of these reference sources are useful for those looking to understand a little about what design thinking might be and how it could be used, and few are more qualified to speak on such things than Tim Brown and Jon Kolko. But if we are to start taking design thinking seriously, we need to go beyond describing what it is and show what it does (and doesn’t do) and under what conditions. This is what serves as the foundation for a real science of practice.
The authors do provide a description of design thinking later in the article and anchor that description in the language of empathy, something that has its own problems.
Designers seek a deep understanding of users’ conditions, situations, and needs by endeavoring to see the world through their eyes and capture the essence of their experiences. The focus is on achieving connection, even intimacy, with users.
It’s fair to say that Apple and the Ford Motor Company have created a lot of products that people love (and hate) and rely on every day. They also weren’t always what people asked for. Many of those products were not designed for where people were, but they did shape where they went afterward. Empathizing with their market might not have produced the kind of breakthroughs like the iPod or automobile.
Empathy is a poor end in itself, and the language used in this article treats it as an end. Seeing the world through others’ eyes helps you gain perspective, maybe intimacy, but that’s all it does. Unless you are willing to take this into a systems perspective and recognize that many of our experiences are shared, collective, connected, and also disconnected, you only get one small part of the story. There is a risk that we over-emphasize the role that empathy plays in design. We can still achieve remarkable outcomes that create enormous benefit without being empathic, although I think most people would agree that’s not the way we would prefer it. We risk confusing the means and the ends.
One of the examples of how empathy is used in design thinking leadership takes place at a Danish hospital heart clinic, where the leaders asked: “What if the patient’s time were viewed as more important than the doctor’s?” Asking this question upended the way that many health professionals saw the patient journey and led to improvements such as a reduction in overnight stays. My question is: what did this produce?
What did this mean for the healthcare system as a whole? How about the professionals themselves? Are patients healthier because of the more efficient service they received? Who is deriving the benefits of this decision and who is bearing the risk and cost? What do we get from being empathic?
Failure is among the most problematic of the words used in this article. Like empathy, failure is a commonly used term within popular writing on innovation and design thinking. The critique of this term in the article is less about how the authors use it explicitly than that it is used at all. This may be as much a matter of the data itself (i.e., if your participants speak of it, it is included in the dataset); however, its profile in the article is what is worth noting.
The issue is a framing problem. As the authors report from their research: “Design-thinking approaches call on employees to repeatedly experience failure”. Failure is a binary concept, which is not useful when dealing with complexity — something that Jon Kolko writes about in his article. If much of what we deal with in designing for human systems is about complexity, why are we anchoring our discussion to binary concepts such as ‘success’ and ‘failure’?
Failure exists only when we know what success looks like. If we are really being innovative, reframing the situation, getting to know our users (and discarding our preconceptions about them), how is it that we can fail? I have argued that the only thing we can steadfastly fail at in these conditions is learning. We can fail to build in mechanisms for data gathering, sensemaking, sharing, and reflecting that are associated with learning, but otherwise what we learn is valuable.
Reframing Our Models
The very fact that this article is in the Harvard Business Review says much about the intended audiences for this piece. I am sympathetic to the authors, and my critique has focused on the details of how the work is expressed, not the intent or capacity of those who created it. However, choices have consequences, and the outcome of this article is that design thinking is framed in terms of generating business improvements. Those are worthy goals, but not the only ones possible.
One of the reasons concepts like ‘failure’ apply to so much of the business literature is that the outcomes are framed in binary or simple terms. It is about improvement, efficiency, profit, and productivity. Business outcomes might also include customer satisfaction, purchase actions, or brand recognition. All of these benefit the company, not necessarily the customer, client, patient, person, or citizen.
If we were truly tackling human-centred problems, we might approach them differently and ask different questions. Terms like failure actually do apply within the business context, not because they support innovation per se, but because the outcomes are pre-set.
Bason and Austin’s research is not without merit for many reasons. Firstly, it is evidence-based. They have done the work by interviewing, synthesizing, commenting on, and publishing the research. That in itself makes it a worthy contribution to the field.
It also provides commentary and insight on some practical areas of design leadership that readers can apply right away by highlighting roles for leaders.
One of these roles is in managing the tension between divergent and convergent thought and development processes in design work. This includes managing the insecurities that many design teams may express in dealing with the design process and the volume of disorganized content it can generate.
The exemplary leaders we observed ensured that their design-thinking project teams made the space and time for diverse new ideas to emerge and also maintained an overall sense of direction and purpose.
Bason & Austin, HBR 2019
Another key role of the design leader is to support future thinking. By encouraging design teams to explore and test their work in the context of what could be, not just what is, leaders reframe the goals of the work and the outcomes in ways that support creativity.
Lastly, a key strength of the piece was the encouragement of multi-media forms of engagement and feedback. The authors chose to illustrate how leaders supported their teams in thinking differently about not only the design process but the products for communicating that process (and resulting products) to each other and the outside world. Too often the work of design is lost in translation because the means of communication have not been designed for the outcomes that are needed — something akin to design-driven evaluation.
Language, Learning, Outcomes
By improving how we talk about what we do, we get better at framing questions about what we do and what impact it has. Doing the right thing means knowing what the wrong thing is. Without evaluation, we run the risk in design of doing what Russell Ackoff cautioned against: doing the wrong things righter.
Reading between the lines of the data — the stories and examples — presented in the article by Bason and Austin, one finds the role of managing fear: fear of ‘failure’, fear born of confusion, fear of not doing good work. Design, if it is anything, is optimistic: it is about making an effort to solve problems, taking action, and generating something that makes a difference. Design leadership is about supporting that work, bringing it into our organizations, and making it accessible.
That is an outcome worth striving for. While there are missed opportunities here, there is also much to build on and lead from.
A greater push to include evaluation data in decision-making and to support innovation will not generate value if the evaluations themselves have little usefulness in the first place. A design-driven approach to evaluation is the means to transform utilization into both present and future utility.
I admit to being puzzled the first time I heard the term utilization-focused evaluation. What good is an evaluation if it isn’t utilized, I thought? Why do an evaluation in the first place if not to have it inform some decisions, even if just to assess how past decisions turned out? Experience has taught me that this happens more often than I ever imagined, and evaluation can be simply an exercise in ‘faux’ accountability: a checking off of a box to say that something was done.
Utilization-focused evaluation (U-FE) is an approach to evaluation, not a method. Its central focus is engaging the intended users in the development of the evaluation and ensuring that those users are involved in decision-making about the evaluation as it moves forward. It is based on the idea (and research) that an evaluation is far more likely to be used if it is grounded in the expressed desires of the users and if those users are involved in the evaluation process throughout.
Design is the craft, production, and thinking associated with creating products, services, systems, or policies that have a purpose. In service of this purpose, designers will explore multiple issues associated with the ‘user’ and the ‘use’ of something — what are the needs, wants, and uses of similar products. Good designers go beyond simply asking about these things; they measure, observe, and conduct design research ahead of the actual creation of something rather than taking things at face value. They also attempt to see beyond what is right in front of them to possible uses, strategies, and futures.
Design work is both an approach to a problem (a thinking & perceptual difference) and a set of techniques, tools, and strategies.
Utilization can run into problems when we take the present as an example of the future. Steve Jobs didn’t ask users for ‘1000 songs in their pockets’, nor was Henry Ford told he needed to build the automobile instead of giving people faster horses (even if the oft-quoted line about this was a lie). The impact of their work came from being able to see possibilities and orchestrate what was needed to make those possibilities real.
Utilization of evaluation is about making what is fit better for use by taking into consideration the user’s perspective. A design-driven evaluation looks beyond this to what could be. It also considers how what we create today shapes what decisions and norms come tomorrow.
Designing for Humans
Among the false statements attributed to Henry Ford about people wanting faster horses is a more universal false statement said by innovators and students alike: “I love learning.” Many humans love the idea of learning or the promise of learning, but I would argue that very few love learning with the sense of absoluteness that the phrase above conveys. Much of our learning comes from painful, frustrating, prolonged experiences and is sometimes boring, covert, and confusing. It might be delayed in how it manifests, with its true effects not felt until long after the ‘lesson’ is taught. Learning is, however, useful.
A design-driven approach seeks to work with human qualities and design for them. For example, a utilization-focused evaluation approach might yield a process that involves regular gatherings to discuss an evaluation, or reports that use a particular language, style, and layout to convey the findings. These are what the users in this case are asking for and what they see as making evaluation findings appealing, and thus they are built into the process.
Except, what if the regular gatherings don’t involve the right people, are difficult to set up and thus ignored, or when those people show up they are distracted with other things to do (because this process adds another layer of activity into a schedule that is already full)? What if the reports that are generated are beautiful, but then sit on a shelf because the organization doesn’t have a track record of actually drawing on reports to inform decisions despite wanting such a beautiful report? (We see this with so many organizations that claim to be ‘evidence-based’ yet use evidence haphazardly, arbitrarily, or don’t actually have the time to review the evidence).
What we get are things created with the best intentions for use, but not based on the actual behaviour of those involved. Asking about this and designing for it is not just an approach; it’s a way of doing an evaluation.
Building Design into Evaluation
There are a couple of approaches to introducing design for evaluation. The first is to develop certain design skills — such as design thinking and applied creativity. This work is being done as part of the Design Loft Experience workshop held at the annual American Evaluation Association conference. The second is more substantive and that is about incorporating design methods into the evaluation process from the start.
Design thinking has become popular as a means of expressing aspects of design in ways that have been taken up by evaluators. Design thinking is often characterized by a playful approach to generating new ideas and then prototyping those ideas to find the best fit. Lego, play dough, markers, and sticky notes are some of the tools of the trade. Design thinking can be a powerful way to expand perspectives and generate something new.
Specific techniques, such as those taught at the AEA Design Loft, can provide valuable ways to re-imagine what an evaluation could look like and support design thinking. However, as I’ve written here, there is a lot of hype, over-selling, and general bullshit being spouted in this realm, so proceed with some caution. Evaluation can help design thinking just as much as design thinking can help evaluation.
What Design-Driven Evaluation Looks Like
A design-driven evaluation takes as its premise a few key things:
Holistic. Design-driven evaluation is a holistic approach to evaluation that extends thinking about utility to everything from the consultation process and engagement strategy to instrumentation, dissemination, and discussions of use. Good design isn’t applied only to one part of the evaluation, but to the entire thing, from process to products to presentations.
Systems thinking. It also utilizes systems thinking in that it expands the conversation about evaluation use beyond the immediate stakeholders to consider other potential users and their positions within the program’s system of influence. Thus, a design-driven evaluation might ask: who else might use or benefit from this evaluation? How do they see the world? What would use mean to them?
Outcome and process oriented. Design-driven evaluations are directed toward an outcome (although that may be altered along the way if used in a developmental manner), but designers are agnostic about the route to the outcome. An evaluation must have integrity in its methods, but it must also be open to adaptation as needed to ensure that the design is optimal for use. Attending to the process of designing and implementing the evaluation is an important part of this kind of evaluation.
Aesthetics matter. This is not about making things pretty; it is about making things attractive. This means creating evaluations that are not ignored. It isn’t about gimmicks, tricks, or misrepresenting data; it’s about considering what will draw and hold attention from the outset, in form and function. One of the best ways is to create a meaningful engagement strategy for participants from the outset and involve people in the process in ways that fit with their preferences, availability, skill set, and desires rather than as tokens or simply as ‘role players.’ It’s about being creative in generating products that fit with what people actually use, not just what they want or think a good evaluation is. This might mean doing a short video or producing a series of blog posts rather than writing a report. Kylie Hutchinson has a great book on innovative reporting for evaluation that can expand your thinking about how to do this.
Inform Evaluation with Research. Research is not just meant to support the evaluation, but to guide the evaluation itself. Design research is about looking at what environments, markets, and contexts a product or service is entering. Design-driven evaluation means doing research on the evaluation itself, not just for the evaluation.
Future-focused. Design-driven evaluation draws data from social trends and drivers associated with the problem, situation, and organization involved in the evaluation to not only design an evaluation that can work today but one that anticipates use needs and situations to come. Most of what constitutes use for evaluation will happen in the future, not today. By designing the entire process with that in mind, the evaluation can be set up to be used in a future context. Methods of strategic foresight can support this aspect of design research and help strategically plan for how to manage possible challenges and opportunities ahead.
Design-driven evaluation also works well with principles-focused evaluation. Good design is often grounded in key principles that drive its work. One of the most salient of these is accessibility — making what we do accessible to those who can benefit from it. This extends to considering what it means to create things that are accessible to those with visual, hearing, or cognitive impairments (or, when doing things in physical spaces, making them available to those who have mobility issues).
Accessibility is also about making information understandable: avoiding unnecessary jargon, using the appropriate language for each audience, using plain language where possible, and accounting for literacy levels. It’s also about designing systems of use for inclusiveness. This means going beyond things like creating an executive summary for a busy CEO (when that over-simplifies certain findings) to designing space within that leader’s schedule and work environment to engage with the material in the manner that makes sense for them. This might be a different format of document, a podcast, a short interactive video, or even a walking-meeting presentation.
There are also many principles of graphic design and presentation that can be drawn on (that will be expanded on in future posts). Principles for service design, presentations, and interactive use are all available and widely discussed. What a design-driven evaluation does is consider what these might be and build them into the process. While design-driven evaluation is not necessarily a principles-focused one, they can be and are very close.
This is the first in a series of posts that will be forthcoming on design-driven evaluation. It’s a starting point and far from the end. By taking into account how we create not only our programs but their evaluation from the perspective of a designer we can change the way we think about what utilization means for evaluation and think even more about its overall experience.
Every living thing has a journey that starts somewhere and ends eventually. Our ability to see this, understand it, and apply what we know about how humans grow and develop (as individuals and organizations) is what helps us determine how this journey unfolds and where it ends up.
The psychology of individuals is a complicated affair that involves understanding a variety of matters, from personal and family history and genetics to cultural context, education, and social situation. While all of these contribute to who we are as people, the degree of influence and the mix differ from person to person. We are each the product of a collection of forces that combine in various ways, and this holistic complexity makes understanding how we change a challenge.
For example, some of us might have behaviours and preferences associated with a certain personality type (extroverted or introverted) and find that quality to be relatively stable across the lifespan. While there are times we might exhibit qualities of another type, those are more situational than stable. For those who are more of an ambivert, identification with a particular preference might be more challenging. Whatever investment you place in this kind of personality assessment, what is important is that the stability and consistency of certain characteristics are what largely shape our identity to others (and ourselves). It’s what makes us ‘us’.
From Individuals to Organizations
It has been argued that organizations exhibit much the same kind of characteristic habits of their own, while also aggregating, to varying degrees, the characteristics of those within them and those leading them. Personality theory has been applied to organizational behaviour as a means of understanding how certain actions, activities, habits, and patterns form within organizations, and what their implications are. This involves taking ideas developed for individuals and applying them to groups, and the implications of doing so are considerable.
If we take seriously the idea that organizations are similar to humans, it has significant implications for the way we engage in organizational change efforts. Much of the research on organizational change is tied to the development and implementation of a strategy. Strategy, in most conventional applications, is an expression of intent made manifest through specific choices of focus and action. This approach rests largely on a cognitive rational model of change, where information (e.g., data, ‘facts’, perceptions, beliefs, and opinion) guides an assessment of the situation that forms the basis for a plan of action. The idea is that we see and learn things and plan and act according to that knowledge.
Most individual behaviour change models are founded on this approach, with thinking preceding action in a relatively rational, logical manner based on an objective assessment of the facts and evidence (with some emotional contributions here and there to make life interesting). So if we tie organizational change to the same kinds of mechanisms and models that we use to understand individuals, should we not apply similar modes of change facilitation? We do — but it’s how we do it that might be the problem.
Change Theory to Change Reality
One of the most vexing (and little discussed) issues for behavioural scientists is that the application of the cognitive rational model to personal, organizational, and social change has a rather unimpressive track record. A look at how people change finds that relatively little change comes from rationally reviewing a threat or opportunity and planning out a strategy (never mind executing the planned strategy as envisioned). Even when the effects are modest, factors such as the match between the person, the technique or intervention approach, and the problem being addressed continue to mediate the outcomes.
What happens when our theories and our practices don’t really work? Or at least don’t work as well as we think they do?
The answer — using the very argument that we are looking to disprove — is that we will address the matter as many individuals might: disagreement, resistance, and denial.
The field of organizational decision-making and innovation is littered with case studies that show how, in the face of overwhelming evidence to the contrary, organizations (like many individuals) resist change. Whether it was how slowly those on the Titanic accepted the fact that their ship would sink after hitting the iceberg (never mind the perception that the ship was invulnerable to begin with) or companies that persist with a strategy that doesn’t match changing times (e.g., Kodak and its photographic film business, Sears and its retail model), the inability to see, or unwillingness to perceive or accept, changing situations has led to major problems.
These problems are a matter of failing to change or adapt. To quote from The Leopard:
If we want things to stay as they are, things will have to change
Change is something we need to do even if that is simply to maintain the status quo.
Person-Centred Organizational Change
Erik Erikson, the German-American psychoanalyst whose work focused on identity formation and development, was among the few to challenge the belief that people’s essential character was immutable and resistant to change. (The dominant view was that thinking and behaviour could change, but not ‘how one was’ as a person.) He did, however, acknowledge that changing who we are is not easy and takes a lifetime. This flies in the face of the dominant thinking in Western societies that we can make dramatic changes in an instant.
While talk shows and popular self-help books are filled with stories of dramatic transformation and inspiration about how you can change everything in an instant, the truth is that these cases are outliers (and often exaggerations or misrepresentations). Much like the artist who ‘breaks out’ and becomes an ‘overnight sensation’, the journey to stardom is usually a long one: a long, slow climb over time followed by a very quick rise at the end. What is misread in these success stories is that the rapid change is the product of a long, protracted build-up.
If we extrapolate from the work of both Erikson and Ericsson, we might develop a model of behaviour change that looks quite different from what we have at present. Instead of trying 5-year plans, strategic goals, and inspirational visions of the future, we might be better off delving into an organization’s past, its formation, its core beliefs and personality, and spending more time looking at what it is already doing than at what it seeks to do.
We might then look at what it deliberately works at day in and day out, and emphasize ways to amplify the feedback that helps people learn deliberately and consistently. We might take these lessons — much like the small, tiny adjustments that expert violinists, athletes, and surgeons make to hone their craft — and make them visible and build on them. We would look upon organizations as developing organizations, using approaches that fit with them developmentally (e.g., developmental evaluation). We would treat organizations as we would treat people.
Which is kind of funny because organizations are made of people. That’s some change.