
Design-driven Evaluation

Fun Translates to Impact

A greater push to include evaluation data in decision-making and to support innovation generates little value if the evaluations themselves are of little use in the first place. A design-driven approach to evaluation is a means of transforming utilization into both present and future utility.

I admit to being puzzled the first time I heard the term utilization-focused evaluation. What good is an evaluation if it isn’t utilized, I thought? Why do an evaluation in the first place if not to have it inform some decision, even if just to assess how past decisions turned out? Experience has taught me that this happens more often than I ever imagined: evaluation can be simply an exercise in ‘faux’ accountability, a checking off of a box to say that something was done.

This is why utilization-focused evaluation (U-FE) is another invaluable contribution to the field of practice by Michael Quinn Patton.

U-FE is an approach to evaluation, not a method. Its central focus is engaging the intended users in the development of the evaluation and ensuring that users are involved in decision-making about the evaluation as it moves forward. It is based on the idea (and research) that an evaluation is far more likely to be used if grounded in the expressed desires of the users and if those users are involved in the evaluation process throughout.

This approach generates a participatory activity chain that can be adapted for different purposes, as we’ve seen in approaches and methods such as developmental evaluation, contribution analysis, and principles-focused evaluation.

Beyond Utilization

Design is the craft, production, and thinking associated with creating products, services, systems, or policies that have a purpose. In service of this purpose, designers explore issues associated with the ‘user’ and the ‘use’ of something: what are the needs, wants, and uses of similar products? Good designers go beyond simply asking about these things to measuring, observing, and conducting design research ahead of the actual creation of something, rather than taking things at face value. They also attempt to see beyond what is right in front of them to possible uses, strategies, and futures.

Design work is both an approach to a problem (a thinking & perceptual difference) and a set of techniques, tools, and strategies.

Utilization can run into problems when we take the present as a template for the future. Steve Jobs didn’t ask users for ‘1,000 songs in your pocket’, nor did customers tell Henry Ford to build the automobile instead of giving them faster horses (even if the oft-quoted line about this is apocryphal). The impact of their work came from being able to see possibilities and orchestrate what was needed to make those possibilities real.

Utilization of evaluation is about making what exists fit better for use by taking the user’s perspective into consideration. A design-driven evaluation looks beyond this to what could be. It also considers how what we create today shapes the decisions and norms that come tomorrow.

Designing for Humans

Among the false statements attributed to Henry Ford about people wanting faster horses is a more universal false statement said by innovators and students alike: “I love learning.” Many humans love the idea of learning or the promise of learning, but I would argue that very few love learning with the sense of absoluteness that the phrase conveys. Much of our learning comes from painful, frustrating, prolonged experiences and is sometimes boring, covert, and confusing. It can be delayed in how it manifests, with its true effects not felt until long after the ‘lesson’ is taught. Learning is, however, useful.

A design-driven approach seeks to work with human qualities and design for them. For example, a utilization-focused evaluation might yield a process that involves regular gatherings to discuss the evaluation, or reports that use a particular language, style, and layout to convey the findings. These are what the users, in this case, have asked for and what they see as making evaluation findings appealing, and so they are built into the process.

Except, what if the regular gatherings don’t involve the right people, are difficult to set up and thus get ignored, or the people who do show up are distracted by other things to do (because this process adds another layer of activity to an already full schedule)? What if the reports are beautiful, but sit on a shelf because the organization doesn’t have a track record of actually drawing on reports to inform decisions, despite wanting such a beautiful report? (We see this with many organizations that claim to be ‘evidence-based’ yet use evidence haphazardly or arbitrarily, or don’t actually have the time to review it.)

What we get are things created with the best of intentions for use, but not grounded in the actual behaviour of those involved. Asking about that behaviour and designing for it is not just an approach; it’s a way of doing an evaluation.

Building Design into Evaluation

There are a couple of approaches to introducing design into evaluation. The first is to develop certain design skills, such as design thinking and applied creativity; this work is being done as part of the Design Loft Experience workshop held at the annual American Evaluation Association conference. The second is more substantive: incorporating design methods into the evaluation process from the start.

Design thinking has become popular as a means of expressing aspects of design in ways that have been taken up by evaluators. It is often characterized by a playful approach to generating new ideas and then prototyping those ideas to find the best fit; Lego, play dough, markers, and sticky notes are some of the tools of the trade. Design thinking can be a powerful way to expand perspectives and generate something new.

Specific techniques, such as those taught at the AEA Design Loft, can provide valuable ways to re-imagine what an evaluation could look like and support design thinking. However, as I’ve written here, there is a lot of hype, over-selling, and general bullshit being spouted in this realm, so proceed with some caution. Evaluation can help design thinking just as much as design thinking can help evaluation.

What Design-Driven Evaluation Looks Like

A design-driven evaluation takes as its premise a few key things:

  • Holistic. Design-driven evaluation is a holistic approach that extends thinking about utility to everything from the consultation process and engagement strategy to instrumentation, dissemination, and discussions of use. Good design isn’t applied to only one part of the evaluation, but to the entire thing, from process to products to presentations.
  • Systems thinking. It also draws on systems thinking, expanding the conversation about evaluation use beyond the immediate stakeholders to consider other potential users and their positions within the program’s system of influence. Thus, a design-driven evaluation might ask: who else might use or benefit from this evaluation? How do they see the world? What would use mean to them?
  • Outcome and process oriented. Design-driven evaluations are directed toward an outcome (although that may be altered along the way if used in a developmental manner), but designers are agnostic about the route to that outcome. An evaluation must maintain integrity in its methods, but it must also be open to adaptation as needed to ensure the design is optimal for use. Attending to the process of designing and implementing the evaluation is an important part of this kind of evaluation.
  • Aesthetics matter. This is not about making things pretty; it is about making things attractive: creating evaluations that are not ignored. This isn’t about gimmicks, tricks, or misrepresenting data; it’s about considering what will draw and hold attention, in form and function, from the outset. One of the best ways is to create a meaningful engagement strategy for participants from the start, involving people in ways that fit their preferences, availability, skills, and desires rather than as tokens or mere ‘role players.’ It’s also about being creative in generating products that fit what people actually use, not just what they want or think a good evaluation is. This might mean producing a short video or a series of blog posts rather than writing a report. Kylie Hutchinson has a great book on innovative reporting for evaluation that can expand your thinking about how to do this.
  • Inform evaluation with research. Research is not just meant to support the evaluation, but to guide the evaluation itself. Design research looks at the environments, markets, and contexts a product or service is entering. Design-driven evaluation means doing research on the evaluation itself, not just for the evaluation.
  • Future-focused. Design-driven evaluation draws on data from social trends and drivers associated with the problem, situation, and organization involved, to design an evaluation that not only works today but anticipates the use needs and situations to come. Most of what constitutes use for an evaluation will happen in the future, not today. By designing the entire process with that in mind, the evaluation can be set up for use in a future context. Methods of strategic foresight can support this aspect of design research and help plan strategically for possible challenges and opportunities ahead.

Principles

Design-driven evaluation also works well with principles-focused evaluation. Good design is often grounded in key principles that drive the work. One of the most salient is accessibility: making what we do accessible to those who can benefit from it. This means considering what it takes to create things that are accessible to those with visual, hearing, or cognitive impairments (and, when working in physical spaces, available to those with mobility issues).

Accessibility is also about making information understandable: avoiding unnecessary jargon, using the appropriate language for each audience, using plain language when possible, and accounting for literacy levels. It’s also about designing systems of use for inclusiveness. This means going beyond creating an executive summary for a busy CEO (which can over-simplify certain findings) to designing space within that leader’s schedule and work environment to engage with the material in a manner that makes sense for them. This might be a different format of document, a podcast, a short interactive video, or even a walking-meeting presentation.

There are also many principles of graphic design and presentation that can be drawn on (these will be expanded on in future posts). Principles for service design, presentations, and interactive use are all available and widely discussed. What a design-driven evaluation does is consider what these might be and build them into the process. A design-driven evaluation is not necessarily a principles-focused one, but it can be, and the two are very close.

This is the first in a series of posts on design-driven evaluation: a starting point, far from the end. By considering how we create not only our programs but also their evaluations from the perspective of a designer, we can change the way we think about what utilization means for evaluation and think even more about its overall experience.


Elevating Design & Design Thinking


Design thinking has brought the language of design into popular discourse across different fields, but its failings threaten to undermine the benefits it brings if they aren’t addressed. In this third post in a series, we look at how Design (and Design Thinking) can elevate itself above these failings and match the hype with real impact.

In two previous posts, I called on ‘design thinkers’ to get the practice out of its ‘bullshit’ phase, characterized by high levels of enthusiastic banter, hype, and promotion and low levels of evidence, evaluation, or systematic practice.

Despite the criticism, it’s time for Design Thinking (and the field of Design more broadly) to be elevated beyond its current station. I’ve been critical of Design Thinking for years: its popularity has been helpful in some ways, problematic in others. Others have been critical, too. Bill Storage, writing in 2012 (in a post now unavailable), said:

Design Thinking is hopelessly contaminated. There’s too much sleaze in the field. Let’s bury it and get back to basics like good design.

Bruce Nussbaum, who helped popularize Design Thinking in the early 2000s, called it a ‘failed experiment’ and sought to promote the concept of Creative Intelligence instead. While many have called for Design Thinking to die, that isn’t going to happen anytime soon. Since I first published a piece on Design Thinking’s problems five years ago, the practice has only grown. Design Thinking is going to continue to grow despite its failings, and that’s why it matters that we pay attention to it and seek to make it better.

A lack of quality control, of standardized and documented methods, and of evidence of impact is among the biggest problems facing Design Thinking if it is to achieve anything substantive beyond generating money for those promoting it.

Giving design away, better

It’s hard to imagine that psychological concepts like personality, psychosis, motivation, and performance measurement were once unknown to most people. Yet before the 1980s, much of the public’s understanding of psychology was confined to largely distorted beliefs about Freudian psychoanalysis, mental illness, and rat mazes. Psychology is now firmly ensconced in business, education, marketing, public policy, and many other professions and fields. Daniel Kahneman, a psychologist, won the Nobel Prize in Economics in 2002 for his work applying psychological and cognitive science to economic decision-making.

The reason for this has much to do with George Miller, who, as President of the American Psychological Association, used his position to advocate that professional psychology ‘give away’ its knowledge to ensure its benefits were more widespread. This included creating better means of communicating psychological concepts to non-psychologists and generating the kind of evidence that could show their benefits.

Design Thinking is at a stage of similarly broad adoption beyond professional design, into those same fields of business, education, the military, and beyond. While there has been much debate about whether design thinking as practiced by non-designers (like MBAs) is good for the field as a whole, there is little debate that it has become popular, just as psychology did.

Where psychology went wrong is that it gave so much away that it failed to engage other disciplines enough to support quality adoption and promotion, and it simultaneously weakened itself as newfound enthusiasts pursued training in those other disciplines. Now some of the best psychological practice is done by social workers, and the most relevant research comes from areas like organizational science and newer ‘sub-disciplines’ like behavioural economics.

Design Thinking is already being taught, promoted, and practiced by non-designers. What these non-designers often lack is the ‘crit’ and craft of design to elevate their designs. And what Design lacks is the evaluation, evidence, and transparency to elevate its work beyond itself.

So what next?

Elevating Design

As Design moves beyond its traditional realms of products and structures into services and systems (enabled partly by Design Thinking’s popularity), the implications are enormous, as are the dangers. Poorly thought-through designs have the potential to exacerbate problems rather than solve them.

Charles Eames knew this. He argued that innovation (which is what design is all about) should be a last resort, and that it is the quality of the connections we create (among ideas, people, disciplines, and more) that determines what we produce and its impact on the world. Eames and his wife Ray deserve credit for contributing to the elevation of the practice of design through their myriad creations and their steadfast documentation of their work. The Eameses did not allow themselves to be confined by labels such as product designer, interior designer, or artist. They stretched their profession by applying craft, learning with others, and practicing what they preached in terms of interdisciplinarity.

It’s now time for another elevation moment. Designers can no longer be satisfied with client approval as the key criterion for success. Sustainability, social impact, and learning and adaptation through behaviour change are criteria many designers will need to embrace if they are to operate beyond the field’s traditional domains (as we are now seeing more often). This requires that designers know how to evaluate and study their work. They need to communicate with their clients better on these issues, and they must make what they do more transparent. In short: designers need to give design away (and not just through a weekend design thinking seminar).

Not every designer must get a Ph.D. in behavioural science, but those who work on matters of social and service design, for example, will need to know something about that domain. Designers don’t have to become professional evaluators, but they will need to know how to document and measure what they do and what impact it has on those touched by their designs. Understanding research, including a basic grasp of statistics and quantitative and qualitative methods, is another area that requires shoring up.

Designers don’t need to become researchers, but they must have research and evaluation literacy. Just as it is becoming increasingly unacceptable for program designers from fields like public policy and administration, public health, social services, and medicine to lack an understanding of design principles, it is no longer feasible for designers to be ignorant of proper research methods.

It’s not impossible. Clinical psychologists went from being mostly practitioners to scientist-practitioners. Professional social workers are now well versed in research even if they typically focus on policy and practice. Elevating the field of Design means accepting that being an effective professional requires certain skills, and that research and evaluation are now part of that skill set.

Designing for elevated design

This doesn’t all have to fall on designers; support can come from the very people who are attracted to Design Thinking. Psychologists, physicians, and organizational scientists (among others) can all help designers build their literacy in this area.

Adding research courses that go beyond ethnography and observation, giving design students exposure to survey methods, secondary data analysis, ‘big data,’ grounded theory approaches, and blended models of data collection, is one option. Bring the behavioural and data scientists into the design curriculum (and get designers into the curricula that train those professionals).

Create opportunities for designers to do research, publish, and present it with the same ‘crit’ that they bring to their designs. Just as behavioural scientists expose their research to peer review, designers can do the same with theirs. This is a golden opportunity for an exchange of ideas and skills between the design community and those in the program evaluation and research domains.

This last point is what the Design Loft initiative has sought to do. Now in its second year, the Design Loft is a training program aimed at exposing professional evaluators to design methods and tools: not to train them as designers, but to increase their literacy and confidence in engaging with Design. The Design Loft could do the same for designers, training them in the methods and tools of evaluation. It’s but one example.

In an age where interdisciplinarity is spoken of frequently, this provides a practical means of doing it, and in a way that offers a chance to elevate design much as the Eameses did, as Milton Glaser did, and as George Miller did for psychology. The time is now.

If you are interested in learning more about the Design Loft initiative, connect with Cense. If you’re a professional evaluator attending the 2017 American Evaluation Association conference in Washington, the Design Loft will be held on Friday, November 10th.
