
Tags: design thinking, evaluation

Design-driven Evaluation

Fun Translates to Impact

A greater push to use evaluation data to make decisions and support innovation generates little value if the evaluations themselves are of little use in the first place. A design-driven approach to evaluation is a means of transforming utilization into both present and future utility.

I admit to being puzzled the first time I heard the term utilization-focused evaluation. What good is an evaluation if it isn't utilized, I thought? Why do an evaluation in the first place if not to have it inform some decisions, even if just to assess how past decisions turned out? Experience has taught me that this happens more often than I ever imagined, and that evaluation can simply be an exercise in 'faux' accountability: a checking off of a box to say that something was done.

This is why utilization-focused evaluation (U-FE) is another invaluable contribution to the field of practice by Michael Quinn Patton.

U-FE is an approach to evaluation, not a method. Its central focus is engaging the intended users in the development of the evaluation and ensuring that users are involved in decision-making about the evaluation as it moves forward. It is based on the idea (and research) that an evaluation is far more likely to be used if grounded in the expressed desires of the users and if those users are involved in the evaluation process throughout.

This approach generates a participatory chain of activity that can be adapted for different purposes, as we've seen in evaluation approaches and methods such as developmental evaluation, contribution analysis, and principles-focused evaluation.

Beyond Utilization

Design is the craft, production, and thinking associated with creating products, services, systems, or policies that have a purpose. In service of this purpose, designers explore multiple issues associated with the 'user' and the 'use' of something: what are the needs, wants, and uses of similar products? Good designers go beyond simply asking about these things; they measure, observe, and conduct design research ahead of actually creating something rather than taking things at face value. They also attempt to see beyond what is right in front of them to possible uses, strategies, and futures.

Design work is both an approach to a problem (a thinking & perceptual difference) and a set of techniques, tools, and strategies.

Utilization can run into problems when we treat the present as a template for the future. Steve Jobs didn't ask users for '1,000 songs in your pocket', nor did customers tell Henry Ford to invent the automobile instead of giving them faster horses (even if the oft-quoted line about this is apocryphal). The impact of their work came from being able to see possibilities and orchestrate what was needed to make those possibilities real.

Utilization of evaluation is about making what exists fit better for use by taking the user's perspective into consideration. A design-driven evaluation looks beyond this to what could be. It also considers how what we create today shapes the decisions and norms that come tomorrow.

Designing for Humans

Among the false statements attributed to Henry Ford about people wanting faster horses is a more universal false statement uttered by innovators and students alike: "I love learning." Many humans love the idea of learning or the promise of learning, but I would argue that very few love learning with the absoluteness that the phrase conveys. Much of our learning comes from painful, frustrating, prolonged experiences and is sometimes boring, covert, and confusing. Its effects may also be delayed, not felt until long after the 'lesson' is taught. Learning is, however, useful.

A design-driven approach seeks to work with human qualities and design for them. For example, a utilization-focused evaluation might yield a process that involves regular gatherings to discuss the evaluation, or reports that use a particular language, style, and layout to convey the findings. These are what the users, in this case, are asking for and what they see as making evaluation findings appealing, and so they are built into the process.

Except, what if the regular gatherings don't involve the right people, are difficult to set up and thus ignored, or when those people do show up they are distracted by other things to do (because this process adds another layer of activity to a schedule that is already full)? What if the reports that are generated are beautiful, but then sit on a shelf because the organization doesn't have a track record of actually drawing on reports to inform decisions, despite wanting such a beautiful report? (We see this with so many organizations that claim to be 'evidence-based' yet use evidence haphazardly or arbitrarily, or don't actually have the time to review the evidence.)

What we get is something created with the best of intentions for use, but not grounded in the actual behaviour of those involved. Asking about that behaviour and designing for it is not just an approach; it's a way of doing an evaluation.

Building Design into Evaluation

There are a couple of approaches to introducing design into evaluation. The first is to develop certain design skills, such as design thinking and applied creativity; this work is being done as part of the Design Loft Experience workshop held at the annual American Evaluation Association conference. The second is more substantive: incorporating design methods into the evaluation process from the start.

Design thinking has become popular as a means of expressing aspects of design in ways that have been taken up by evaluators. It is often characterized by a playful approach to generating new ideas and then prototyping those ideas to find the best fit. Lego, play dough, markers, and sticky notes are some of the tools of the trade. Design thinking can be a powerful way to expand perspectives and generate something new.

Specific techniques, such as those taught at the AEA Design Loft, can provide valuable ways to re-imagine what an evaluation could look like and support design thinking. However, as I've written here, there is a lot of hype, over-selling, and general bullshit being spouted in this realm, so proceed with some caution. Evaluation can help design thinking just as much as design thinking can help evaluation.

What Design-Driven Evaluation Looks Like

A design-driven evaluation takes as its premise a few key things:

  • Holistic. Design-driven evaluation is a holistic approach that extends thinking about utility to everything from the consultation process and engagement strategy to instrumentation, dissemination, and discussions of use. Good design isn't applied to only one part of the evaluation, but to the entire thing, from process to products to presentations.
  • Systems thinking. It also draws on systems thinking in that it expands the conversation about evaluation use beyond the immediate stakeholders to consider other potential users and their positions within the program's system of influence. Thus, a design-driven evaluation might ask: who else might use or benefit from this evaluation? How do they see the world? What would use mean to them?
  • Outcome and process oriented. Design-driven evaluations are directed toward an outcome (although that may be altered along the way if used in a developmental manner), but designers are agnostic about the route to that outcome. An evaluation must maintain integrity in its methods, but it must also remain open to adaptation as needed to ensure that the design is optimal for use. Attending to the process of designing and implementing the evaluation is an important part of this kind of evaluation.
  • Aesthetics matter. This is not about making things pretty; it is about making things attractive, which means creating evaluations that are not ignored. This isn't about gimmicks, tricks, or misrepresenting data; it's about considering what will draw and hold attention from the outset, in form and function. One of the best ways is to create a meaningful engagement strategy from the outset, involving people in the process in ways that fit their preferences, availability, skill sets, and desires rather than treating them as tokens or simply as 'role players.' It's about being creative in generating products that fit what people actually use, not just what they want or think a good evaluation is. This might mean producing a short video or a series of blog posts rather than writing a report. Kylie Hutchinson has a great book on innovative reporting for evaluation that can expand your thinking about how to do this.
  • Inform Evaluation with Research. Research is not just meant to support the evaluation, but to guide the evaluation itself. Design research is about looking at what environments, markets, and contexts a product or service is entering. Design-driven evaluation means doing research on the evaluation itself, not just for the evaluation.
  • Future-focused. Design-driven evaluation draws data from social trends and drivers associated with the problem, situation, and organization involved in the evaluation to not only design an evaluation that can work today but one that anticipates use needs and situations to come. Most of what constitutes use for evaluation will happen in the future, not today. By designing the entire process with that in mind, the evaluation can be set up to be used in a future context. Methods of strategic foresight can support this aspect of design research and help strategically plan for how to manage possible challenges and opportunities ahead.

Principles

Design-driven evaluation also works well with principles-focused evaluation. Good design is often grounded in key principles that drive the work. One of the most salient of these is accessibility: making what we do accessible to those who can benefit from it. This means considering what it takes to create things that are accessible to those with visual, hearing, or cognitive impairments (or, when doing things in physical spaces, making them available to those with mobility issues).

Accessibility is also about making information understandable: avoiding unnecessary jargon, using the appropriate language for each audience, writing in plain language when possible, and accounting for literacy levels. It's also about designing systems of use for inclusiveness. This means going beyond creating an executive summary for a busy CEO (which can over-simplify certain findings) to designing space within that leader's schedule and work environment to engage with the material in the manner that makes sense for them. This might be a different document format, a podcast, a short interactive video, or even a walking-meeting presentation.

There are also many principles of graphic design and presentation that can be drawn on (these will be expanded on in future posts). Principles for service design, presentations, and interactive use are all available and widely discussed. What a design-driven evaluation does is consider what these might be and build them into the process. While a design-driven evaluation is not necessarily a principles-focused one, the two can be and often are very close.

This is the first in a series of forthcoming posts on design-driven evaluation. It's a starting point and far from the end. By considering how we create not only our programs but also their evaluations from the perspective of a designer, we can change the way we think about what utilization means for evaluation and give more thought to its overall experience.

Tags: complexity, design thinking, emergence, evaluation, innovation

Evaluation and Design For Changing Conditions

Growth and Development

The days of creating programs, products, and services and setting them loose on the world are coming to a close, posing challenges to the models we use for design and evaluation. Adding the term 'developmental' to both of these concepts, with an accompanying shift in mindset, can provide options for moving forward in these times of great complexity.

We're at the tail end of a revolution in product and service design that has generated some remarkable benefits for society (and its share of problems), creating the very objects that often define our work (e.g., computers). However, we are now in an age of interconnectedness and ever-expanding complexity: our disciplinary structures are modifying themselves, and "wicked problems" are becoming less rare.

Developmental Thinking

At the root of the problem is the concept of developmental thought. A critical mistake in comparative analysis, whether through data or rhetoric, is viewing static things and moving things through the same lens. Take, for example, a tree and a table. Both are made of wood (maybe even the same type of wood), yet their developmental trajectories are enormously different.

Wood > Tree

Wood > Table

Tables are relatively static. They may get scratched, painted, refinished, or modified slightly, but their inherent form, structure, and content are likely to remain constant over time. The tree is also made of wood, but it will grow larger, may lose branches and gain others; it will interact with the environment, providing homes for animals and hiding spaces or swings for small children; it will bear fruit (or pollen), change leaves, and grow around things, yet maintain enough structural integrity that a person coming back after 10 years would recognize that the tree looks similar.

It changes and it interacts with its environment. If it is a banyan tree or an oak, this interaction might take place very slowly; if it is bamboo, the same interaction might take place over a much shorter time frame.

If you were to take an antique table, record its measurements and qualities, and come back 20 years later, you would likely see an object that looks remarkably similar to the one you left. The timing of the initial observation matters little relative to when the second observation is made. The manner in which the table was used will have some effect on these observations, but only to a matter of degree; the fundamental look and structure are likely to remain consistent.

However, if we were to do the same with the tree, things could look wildly different. If the tree was a sapling, coming back 20 years later you might find an organism two, three, or four times its original size. If the tree was 120 years old, the differences might be minimal. Its species, growing conditions, and context matter a great deal.

Design for Development / Developmental Design

In social systems, and particularly ones operating amid great complexity, models of creating programs, policies, and products that are simply released into the world like a table are becoming anachronistic. Tables work for simple tasks and sometimes complicated ones, but not (at least consistently) for complex ones. It is in those areas that we need to consider the tree as the more appropriate model. However, in human systems these "trees" are designed: we create the social world, the policies, the programs, and the products, so design thinking is relevant and appropriate for those seeking to influence our world.

Yet, we need to go even further. Designing tables means creating a product and setting it loose. Designing for trees means constantly adapting and changing along the way. It is what I call developmental design. Tim Brown, the CEO of IDEO and one of the leading proponents of design thinking, has started to consider the role of design and complexity as well. Writing in the current issue of Rotman Magazine, Brown argues that designers should consider adapting their practice towards complexity. He poses six challenges:

  1. We should give up on the idea of designing objects and think instead about designing behaviours;
  2. We need to think more about how information flows;
  3. We must recognize that faster evolution is based on faster iteration;
  4. We must embrace selective emergence;
  5. We need to focus on fitness;
  6. We must accept the fact that design is never done.
That last point is what I argue is the critical feature of developmental design. To draw on another analogy, it is about tending gardens rather than building tables.

Developmental Evaluation

Brown also mentions information flows and emergence. Complex adaptive systems are the way they are because of the diversity and interaction of information. They are dynamic and evolving, and they thrive on feedback. Feedback can be random or structured, and it is the opportunity and challenge of evaluators to provide the means of collecting and organizing this feedback, channelling it to support strategic learning about the benefits, challenges, and unexpected consequences of our designs. Developmental evaluation is a method by which we do this.
Developmental evaluators work with their program teams to advise, co-create, and sense-make around the data generated from program activities. Ideally, a developmental evaluator is engaged with program implementation teams throughout the process. This is a different form of evaluation that builds on Michael Quinn Patton's Utilization-Focused Evaluation (PDF) methods and can incorporate much of the work of action research and participatory evaluation and research models as well, depending on the circumstance.

Bringing Design and Evaluation Together

To design developmentally and with complexity in mind, we need feedback systems in place. This is where developmental design and evaluation come together. If you are working in social innovation, attention to changing conditions, adaptation, building resilience, and (most likely) the need to show impact will be familiar to you. Developmental design + developmental evaluation, which I argue are two sides of the same coin, are ways to conceive of the creation, implementation, evaluation, adaptation, and evolution of initiatives working in complex environments.
This is not without challenge. Designers are not trained much in evaluation, and few evaluators have experience in design. Both areas are familiarizing themselves with complexity, but the level and depth of the knowledge base is still shallow (though growing). Efforts like those put forth by the Social Innovation Generation initiative and the Tamarack Institute for Community Engagement in Canada are good examples of places to start. Books like Getting to Maybe, M.Q. Patton's Developmental Evaluation, and Tim Brown's Change by Design are also primers for moving along.
However, these are starting points, and if we are serious about addressing the social, political, health, and environmental challenges posed to us in this age of global complexity, we need to launch from them into something more sophisticated that brings these areas closer together. The cross-training of designers, evaluators, and innovators of all stripes is a next step. So, too, is building the scholarship and research base for this emergent field of inquiry and practice. Better theories, evidence, and examples will make it easier to lift the many boats needed to traverse these seas.
It is my hope to contribute to some of that further movement, and I welcome your thoughts on ways to build developmental thinking into social innovation and social and health service work.

Image (Header) Growth by Rougeux

Image (Tree) Arbre en fleur by zigazou76

Image (Table) Table à ouvrage art nouveau (Musée des Beaux-Arts de Lyon) by dalbera

All used under licence.

Tags: art & design, evaluation

Visualizing Evaluation and Feedback

Seeing Ourselves in the Mirror of Feedback

Evaluation data is not always made accessible, and part of the reason is that it doesn't accurately reflect the world that people see. To make better decisions based on data, we may need to create mirrors that allow us to visualize things in ways that reflect what programs actually see.

Program evaluation is all about feedback and generating the kind of data that can provide actionable instruction to improve, sustain or jettison program activities. It helps determine whether a program is doing what it claims to be doing, what kind of processes are underway within the context of the program, and what is generally “going on” when people engage with a particular activity. Whether a program actually chooses to use the data is another matter, but at least it is there for people to consider.

A utilization-focused approach to evaluation centres on making data actionable and features a set of core activities (PDF) that help boost the likelihood that data will actually be used. Checklists such as the one referenced from IDRC do a great job of showing the complicated array of activities that go into making useful, user-centred, actionable evaluation plans and data. It isn't as simple as expressing an intent to use evaluations; much more needs to go into the data in the first place, and into the organization's readiness to use that data.

What the method of UFE and the related research on its application do not do is provide explicit, prescriptive methods for data collection and presentation. If they did, data visualization ought to be front and centre in that discussion.

Why?

If the data is complex, our ability to process the information generated from an evaluation might be limited, particularly if we are expected to connect disparate concepts. David McCandless has made a career of taking very large, complex topics and finding ways to visualize results that provide meaningful narratives people can engage with. His TED talk and books show how to use graphic design and data analytics to develop new visual stories through data that transcend the typical regression model or pie chart.

There is also a bias we have towards telling people things rather than allowing them to discover things for themselves. Robert Butler makes the case for the "Columbo" approach of inviting people to discover the truth in data in the latest issue of the Economist's Intelligent Life. He writes:

What we need to do is abandon the “information deficit” model. That’s the one that goes: I know something, you don’t know it, once you know what I know you will grasp the seriousness of the situation and change your behaviour accordingly. Greens should dump that model in favour of suggesting details that actually catch people’s interest and allow the other person to get involved.

Art — or at least visual data — is a means of doing this. By inviting conversation about data — much like art does — we invite participation, analysis and engagement with the material that not only makes it more meaningful, but also more likely to be used. It is hard to look at some of the visualizations at places like

At the very least, evaluators might want to consider ways to visualize data simply to improve the efficiency of their communications. To that end, consider Hans Rosling’s remarkably popular video produced by the BBC showing the income and health distributions of 200 countries over 200 years in four minutes. Try that with a series of graphs.
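To make that concrete, here is a minimal sketch, using Python's matplotlib and entirely made-up numbers, of the kind of single-picture summary a Rosling-style chart offers an evaluation. The site names, reach, outcome, and budget figures below are hypothetical placeholders; the point is simply that one well-designed image can let stakeholders read a whole portfolio at a glance.

```python
# A minimal, illustrative sketch (hypothetical data) of a Rosling-style
# bubble chart for evaluation findings: each bubble is a program site,
# positioned by reach (x) and outcome (y) and sized by budget.
import matplotlib.pyplot as plt

sites = ["North", "South", "East", "West"]   # hypothetical program sites
reach = [1200, 450, 2300, 800]               # participants served
outcome = [62, 71, 55, 80]                   # mean outcome score (0-100)
budget = [90, 40, 150, 60]                   # annual budget in $000s

fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(reach, outcome, s=[b * 5 for b in budget], alpha=0.5)

# Label each bubble so the chart can be read without a legend.
for name, x, y in zip(sites, reach, outcome):
    ax.annotate(name, (x, y), ha="center", va="center", fontsize=8)

ax.set_xlabel("Participants reached")
ax.set_ylabel("Mean outcome score")
ax.set_title("Program portfolio at a glance (illustrative data)")
plt.tight_layout()
plt.show()
```

Even a static snapshot like this invites the kind of conversation an animated Rosling chart does: people start asking why one site reaches so many yet scores so low, which is exactly the engagement with the data we are after.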


Tags: design thinking, education & learning, evaluation, innovation, research

Design Thinking & Evaluation

Design Thinking Meets Evaluation (by Lumaxart, Creative Commons Licence)

This morning at the American Evaluation Association meeting in San Antonio I attended a session very near and dear to my heart: design thinking and evaluation.

I have been a staunch believer that design thinking ought to be one of the most prominent tools for evaluators and that evaluation ought to be one of the principal components of any design thinking strategy. This morning, I was with my “peeps”.

Specifically, I was with Ching Ching Yap, Christine Miller and Robert Fee and about 35 other early risers hoping to learn about ways in which the Savannah College of Art and Design (SCAD) uses design thinking in support of their programs and evaluations.

The presenters went through a series of outlines for design thinking and what it is (more on that in a follow-up post), but what I wanted to focus on here is the way in which evaluation and design thinking fit together more broadly.

Design thinking is an approach that encourages participatory engagement in planning and setting out objectives, as well as in ideation, development, prototyping, testing, and refinement. In evaluation terms, it is akin to action research and utilization-focused evaluation (PDF). But perhaps its closest correlate is developmental evaluation (DE). DE uses complexity-science concepts to inform an iterative approach to evaluation centred on innovation: the discovery of something new (or the adaptation of something into something else) and the application of that knowledge to problem solving.

Indeed, the speakers today positioned design thinking as a means of problem solving.

Evaluation, at least DE, is about problem solving: collecting data as a form of feedback to inform the next iteration of decision-making. It is also a form of evaluation that is intimately connected to program planning.

What design thinking offers is a way to extend that planning so as to optimize opportunities for feedback, new information, participation, and creative interaction. Design thinking approaches, like today's workshop, also focus on people's felt needs and experiences, not just their ideas. In our session, six audience members were recruited to play three facets of a store clerk and three facets of a customer: the rational, emotional, and executive mind of each. A customer comes looking for a solution to a home improvement/repair problem, not sure of what she needs, while the store clerk tries to help.

What this design-oriented approach does is greatly enhance participants' sense of the whole: the needs, desires, and fears both parties are dealing with, not just the executive or rational elements. More importantly, this strategy looks at how these different components might interact by simulating a condition in which they might play out. Time didn't allow us to explore what might have happened had we NOT done this and just designed an evaluation to capture the experience, but I can confidently say that the exercise got me thinking about all the different elements that could, and indeed SHOULD, be considered when trying to understand and evaluate an interaction.

If design thinking isn't a core competency of evaluation, perhaps we should consider making it one.