Tag: American Evaluation Association

design thinking, evaluation

Design-driven Evaluation

Fun Translates to Impact

A greater push to use evaluation data in decision-making and to support innovation generates little value if the evaluations themselves have little usefulness in the first place. A design-driven approach to evaluation is a means to transform utilization into both present and future utility.

I admit to being puzzled the first time I heard the term utilization-focused evaluation. What good is an evaluation if it isn’t utilized, I thought? Why do an evaluation in the first place if not to have it inform some decisions, even if just to assess how past decisions turned out? Experience has taught me that this happens more often than I ever imagined, and evaluation can simply be an exercise in ‘faux’ accountability: a checking off of a box to say that something was done.

This is why utilization-focused evaluation (U-FE) is another invaluable contribution to the field of practice by Michael Quinn Patton.

U-FE is an approach to evaluation, not a method. Its central focus is engaging the intended users in the development of the evaluation and ensuring that users are involved in decision-making about the evaluation as it moves forward. It is based on the idea (and research) that an evaluation is far more likely to be used if grounded in the expressed desires of the users and if those users are involved in the evaluation process throughout.

This approach generates a participatory activity chain that can be adapted for different purposes, as we’ve seen in evaluation approaches and methods such as developmental evaluation, contribution analysis, and principles-focused evaluation.

Beyond Utilization

Design is the craft, production, and thinking associated with creating products, services, systems, or policies that have a purpose. In service of this purpose, designers explore multiple issues associated with the ‘user’ and the ‘use’ of something: what are the needs, wants, and uses of similar products? Good designers go beyond simply asking about these things; they measure, observe, and conduct design research ahead of actually creating something rather than taking things at face value. They also attempt to see beyond what is right in front of them to possible uses, strategies, and futures.

Design work is both an approach to a problem (a thinking & perceptual difference) and a set of techniques, tools, and strategies.

Utilization can run into problems when we take the present as an example of the future. Steve Jobs didn’t ask users for ‘1,000 songs in their pockets‘, nor was Henry Ford told he needed to invent the automobile rather than give people faster horses (even if the oft-quoted line about this was a fabrication). The impact of their work came from being able to see possibilities and orchestrate what was needed to make those possibilities real.

Utilization of evaluation is about making what exists fit better for use by taking the user’s perspective into consideration. A design-driven evaluation looks beyond this to what could be. It also considers how what we create today shapes the decisions and norms that come tomorrow.

Designing for Humans

Alongside the statement falsely attributed to Henry Ford about people wanting faster horses is a more universal false statement made by innovators and students alike: “I love learning.” Many humans love the idea of learning or the promise of learning, but I would argue that very few love learning with the sense of absoluteness that the phrase conveys. Much of our learning comes from painful, frustrating, prolonged experiences and is sometimes boring, covert, and confusing. It can be delayed in how it manifests itself, with its true effects not felt until long after the ‘lesson’ is taught. Learning is, however, useful.

A design-driven approach seeks to work with human qualities and design for them. For example, a utilization-focused evaluation approach might yield a process that involves regular gatherings to discuss an evaluation, or reports that use a particular language, style, and layout to convey the findings. These are what the users, in this case, are asking for and what they see as making evaluation findings appealing, and so they are built into the process.

Except, what if the regular gatherings don’t involve the right people, are difficult to set up and thus ignored, or when those people show up they are distracted with other things to do (because this process adds another layer of activity into a schedule that is already full)? What if the reports that are generated are beautiful, but then sit on a shelf because the organization doesn’t have a track record of actually drawing on reports to inform decisions despite wanting such a beautiful report? (We see this with so many organizations that claim to be ‘evidence-based’ yet use evidence haphazardly, arbitrarily, or don’t actually have the time to review the evidence).

What we get are things created with the best of intentions for use, but not based on the actual behaviour of those involved. Asking about that behaviour and designing for it is not just an approach; it’s a way of doing an evaluation.

Building Design into Evaluation

There are a couple of approaches to introducing design into evaluation. The first is to develop certain design skills, such as design thinking and applied creativity. This work is being done as part of the Design Loft Experience workshop held at the annual American Evaluation Association conference. The second, more substantive approach is to incorporate design methods into the evaluation process from the start.

Design thinking has become popular as a means of expressing aspects of design in ways that have been taken up by evaluators. It is often characterized by a playful approach to generating new ideas and then prototyping those ideas to find the best fit. Lego, play dough, markers, and sticky notes are some of the tools of the trade. Design thinking can be a powerful way to expand perspectives and generate something new.

Specific techniques, such as those taught at the AEA Design Loft, can provide valuable ways to re-imagine what an evaluation could look like and support design thinking. However, as I’ve written here, there is a lot of hype, over-selling, and general bullshit being spouted in this realm, so proceed with some caution. Evaluation can help design thinking just as much as design thinking can help evaluation.

What Design-Driven Evaluation Looks Like

A design-driven evaluation takes as its premise a few key things:

  • Holistic. Design-driven evaluation is a holistic approach that extends thinking about utility to everything from the consultation process and engagement strategy to instrumentation, dissemination, and discussions on use. Good design isn’t applied to only one part of the evaluation, but to the entire thing, from process to products to presentations.
  • Systems thinking. It also draws on systems thinking in that it expands the conversation about evaluation use beyond the immediate stakeholders to consider other potential users and their positions within the program’s system of influence. Thus, a design-driven evaluation might ask: who else might use or benefit from this evaluation? How do they see the world? What would use mean to them?
  • Outcome and process oriented. Design-driven evaluations are directed toward an outcome (although that may be altered along the way if used in a developmental manner), but designers are agnostic about the route to that outcome. An evaluation must maintain integrity in its methods, but it must also be open to adaptation as needed to ensure that the design is optimal for use. Attending to the process of designing and implementing the evaluation is an important part of this kind of evaluation.
  • Aesthetics matter. This is not about making things pretty; it is about making things attractive. It means creating evaluations that are not ignored. This isn’t about gimmicks, tricks, or misrepresenting data; it’s about considering, from the outset, what will draw and hold attention in both form and function. One of the best ways is to create a meaningful engagement strategy for participants from the outset and to involve people in the process in ways that fit their preferences, availability, skill set, and desires, rather than as tokens or simply as ‘role players.’ It’s about being creative in generating products that fit with what people actually use, not just what they want or think a good evaluation is. This might mean doing a short video or producing a series of blog posts rather than writing a report. Kylie Hutchinson has a great book on innovative reporting for evaluation that can expand your thinking about how to do this.
  • Inform Evaluation with Research. Research is not just meant to support the evaluation, but to guide the evaluation itself. Design research is about looking at what environments, markets, and contexts a product or service is entering. Design-driven evaluation means doing research on the evaluation itself, not just for the evaluation.
  • Future-focused. Design-driven evaluation draws data from social trends and drivers associated with the problem, situation, and organization involved in the evaluation to not only design an evaluation that can work today but one that anticipates use needs and situations to come. Most of what constitutes use for evaluation will happen in the future, not today. By designing the entire process with that in mind, the evaluation can be set up to be used in a future context. Methods of strategic foresight can support this aspect of design research and help strategically plan for how to manage possible challenges and opportunities ahead.

Principles

Design-driven evaluation also works well with principles-focused evaluation. Good design is often grounded in key principles that drive the work. One of the most salient of these is accessibility: making what we do accessible to those who can benefit from it. This extends to considering what it means to create things that are accessible to those with visual, hearing, or cognitive impairments (or, when doing things in physical spaces, making them available to those with mobility issues).

Accessibility is also about making information understandable: avoiding unnecessary jargon, using the appropriate language for each audience, using plain language when possible, and accounting for literacy levels. It’s also about designing systems of use for inclusiveness. This means going beyond things like creating an executive summary for a busy CEO (which can over-simplify certain findings) to designing space within that leader’s schedule and work environment to make time to engage with the material in a manner that makes sense for them. This might be a different format of document, a podcast, a short interactive video, or even a walking-meeting presentation.

There are also many principles of graphic design and presentation that can be drawn on (which will be expanded on in future posts). Principles for service design, presentations, and interactive use are all available and widely discussed. What a design-driven evaluation does is consider what these might be and build them into the process. While a design-driven evaluation is not necessarily a principles-focused one, it can be, and the two are very close.

This is the first in a forthcoming series of posts on design-driven evaluation. It’s a starting point, far from the end. By considering how we create not only our programs but also their evaluation from the perspective of a designer, we can change the way we think about what utilization means for evaluation and think even more about its overall experience.

evaluation, social innovation

E-Valuing Design and Innovation


Design and innovation are often regarded as good things (when done well), even if a pause to reflect might find little to explain what those things are. Without a sense of what design produces, what innovation looks like in practice, and an understanding of the journey to the destination, are we delivering false praise and hope, and failing to deliver real, sustainable change?

What is the value of design?

If we are claiming to produce new and valued things (innovation) then we need to be able to show what is new, how (and whether) it’s valued (and by whom), and potentially what prompted that valuation in the first place. If we acknowledge that design is the process of consciously, intentionally creating those valued things — the discipline of innovation — then understanding its value is paramount.

Given the prominence of design and innovation in the business and social sector landscape these days, one might guess that we have a pretty good sense of what the value of design is for so many to be interested in the topic. If you did guess that, you’d have guessed incorrectly.

‘Valuating’ design, evaluating innovation

On the topic of program design, current president of the American Evaluation Association, John Gargani, writes:

Program design is both a verb and a noun.

It is the process that organizations use to develop a program.  Ideally, the process is collaborative, iterative, and tentative—stakeholders work together to repeat, review, and refine a program until they believe it will consistently achieve its purpose.

A program design is also the plan of action that results from that process.  Ideally, the plan is developed to the point that others can implement the program in the same way and consistently achieve its purpose.

One of the challenges with many social programs is that it isn’t clear what the purpose of the program is in the first place. Or rather, the purpose and the activities might not be well aligned. One example is the rise of ‘kindness meters‘, the repurposing of old coin parking meters to collect money for certain causes. I love the idea of offering a pro-social means of getting small change out of my pocket and having it go to a good cause, yet some have taken the concept further and suggested it could be a way to redirect money to the homeless and thus reduce the number of panhandlers on the street. A recent article in Maclean’s Magazine profiled this strategy, including its critics.

The biggest criticism of all is that there is a very weak theory of change to suggest that meters and their funds will get people out of homelessness. Further, there is much we don’t know about this strategy: 1) how was it developed? 2) was it prototyped, and where? 3) what iterations were performed, and is this just the first? 4) whose needs was it designed to address? and 5) what needs to happen next with this design? This is an innovative idea to be sure, but the question is whether it’s a beneficial one or not.

We don’t know, and what evaluation can do is provide the answers and help ensure that an innovative idea like this is supported in its development so we can determine whether it ought to stay, go, or be transformed, and what we can learn from the entire process. Design without evaluation produces products; design with evaluation produces change.


A bigger perspective on value creation

The process of placing or determining value* of a program is about looking at three things:

1. The plan (the program design);

2. The implementation of that plan (the realization of the design on paper, in prototype form and in the world);

3. The products resulting from the implementation of the plan (the lessons learned throughout the process; the products generated from the implementation of the plan; and the impact of the plan on matters of concern, both intended and otherwise).

Prominent areas of design such as industrial, interior, fashion, or software design are principally focused on an end product. Most people aren’t concerned about the various lamps their interior designer didn’t choose in planning their new living space if they are satisfied with the one they did.

A look at the process of design — the problem finding, framing, and solving that comprises the heart of design practice — finds that the end product is actually the last of a long line of sub-products, and that, if designers are paying attention and reflecting on their work, they are learning a great deal along the way. That learning and those sub-products matter greatly for social programs innovating and operating in human systems. This may be the real impact of the programs themselves, not the products.

One reason this is important is that many of our program designs don’t actually work as expected, at least not at first. Indeed, a look at innovation in general finds that about 70% of the attempts at institutional-level innovation fail to produce the desired outcome. So we ought to expect that things won’t work the first time. Yet, many funders and leaders place extraordinary burdens on project teams to get it right the first time. Without an evaluative framework to operate from, and the means to make sense of the data an evaluation produces, not only will these programs fail to achieve desired outcomes, but they will fail to learn and lose the very essence of what it means to (socially) innovate. It is in these lessons and the integration of them into programs that much of the value of a program is seen.

Designing opportunities to learn more

Design has a glorious track record of accountability for its products in terms of satisfying its clients’ desires, but not its process. Some might think that’s a good thing, but in the area of innovation that can be problematic, particularly where there is a need to draw on failure — unsuccessful designs — as part of the process.

In truly sustainable innovation, design and evaluation are intertwined. Creative development of a product or service requires evaluation to determine whether that product or service does what it says it does. This is of particular importance in contexts where the product or service may not have a clear objective, or may have multiple possible objectives. Many social programs are true experiments to see what might happen, undertaken as an alternative to doing nothing. The ‘kindness meters’ might be such a program.

Further, there is an ethical obligation to look at the outcomes of a program lest it create more problems than it solves or simply exacerbate existing ones.

Evaluation without design can result in feedback that isn’t appropriate, isn’t integrated into future developments or iterations, or is decontextualized. Evaluation also ensures that the work that goes into a design is captured and understood in context, irrespective of whether the resulting product was a true ‘innovation’. Another reason is that, particularly in social contexts, the resulting product or service is not an ‘either/or’ proposition. There may be many elements of a ‘failed design’ that can be useful and incorporated into the final, successful product, yet if we view the design as a dichotomous ‘success’ or ‘failure’, we risk losing much useful knowledge.

Further, great discovery is predicated on incremental shifts in thinking, developed in a non-linear fashion. This means it is fundamentally problematic to ascribe a value of ‘success’ or ‘failure’ to something from the outset. In social settings, where ideas are integrated, interpreted, and reworked the moment they are introduced, the true impact of an innovation may take a longer view to determine and, even then, only partly.

Much of this depends on what the purpose of innovation is. Is it the journey or is it the destination? In social innovation, it is fundamentally both. Indeed, it is also predicated on a level of praxis — knowing and doing — that is what shapes the ‘success’ in a social innovation.

When design and evaluation are excluded from each other, both are lesser for it. This year’s American Evaluation Association conference is focused boldly on the matter of design. While much of the conference will concentrate on program design, the emphasis is still on the relationship between what we create and the way we assess the value of that creation. The conference will provide perhaps the largest forum yet for discussing the value of evaluation for design, and that in itself provides much value.

*Evaluation is about determining the value, merit and worth of a program. I’ve only focused on the value aspects of this triad, although each aspect deserves consideration when assessing design.

Image credit: author

evaluation, knowledge translation

A Call to Evaluation Bloggers: Building A Better KT System

Time To Get Online...

Are you an evaluator and do you blog? If so, the American Evaluation Association wants to hear from you. This CENSEMaking post features an appeal to those who evaluate, blog and want to share their tips and tricks for helping create a better, stronger KT system. 

Build a better mousetrap and the world will beat a path to your door — attributed to Ralph Waldo Emerson

Knowledge translation in 2011 is a lot different than it was before we had social media, the Internet, and direct-to-consumer publishing tools. We now have the opportunity to communicate directly with an audience and share our insights in ways that go beyond technical reports and peer-reviewed publications and come closer to sharing our tacit knowledge. Blogs have become a powerful medium for doing this.

I’ve been blogging for a couple of years and quite enjoy it. As an evaluator, designer, researcher, and health promoter, I find it allows me to take different ideas and explore them in ways that more established media do not. I don’t need to have the idea perfect, or fully formed, or relevant to a narrow audience. I don’t need to worry about what my peers or my editor think, because I serve as peer reviewer, editor, and publisher all at the same time.

I originally started blogging to share ideas with students and colleagues — just small things about the strange blend of topics I engage in that many people don’t know about or understand, or want to know more about. Concepts like complexity, design thinking, developmental evaluation, and health promotion can get fuzzy or opaque for those outside of those fields.

Blogs enable us to reach an audience directly and provide a means of adaptive feedback on novel ideas. Using the comments, visit statistics, and direct messages sent to me by readers, I can gain some sense of which ideas are being taken up and which ones resonate. That enables me to tailor my messages and amplify the parts that are of greater utility to a reader, increasing the likelihood that a message will be taken up. For CENSEMaking, the purpose is more self-motivated writing than trying to assess the “best” messages for the audience; however, I have a series of other blogs that I use for projects as a KT tool. These are, in many cases, secured and by invitation only to the project team and stakeholders, but they still look and feel like any normal blog.

WordPress (this site) and Posterous are my favorite blogging platforms.

As a KT tool, blogs are becoming more widely used. Sites like Research Blogging are large aggregations of blogs on research topics. Others, like this one, are designed for certain audiences and topics — even KT itself, like the KTExchange from the Research Into Action initiative at the University of Texas and MobilizeThis! from the Research Impact Knowledge Mobilization group at York University.

The American Evaluation Association has an interesting blog initiative led by AEA’s Executive Director Susan Kistler called AEA365, which is a tip-a-day blog for evaluators looking to learn more about who and what is happening in their field. A couple of years ago I contributed a post on using information technology and evaluation and was delighted at the response it received. So it reaches people. It’s for this reason that AEA is calling out to evaluation bloggers to contribute to the AEA365 blog with recommendations and examples for how blogging can be used for communications and KT. AEA365 aims to create small-bite pockets of information that are easily digestible by its audience.

If you are interested in contributing, the template for the blog is below, with my upcoming contribution to the AEA365 blog posted below that.

By embracing social media and the power to share ideas directly (and doing so responsibly), we have a chance to come closer to realizing the KT dream: putting more effective, useful knowledge into the hands of those who can use it, faster, and engaging those who are most interested and able to use that information more efficiently and humanely.

Interested in submitting a post to the AEA365 blog? Contact the AEA365 curators at aea365@eval.org.

Template for aea365 Blogger Posts (see below for an example)

[Introduce yourself by name, where you work, and the name of your blog]

Rad Resource – [your blog name here]: [describe your blog, explain its focus including the extent to which it is related to evaluation, and tell about how often new content is posted]

Hot Tips – favorite posts: [identify 3-5 posts that you believe highlight your blogging, giving a direct link and a bit of detail for each (see example)]

  • [post 1]
  • [post 2]
  • Etc.

Lessons Learned – why I blog: [explain why you blog – what you find useful about it and the purpose for your blog and blogging. In particular, are you trying to inform stakeholders or clients? Get new clients? Provide a public service? Help students?]

Lessons Learned: [share at least one thing you have learned about blogging since you started]

Remember – stay under 450 words total please!

My potential contribution (with a title I just made up): Cameron Norman on Making Sense of Complexity, Design, Systems and Evaluation: CENSEMaking

Rad Resource – [CENSEMaking]: CENSEMaking is a play on the name of my research and design studio consultancy and on the concept of sensemaking, something evaluators help with all the time. CENSEMaking focuses on the interplay of systems and design thinking, health promotion, and evaluation, and weaves together ideas I find in current social issues, reflections on my practice, and the evidence used to inform it. I aspire to post on CENSEMaking 2-3 times per week, although because it is done in a short-essay format, finding the time can be a challenge.

Hot Tips – favorite posts:

  • What is Developmental Evaluation? This post came from a meeting of a working group with Michael Quinn Patton and was fun to write because the original exercise that led to the content (described in the post) was so fun to do. It also provided an answer to a question I get asked all the time.
  • Visualizing Evaluation and Feedback. I believe that the better we can visualize complexity and the more feedback we provide, the greater our opportunities for engaging others and the more evaluations will be utilized. This post was designed to provoke thinking about visualization and illustrate how it’s been creatively used to present complex data in interesting and accessible ways. My colleague and CENSE partner Andrea Yip has tried to do this with a visually oriented blog on health-promoting design, which provides some other creative examples of ways to make ideas more appealing and data feel simpler.
  • Developmental Design and Human Services. Creating this post has sparked an entire line of inquiry for me on bridging DE and design that has since become a major focus for my work. This post became the first step in a larger journey.

Lessons Learned – why I blog: CENSEMaking originally served as an informal means of sharing my practice reflections with students and colleagues, but has since grown to serve as a tool for knowledge translation to a broader professional and lay audience. I aim to bridge the sometimes foggy world that things like evaluation inhabit — particularly developmental evaluation — and the lived world of the people whom evaluation serves.

Lessons Learned: Blogging is a fun way to explore your own thinking about evaluation and make friends along the way. I never expected to meet so many interesting people because they reached out after reading a blog post of mine or made a link to something I wrote. This has also led me to learn about so many other great bloggers, too. Give a little, get a lot in return and don’t try and make it perfect. Make it fun and authentic and that will do.

___

** Photo by digitalrob70 used under Creative Commons License from Flickr