Utility in Design-Driven Evaluation

Design-driven evaluation focuses attention on what organizations actually use in their decision-making and learning to support the creation of their products, the delivery of their services, and the overall quality of their work. While ultimately practical, this focus on utility also introduces some difficult conversations about what organizations truly value, not just what they say they value.

To say something is design-driven is to imply that its emphasis is on the process of creating something of value for someone. In this case, that value comes through evaluation and what it means for those who use it. Value here is framed as utility, asking: what is it for (and for whom, and in what context)?

These are the questions that designers ask of any service, product, or structure they turn their attention to. Designers might ask: “what are we hiring [the thing being designed] to do?” These are simple questions that can provoke much discussion and can transform the way we approach the creation and maintenance of something moving forward.

A different take is to ask people to describe what they already do (and what they want) to frame the discussion of how to approach the design. This can lead us into the trap of the present moment: it keeps people framing their work in the language that supports their present identity and the conceptions (and misconceptions) associated with it, not necessarily where they want to go.

Evidence-based?

You show me an evidence-based human service organization and I’ll show you one that is lying to you (and maybe to themselves), is in deep denial, or is focused on a narrow, established scope of practice. Very few fit the last category (but they do exist), which leaves us with the unsettling reality that we are likely dealing in some level of bullshit: statements made with little regard for the truth, intended to impress others and themselves (see Harry Frankfurt’s work on the subject).

This is not to say that these organizations don’t use evidence at all or don’t care about its application, but that there are so many areas within that scope of work that are not based on solid (or even superficial) evidence that to describe the whole as ‘evidence-based’ is an overreach at best and a lie at worst. The reasons for this deception are many, but among them is simply that there is not enough evidence available to inform many aspects of the work. It’s impossible to be truly evidence-based when dealing with areas of complexity, social innovation, or complex innovation.

Consider this: an organization seeks to develop an evidence-based program and spends weeks or months gathering and reviewing research. They may even collect some data of their own and synthesize the findings to inform an evidence-based recommendation. At play is the evidence for the program itself, the evidence to support the design of the program (converting evidence developed in one context into actionable structures, procedures, and plans in another), the evidence to support the implementation of the designed program in a new context, and the evidence generated through evaluation of the design, delivery, and outcomes associated with the program.

That is a lot of evidence to consider. I’ve never seen a program come even close to having evidence that reasonably fits all of these contexts, let alone strong evidence. Why? Because there are so many variables at play in the program context (e.g., design, delivery, fidelity) and in the process of evidence generation itself (e.g., design, data availability, analysis).

Utility means looking at what people actually use, not just what they say they use. To illustrate, I worked with an organization that proudly claimed to be both evidence-based and a learning organization. When I asked what evidence they used and how they learned, I was told with much more modest confidence that staff typically read “one or two” research articles per month, and that was it, in a highly volatile, transnational, multidisciplinary field of practice. They also said they engaged in reflective practice by writing up case reports, which, if completed at all, usually took up to four months to prepare after a visit to a particular site or event because of the other day-to-day activities of the organization.

This organization did their best, but that best wasn’t anywhere near enough if they truly wished to be a learning organization or evidence-based. Yet, because they insisted they were these things, they also insisted on an evaluation design that fit that narrative. They had not designed their organization to be evidence-based or a real learning organization. A design-driven approach would have developed an evaluation suited to their actual context and perhaps pushed them a little further toward being the organization they saw themselves to be.

Another Way: Fit for Purpose

Why bring up evidence-based decision-making? The reason has much to do with what makes design-driven evaluation different from other forms of evaluation (and even research). Design-driven evaluation is about generating evidence for use within specific contexts. It involves using design principles and strategies to uncover and understand those contexts before the evaluation is designed. It means designing not only the evaluation itself, but also the manner in which it produces its products and the means by which decisions are made based on those products.

It is about being fit-for-purpose.

Most published evidence is developed independent of the context in which it is to be used. That’s the traditional model for science: we learn things in one setting (maybe a lab), move them out into other settings (e.g., a clinic), do more trials, and eventually develop a body of evidence that is used to generalize to other settings. This works reasonably well for problems and issues that are simple or complicated in their structure.

As a situation involves ever greater complexity, that ability to translate from one setting or context to another breaks down. This complexity might also influence the purpose and expected outcomes of a program within a given context. For example, a community-based health promotion program may have a theory, even a program logic model, and goals, but it will need to consider neighbourhood design, differences in resident needs, local history, and the availability of other programs and resources. The purpose in one neighbourhood might be to provide a backstop to a local organization that is having financial problems, whereas in another neighbourhood it might be to provide a vehicle for local leaders to take action where there are no other alternatives.

Not Leaving Things to Chance

Developing a fit-for-purpose program is not something that should be left to chance, because chances are it won’t happen on its own. Good design improves the use, usability, and overall translation of knowledge. A look at how real evidence-based practice emerges shows that it comes down to the ways in which the design, intended or not, of the knowledge, the exchange opportunities, the relationships, and the systems come together.

Design-driven evaluation seeks to remedy one of the fundamental problems within the evidence translation process: the poor fit between the evaluation (its data, process, and focus) and the implementation of its findings. It’s about not leaving things to chance in the hope that someone will figure out how to apply the findings, overcome poor usability, persist through confusion, and still make good use of an evaluation.

Is this the system we want? Or could we do better? My answer is ‘no’ to the first and ‘yes’ to the second. Design-driven evaluations can be the means to get us to that ‘yes’ because, as things get more complicated and complex and the need for better data, improved decisions, and decisive action rises, we need to make sure we don’t leave doing better to chance.

If you’re interested in learning about and doing design-driven evaluation, contact Cense via this link.
