Design and evaluation are a two-way street, and both fields of practice are wise to learn each other's language.
Design is what shapes our products and services and brings them to life. Great design is transformative.
Evaluation is the means to understand what is transformed, how it takes place, and what impact that transformation has on the world.
It therefore makes sense that the two fields get to know each other and speak each other's language. As I’ve argued before, designers need to know evaluation and evaluative thinking. Here, I make the case that evaluators need to understand design and design thinking.
This is not a claim that designers must become evaluators and vice versa, but without the language and understanding of each, both fields are lesser (and increasingly less impactful as a result).
Design (Thinking) for Evaluators
One of the biggest issues professional evaluators face is that they are often tasked with understanding value from a program or product that is already designed. So the design flaws in service, quality, craftsmanship, and aesthetics are already baked in. In some cases, evaluators aren’t even aware of what’s ‘inside’ the design until they are long into their evaluation planning.
Without an understanding of what makes a good design and how that design comes about, evaluators are substantially limited in explaining the effects (expected and otherwise) and how those effects relate to recommendations or standards of practice. How can we say that a product is performing well against some key performance indicator (KPI) if we aren’t clear on what the design is intended to do and the degree to which it is ‘fit for purpose’?
The issue is even more salient for those who are seeking to understand innovation — learning transformed into value through doing something new. This is the domain of what is known as Developmental Evaluation (DE). Most DE is done as a means to assist the development of a program (evolution, adaptation, exaptation, growth, and contraction over time), yet it is still conceived of as separate from the actual design process, even when it is intended to provide feedback on what is designed.
While some argue that evaluators need to remain detached from the product or service they are evaluating as a means of maintaining integrity, this distinction is largely meaningless when a product is being created based on feedback in a novel situation* (innovation). It is the feedback and the situation that are interdependent in innovation contexts, and it is evaluative thinking — done well or poorly — that shapes it.
Having evaluators at the table means it will be done well.
*As so many social, organizational, and political contexts are changing faster and in a more integrated manner, I assert that innovation can no longer be considered a fringe specialty area for evaluation; rather, it is moving closer to the centre of the field's areas of practice.
Design-driven evaluation (DDE) is a broader approach that views evaluation as part of the design itself. Rather than anchoring feedback outside the design process, DDE seeks to embed evaluation inside it. It’s about creating a systematic means of providing feedback to designers on their process, prototypes and implementations. When done well, it provides the lifeblood for designers and instills discipline, timeliness and quality into the design process rather than leaving it to more arbitrary or haphazard feedback systems.
For this to work, evaluators need to understand designers and their language (and processes).
Expanding our thinking of developmental evaluation into developmental design is one way. Learning about design thinking as a framework (not a rule or method) for understanding how products and services are created is another.
A third way, and part of a bigger conversation, is getting the field of evaluation to consider the notion of value as central to its core. The field of professional evaluation is strangely quiet on the matter of value, focusing more on things like merit, worth, and significance without naming value as an outcome or process.
What we evaluate is really about understanding what people value and how that value is created, and that is a design issue.