
Measurement can be a trick that deflects us from what matters most, which often can’t be measured.
I recently saw a LinkedIn post trending in the category #designthinking, all about the importance of measurement. As someone who believes strongly in the role of evaluation within design thinking, I clicked, hoping to find a peer advocating for thoughtful, scholarly approaches to understanding design thinking’s impact. Instead, the author leaned on the language of standards, accountability, benchmarks, and rules, anchored in her belief that what “gets measured, gets done.”
Our measures guide our focus (for example, people will study for the test, not for the learning, when they know they are being assessed), so it matters a great deal what we choose to measure.
Lies, Statistics, and Being WEIRD
Lies, Damned Lies, and Statistics — Unattributed
Statistics can be used to gloss over many things. When we pursue an evaluation agenda based on measures and metrics, we risk stripping away context and the meaning attached to it, because we only get answers to the questions we ask. When learning about something novel, we may not know what questions to ask in the first place. Or we simply ask the same questions we asked before.
That presents another risk: we ask the questions that fit the biases we bring. Psychologists have begun to acknowledge how much of their research is WEIRD (pdf): based on Western (and white), Educated, Industrialized, Rich, and Democratic samples. Measurement constrains our understanding, and we need to be very careful to know what constraints we’re imposing. The world is far from WEIRD, and our data needs to reflect that.
Just consider the big issue of measuring racial identity. This is not, pun intended, a black-and-white issue; it’s incredibly complex. In asking a question about race, we force individuals to self-select into a set of categories with which they may or may not identify. Race is only part of one’s identity, and racial data is often conflated with geographic data (e.g., ‘African’ vs. Black), citizenship (e.g., Asian vs. Chinese), and culture, all of which carry varying levels of true fit and false distinction.
The notion of labeling oneself, or being labeled by others, by skin colour is enormously problematic. At the same time, when we don’t make these distinctions, we risk ignoring the real disparities in society attributable to racial bias and social inequity. This has implications for everyone, as we are seeing with COVID-19. Our choices of measurement allow us to tell lies and damned lies with our statistics, both in what we report and in what we can’t, so we had better be careful with what we ask and how we ask it.
Design thinking is no different. And because what we design shapes our world, the choices we make have great weight.
Ethics of Choice
Just as qualitative data is beset with issues of selective disclosure and social desirability bias, quantitative measures can reduce contextual information to the point of obscuring real meaning and value. Without that context, we commit the errors of omission and commission born of WEIRDness.
Returning to our first point: what are the implications of imposing standards, benchmarks, accountability, and rules to design thinking?
First, who is creating those standards? Are they based on WEIRD criteria?
Are our benchmarks based on what’s come before? If so, that’s likely a terrible place to start because, until recently, most research on design thinking was either non-existent or of poor quality.
Who are we accountable to? Is it our client? Or is it — as Mike Monteiro argues when it comes to digital design — the public?
Who is making up the rules? By my assessment, it’s not people with disabilities. It’s not minority voices. It’s the C-suite, not the shop floor; Silicon Valley, not Main Street.
Right now, it’s not clear we can meaningfully measure design thinking under these rules. Maybe we need different ways to evaluate it.
Before we jump into measuring things, let’s be clear about why we are doing it, for whom, and to what end. That might answer the question.