
Design and Design Thinking are foundational pathways to change-making and innovation. But how do we know that what we’ve created is doing what we think it is, and whether it’s making a positive difference?
Design and Design Thinking comprise the bedrock of good innovation. Design is the intentional creation of things to serve a purpose, and like anything we do, it can be done well or poorly. Design Thinking represents the ways of approaching the art and science of making things for a purpose and might include tools, techniques, perspectives, and a stance.
When we innovate, we’re designing something. So it goes that if we want to innovate well, we must design well.
How do we know? It’s time to bring in evaluation and see how it fits. From here on, I’ll refer to design as a broader practice that includes design thinking; I won’t treat them separately.
Evaluation Goals and Design
Evaluation systematically examines the processes, procedures, outputs, outcomes, and impact of products, services, and policies on people and systems. That’s a mouthful. Basically, evaluation tells the story of what you do and what it achieves.
In the design context, evaluation tells the story of making things and the impact of those things on the world. Alas, evaluation is also the most neglected aspect of design; it’s rarely given the attention it deserves. Designers are not often rewarded for evaluation and don’t typically get trained in evaluation methods. They are given a brief, and if they do their job well, their client is happy. That’s the metric.
Yet that’s only part of the story. When we’re working on social issues, complex policy or systems matters, or projects involving many stakeholders, there is much more at stake, and the means to assess design’s role and influence are more nuanced and complicated.
Evaluation looks at process (the journey), outputs (what comes from the journey), outcomes (what is affected by the journey), and impact (the implications of our design, both within and beyond the product itself, including its contribution to a larger change).
The Design Journey and Thinking

Design Thinking is what we use to conceive of our products and to guide our process of design. This is the design journey. The benefits of engaging with design thinking go beyond the outcomes and include a variety of outputs. For evaluation, I speak of seven questions we can ask of design thinking:
- What do people learn in the process of engaging in design thinking?
- What new skills do people acquire, develop, or refine through design thinking?
- How are the lessons from engaging in design thinking applied to subsequent products?
- What is the effect of design thinking on the mindset of those involved in a design-oriented project?
- How does the (co-)design process influence team development, cohesion, creativity, and innovation performance?
- What role does design thinking play in shaping the innovation culture (e.g., creation, execution, delivery, and evaluation) within an organization?
- How does design thinking contribute to the implementation of innovations?
We can also ask questions about the approach that is used. Design Thinking is not a singular thing; rather, it follows a process that is articulated through many different models. Adherence to (and noted deviations from) the model is another evaluative metric. The degree of fidelity to the model (which is not required but can be instructive) can be important when implementing design thinking across an organization or context. It can allow us some means to compare how and whether things are implemented, to what degree, and to what effect.
Outputs, Outcomes and Impact
There is no standardized measurement or agreed-upon process for evaluating design thinking, and thus no metrics that are widely regarded as standard. This is not a bad thing: the flexibility that comes with the lack of standardization is what allows design thinking to adapt to different situations. The cost, however, is that clear, comparable metrics are hard to come by.
Design thinking processes often have qualitative outputs and are typically intended to address complex problems, so measuring the outcomes can be challenging. However, there are several ways to evaluate the results of a design thinking process, combining both quantitative and qualitative methods:
- User Satisfaction: This can be measured through surveys, interviews, and user testing sessions. High user satisfaction generally indicates that the design thinking process has resulted in a product, service, or solution that meets the user’s needs and expectations.
- User Engagement: Metrics like usage frequency, session length, bounce rate, etc., can provide information on how well the solution is engaging users. Higher engagement often indicates a more successful design thinking process.
- Rate of Adoption: This is particularly relevant for new products or services. A higher rate of adoption can indicate that the design thinking process was successful in creating a solution that meets the users’ needs.
- Net Promoter Score (NPS): This widely-used metric can provide a quick snapshot of how likely users are to recommend your product or service to others, which can be a strong indicator of overall user satisfaction.
- Return on Investment (ROI): This is a clear quantitative measure. It can be challenging to attribute ROI directly to design thinking, but you can compare ROI before and after the implementation of solutions generated through design thinking (a simple calculation sketch covering NPS and ROI follows this list).
- Time to Market: Design thinking can often help teams to iterate and arrive at successful solutions more quickly. Therefore, a reduced time to market can be a sign of a successful design thinking process.
- Innovation Metrics: This can include the number of new ideas generated, the number of prototypes developed, and the number of fully implemented prototypes. The prototype death rate can be a useful metric. The Innovation Implementation Index is another.
- Qualitative Feedback: This includes user, stakeholder, and team member feedback. While it’s not a quantitative metric, this feedback can provide valuable insight into the success of the design thinking process in meeting needs, fitting with strategic priorities, and perceived outcomes.
- Usability Metrics: This can include error rates, success rates, time to complete a task, and learnability. These metrics can help measure how user-friendly and efficient the solution is.
- Employee Satisfaction and Engagement: Design thinking not only affects the products or services but also the employees involved in the process. Increased employee engagement, improved collaboration, and satisfaction can be indicative of a successful design thinking culture.
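To make two of these concrete, here is a minimal sketch of how NPS and a before-and-after ROI comparison might be computed. The survey responses and the cost and gain figures are hypothetical placeholders, not data from any real project.

```python
# Minimal sketch: Net Promoter Score and a before/after ROI comparison.
# All figures below are hypothetical placeholders.

def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

def roi(gain, cost):
    """Return on investment as a percentage: (gain - cost) / cost."""
    return 100 * (gain - cost) / cost

survey = [10, 9, 9, 8, 7, 6, 10, 9, 3, 8]  # "How likely are you to recommend us?"
print(f"NPS: {net_promoter_score(survey):+.0f}")                    # +30
print(f"ROI before design thinking: {roi(120_000, 100_000):.0f}%")  # 20%
print(f"ROI after design thinking:  {roi(160_000, 100_000):.0f}%")  # 60%
```

The arithmetic is trivial; the evaluative work lies in deciding, up front, which of these numbers you will treat as evidence of success.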
The choice of measurements will depend on the goals of your design thinking process and the nature of your project. It’s crucial to define what success looks like at the beginning of your project so you know what to measure along the way.
The return on investment for design can be tremendous if we evaluate what we do and how we do it. Evaluation can also be a source of innovation in itself and can support a culture of learning by focusing our curiosity and providing a feedback mechanism to channel what we ask into what we do. When we take a design-driven approach to evaluation, we can make these benefits even more pronounced.
The best designers will soon be those who evaluate, too.
Connect design thinking with strong evaluation and you have the foundation for a culture of innovation. If you want to create that within your team or organization, let’s grab a coffee and chat about how I can help you.
Image credits: Jo Szczepanska on Unsplash and Nigel Hoare on Unsplash
Thank you for this interesting and critical discussion about design thinking and evaluation. I think there is a vital concept missing when determining metrics and their meaning: any evaluation starts with who asked and why, as is most eloquently discussed by Michael Patton in his book Utilization-Focused Evaluation. In assessing a design thinking process itself (a contained exercise with a start and an end that results in a defined output/outcome), you are immediately challenged by the idea of measuring the process and not just the output/outcome. Most of the metrics involving customer satisfaction or the quality of the end product are not truly connected to the design process itself. Many a great output was painful to create; see any of the literature on the early days at Pixar. To me, the highest value to be gained from an evaluation of the design thinking process comes from treating those involved in the process, or in a related role in the internal organization, as the targeted users with a defined use, such as determining high-value skill sets, identifying resource gaps, increasing employee engagement, or shortening recovery time.
Therefore, evaluation questions pertaining to the design thinking process would focus on the relationships between the designers, the actual design, and the design execution, not necessarily the output/outcome. The design thinking process can produce outputs/outcomes deemed failures (a key principle in innovation), but how the designers respond next is a reflection of a successful design thinking process: the idea of doing it again is embraced, not feared. Essentially, the evaluation metrics or questions uncover or document the conditions for producing high-functioning design teams, balanced with the end results expected by the organization.
What conditions facilitated the process itself? What roles were needed within the team, or what skills were important for a single designer? Assessing whether team members demonstrate continual learning, resulting in a design thinking process that is not static, could also be an indicator of a successful design thinking process.
At a conference I attended, a presenter used an interesting data collection method to assess the internal processes during a project. Each day, each team member put an emoji on their calendar reflecting their feelings about what happened that day. At the end of the project, the ratings from all the calendars were compiled without attribution, then aligned with different parts of the project. So, if there were mostly sad faces during week two, what was happening in the project then? Those connections could then be explored to look at potential causal factors or other relationships and impacts on the process.
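A minimal sketch of how such ratings might be compiled; the emoji-to-score mapping and the sample entries are illustrative assumptions, not what the presenter used.

```python
# Illustrative sketch: pool anonymous daily emoji ratings and average them
# by project week. The emoji-to-score mapping is an assumption.
from collections import defaultdict

SCORES = {"😀": 2, "🙂": 1, "😐": 0, "🙁": -1, "😢": -2}

# (week, emoji) pairs gathered from all team members' calendars, unattributed
entries = [
    (1, "😀"), (1, "🙂"), (1, "🙂"),
    (2, "😢"), (2, "🙁"), (2, "😐"),
]

by_week = defaultdict(list)
for week, emoji in entries:
    by_week[week].append(SCORES[emoji])

for week in sorted(by_week):
    scores = by_week[week]
    print(f"Week {week}: mean mood {sum(scores) / len(scores):+.2f}")
# A dip (here, week 2) flags a project phase worth exploring for causes.
```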
But to end where I started, it all depends on defining the specific thing to be evaluated and on who will use the results, for what purpose. Cameron, I love your website and I have learned a lot from your insightful musings. Looking forward to more.
Anna, thank you for your comments and remarks about your experience with different styles of evaluation. So often the most elegant evaluations are those that are simple. Emojis are popular because they translate well across contexts and focus on feelings, not just thoughts. I agree with you that any evaluation must be context-bound and designed for that context (the who, what, where, when, why, and for what purpose); when we forget those basics, we get into trouble. I appreciate you taking the time to share your thoughts and kind words.