Design and innovation are often regarded as good things (when done well), even if a pause to reflect might reveal little about what those good things actually are. Without a sense of what design produces, what innovation looks like in practice, and an understanding of the journey to the destination, are we delivering false praise and false hope, and failing to deliver real, sustainable change?
What is the value of design?
If we are claiming to produce new and valued things (innovation) then we need to be able to show what is new, how (and whether) it’s valued (and by whom), and potentially what prompted that valuation in the first place. If we acknowledge that design is the process of consciously, intentionally creating those valued things — the discipline of innovation — then understanding its value is paramount.
Given the prominence of design and innovation in the business and social sector landscape these days, one might guess that we have a pretty good sense of what the value of design is, given how many people are interested in the topic. If you did guess that, you’d have guessed incorrectly.
‘Valuating’ design, evaluating innovation
On the topic of program design, the current president of the American Evaluation Association, John Gargani, writes:
Program design is both a verb and a noun.
It is the process that organizations use to develop a program. Ideally, the process is collaborative, iterative, and tentative—stakeholders work together to repeat, review, and refine a program until they believe it will consistently achieve its purpose.
A program design is also the plan of action that results from that process. Ideally, the plan is developed to the point that others can implement the program in the same way and consistently achieve its purpose.
One of the challenges with many social programs is that it isn’t clear what the purpose of the program is in the first place. Or rather, the purpose and the activities might not be well aligned. One example is the rise of ‘kindness meters’, the repurposing of old coin parking meters to collect money for particular causes. I love the idea of offering a pro-social means of getting small change out of my pocket and having it go to a good cause, yet some have taken the concept further and suggested it could be a way to redirect money to the homeless and thus reduce the number of panhandlers on the street. A recent article in Maclean’s magazine profiled this strategy, including its critics.
The biggest criticism of all is that there is a very weak theory of change to suggest that the meters and their funds will get people out of homelessness. Further, there is much we don’t know about this strategy: 1) how was it developed?, 2) was it prototyped, and where?, 3) what iterations were performed — and is this just the first?, 4) whose needs was it designed to address?, and 5) what needs to happen next with this design? This is an innovative idea to be sure, but the question is whether it is a beneficial one or not.
We don’t know. What evaluation can do is provide those answers and help ensure that an innovative idea like this is supported as it develops, so we can determine whether it ought to stay, go, or be transformed, and what we can learn from the entire process. Design without evaluation produces products; design with evaluation produces change.
A bigger perspective on value creation
The process of placing or determining value* of a program is about looking at three things:
1. The plan (the program design);
2. The implementation of that plan (the realization of the design on paper, in prototype form and in the world);
3. The products resulting from the implementation of the plan (the lessons learned throughout the process; the products generated from the implementation of the plan; and the impact of the plan on matters of concern, both intended and otherwise).
Prominent areas of design such as industrial, interior, fashion, or software design are principally focused on an end product. Most people aren’t concerned about the various lamps their interior designer didn’t choose in planning their new living space if they are satisfied with the one they did.
A look at the process of design — the problem finding, framing and solving that lies at the heart of design practice — reveals that the end product is actually the last in a long line of sub-products, and that, if the designers are paying attention and reflecting on their work, they are learning a great deal along the way. That learning and those sub-products matter greatly for social programs innovating and operating in human systems. They may be the real impact of the programs themselves, not the products.
One reason this is important is that many of our program designs don’t actually work as expected, at least not at first. Indeed, a look at innovation in general finds that about 70% of attempts at institutional-level innovation fail to produce the desired outcome, so we ought to expect that things won’t work the first time. Yet many funders and leaders place extraordinary burdens on project teams to get it right the first time. Without an evaluative framework to operate from, and the means to make sense of the data an evaluation produces, these programs will not only fail to achieve desired outcomes, but will also fail to learn, losing the very essence of what it means to (socially) innovate. It is in these lessons, and their integration into programs, that much of a program’s value is seen.
Designing opportunities to learn more
Design has a glorious track record of accountability for its products, in terms of satisfying its clients’ desires, but not for its process. Some might think that’s a good thing, but in the area of innovation it can be problematic, particularly where there is a need to draw on failure — unsuccessful designs — as part of the process.
In truly sustainable innovation, design and evaluation are intertwined. Creative development of a product or service requires evaluation to determine whether that product or service does what it says it does. This is of particular importance in contexts where the product or service may not have a clear objective, or may have multiple possible objectives. Many social programs are really experiments to see what might happen, undertaken as an alternative to doing nothing. The ‘kindness meters’ might be such a program.
Further, there is an ethical obligation to look at the outcomes of a program lest it create more problems than it solves or simply exacerbate existing ones.
Evaluation without design can result in feedback that isn’t appropriate, isn’t integrated into future developments and iterations, or is decontextualized. Evaluation also ensures that the work that goes into a design is captured and understood in context, irrespective of whether the resulting product was a true ‘innovation’. Another reason is that, particularly in social settings, the resulting product or service is not an ‘either/or’ proposition. There may be many elements of a ‘failed design’ that can be useful and incorporated into the final successful product, yet if it is viewed as a dichotomous ‘success’ or ‘failure’, we risk losing much useful knowledge.
Further, great discovery is predicated on incremental shifts in thinking, developed in a non-linear fashion. This means that it’s fundamentally problematic to ascribe a value of ‘success’ or ‘failure’ to something from the outset. In social settings, where ideas are integrated, interpreted and reworked the moment they are introduced, the true impact of an innovation may require a longer view to determine and, even then, can only be determined in part.
Much of this depends on what the purpose of innovation is. Is it the journey or is it the destination? In social innovation, it is fundamentally both. Indeed, it is also predicated on a level of praxis — knowing and doing — that shapes the ‘success’ of a social innovation.
When design and evaluation are excluded from each other, both are lesser for it. This year’s American Evaluation Association conference is focused boldly on the matter of design. While much of the conference will focus on program design, the emphasis is still on the relationship between what we create and the way we assess the value of that creation. The conference will provide perhaps the largest forum yet for discussing the value of evaluation for design, and that, in itself, is valuable.
*Evaluation is about determining the value, merit and worth of a program. I’ve only focused on the value aspects of this triad, although each aspect deserves consideration when assessing design.
Image credit: author