Category: evaluation

evaluation, social innovation

E-Valuing Design and Innovation


Design and innovation are often regarded as good things (when done well), even if a pause might find little to explain what those good things actually are. Without a sense of what design produces, what innovation looks like in practice, and an understanding of the journey to the destination, are we delivering false praise and false hope, and failing to deliver real, sustainable change?

What is the value of design?

If we are claiming to produce new and valued things (innovation) then we need to be able to show what is new, how (and whether) it’s valued (and by whom), and potentially what prompted that valuation in the first place. If we acknowledge that design is the process of consciously, intentionally creating those valued things — the discipline of innovation — then understanding its value is paramount.

Given the prominence of design and innovation in the business and social sector landscape these days, one might guess that we have a pretty good sense of what the value of design is; why else would so many be interested in the topic? If you did guess that, you’d have guessed incorrectly.

‘Valuating’ design, evaluating innovation

On the topic of program design, current president of the American Evaluation Association, John Gargani, writes:

Program design is both a verb and a noun.

It is the process that organizations use to develop a program.  Ideally, the process is collaborative, iterative, and tentative—stakeholders work together to repeat, review, and refine a program until they believe it will consistently achieve its purpose.

A program design is also the plan of action that results from that process.  Ideally, the plan is developed to the point that others can implement the program in the same way and consistently achieve its purpose.

One of the challenges with many social programs is that it isn’t clear what the purpose of the program is in the first place. Or rather, the purpose and the activities might not be well aligned. One example is the rise of ‘kindness meters’, the repurposing of old coin-operated parking meters to collect money for certain causes. I love the idea of offering a pro-social means of getting small change out of my pocket and having it go to a good cause, yet some have taken the concept further and suggested it could be a way to redirect money to the homeless and thus reduce the number of panhandlers on the street. A recent article in Maclean’s profiled this strategy, including its critics.

The biggest criticism of all is that there is only a very weak theory of change to suggest that the meters and their funds will get people out of homelessness. Further, there is much we don’t know about this strategy: 1) how was it developed? 2) was it prototyped, and where? 3) what iterations were performed — or is this just the first? 4) whose needs was it designed to address? and 5) what needs to happen next with this design? This is an innovative idea, to be sure, but the question is whether it’s a beneficial one.

We don’t know. What evaluation can do is provide answers and help ensure that an innovative idea like this is supported in its development, so we can determine whether it ought to stay, go, or be transformed, and what we can learn from the entire process. Design without evaluation produces products; design with evaluation produces change.


A bigger perspective on value creation

The process of placing or determining the value* of a program is about looking at three things:

1. The plan (the program design);

2. The implementation of that plan (the realization of the design on paper, in prototype form and in the world);

3. The products resulting from the implementation of the plan (the lessons learned throughout the process; the products generated from the implementation of the plan; and the impact of the plan on matters of concern, both intended and otherwise).

Prominent areas of design such as industrial, interior, fashion, or software design are principally focused on an end product. Most people aren’t concerned about the various lamps their interior designer didn’t choose in planning their new living space if they are satisfied with the one they did.

A look at the process of design — the problem finding, framing and solving aspects that comprise the heart of design practice — finds that the end product is actually the last in a long line of sub-products, and that, if designers are paying attention and reflecting on their work, they are learning a great deal along the way. That learning and those sub-products matter greatly for social programs innovating and operating in human systems. They may be the real impact of the programs themselves, not the products.

One reason this is important is that many of our program designs don’t actually work as expected, at least not at first. Indeed, a look at innovation in general finds that about 70% of the attempts at institutional-level innovation fail to produce the desired outcome. So we ought to expect that things won’t work the first time. Yet, many funders and leaders place extraordinary burdens on project teams to get it right the first time. Without an evaluative framework to operate from, and the means to make sense of the data an evaluation produces, not only will these programs fail to achieve desired outcomes, but they will fail to learn and lose the very essence of what it means to (socially) innovate. It is in these lessons and the integration of them into programs that much of the value of a program is seen.

Designing opportunities to learn more

Design has a glorious track record of accountability for its products in terms of satisfying its clients’ desires, but not its process. Some might think that’s a good thing, but in the area of innovation that can be problematic, particularly where there is a need to draw on failure — unsuccessful designs — as part of the process.

In truly sustainable innovation, design and evaluation are intertwined. Creative development of a product or service requires evaluation to determine whether that product or service does what it claims to do. This is of particular importance in contexts where the product or service may not have a clear objective, or may have multiple possible objectives. Many social programs are true experiments, launched to see what might happen as an alternative to doing nothing. The ‘kindness meters’ might be such a program.

Further, there is an ethical obligation to look at the outcomes of a program lest it create more problems than it solves or simply exacerbate existing ones.

Evaluation without design can result in feedback that isn’t appropriate, isn’t integrated into future developments and iterations, or is decontextualized. Evaluation also ensures that the work that goes into a design is captured and understood in context — irrespective of whether the resulting product was a true ‘innovation’. Another reason is that, particularly in social domains, the resulting product or service is not an ‘either/or’ proposition. There may be many elements of a ‘failed design’ that can be useful and incorporated into the final, successful product; if we view the result dichotomously as ‘success’ or ‘failure’, we risk losing much useful knowledge.

Further, great discovery is predicated on incremental shifts in thinking, developed in a non-linear fashion. This means it is fundamentally problematic to ascribe a value of ‘success’ or ‘failure’ to something from the outset. In social settings, where ideas are integrated, interpreted and reworked the moment they are introduced, the true impact of an innovation may take a longer view to determine and, even then, only partly.

Much of this depends on what the purpose of innovation is. Is it the journey or is it the destination? In social innovation, it is fundamentally both. Indeed, it is also predicated on a level of praxis — knowing and doing — that shapes the ‘success’ of a social innovation.

When design and evaluation are excluded from each other, both are lesser for it. This year’s American Evaluation Association conference is focused boldly on the matter of design. While much of the conference will concentrate on program design, the emphasis is still on the relationship between what we create and the way we assess the value of that creation. The conference will provide perhaps the largest forum yet for discussing the value of evaluation for design, and that, in itself, provides much value.

*Evaluation is about determining the value, merit and worth of a program. I’ve only focused on the value aspects of this triad, although each aspect deserves consideration when assessing design.

Image credit: author

business, design thinking, evaluation, innovation

Designing for the horizon


‘New’ is rarely sitting directly in front of us; it is on the horizon, something we need to travel toward or that is coming toward us, waiting to be received. In both cases, innovation (doing something new for benefit) requires different action, rather than repeated action with new expectations.

I’ve spent time in different employment settings where my full-time job was to work on innovation, whether through research, teaching or some kind of service contribution, yet I found the opportunities to truly innovate relatively rare. The reason is that these institutions were not designed for innovation, at least in the current sense of the word. They were well established and drew upon the practices of the past to shape the future, rather than shaping the future by design. On many occasions, even when there was a chance to build something new from the ground up — a unit, centre, department, division, school — the choice was to replicate the old and hope for something new.

This isn’t how innovation happens. One of those who understood this better than most was Peter Drucker. A simple search through the myriad quotes attributed to him will find unparalleled wisdom pertaining to commitment, work ethic, and management. Included in that wisdom is the simple phrase:

If you want something new, you have to stop doing something old

Or, as the quote often attributed to Henry Ford suggests:

Do what you’ve always done and you will get what you’ve always got

Design: An intrapreneurial imperative

In each case throughout my career I’ve chosen to leave and pursue opportunities that are more nimble and allow me to really innovate with people on a scale appropriate to the challenge or task. Yet this choice to be nimble often comes at the cost of scale, which is why I work as a consultant supporting larger organizations to change, bringing the agility of innovation to institutions not well set up for it (and helping them set up for it).

There are many situations where outside support through someone like a consultant is not only wise, but maybe the only option for an organization needing fresh insight and expertise. Yet, there are many situations where this is not the case. In his highly readable book The Designful Company, Marty Neumeier addresses the need for creating a culture of non-stop innovation and ways to go about it.

At the core of this approach is design. As he states:

If you want to innovate, you gotta design

 

Design is not about making things look pretty; it’s about making things stand out, differentiating them from others in the marketplace**. (Good) design is what makes these differentiated products worthy of consideration or adoption, because they meet a need, satisfy a desire or fill a gap somewhere. As Neumeier adds:

Design contains the skills to identify possible futures, invent exciting products, build bridges to customers, crack wicked problems, and more.

When posed this way, one is left asking: why is everyone not trained as a designer? Or, put another way: why aren’t most organizations talking about design?

Being mindful of the new

This brings us back to Peter Drucker and another pearl of wisdom gained from observing the modern organization and the habits that take place within it:

Follow effective action with quiet reflection. From the quiet reflection will come even more effective action.

This was certainly not part of the institutional culture of the organizations I belonged to, and it’s not part of many of the organizations I’ve worked with. The rush to do, to take action, is rarely complemented with reflection, because reflection looks like inaction. While I might have created habits of reflective practice in my work as an individual, that is not sufficient to create change in an organization without some form of collective, or at least shared, reflection.

To test this out, ask yourself the following questions of the workplace you are a part of:

  • Do you hold regular, timely gatherings for workers to share ideas, discuss challenges and explore possibilities without an explicit outcome attached to the agenda?
  • Is reflective practice part of the job requirements of individuals and teams where there is an expectation that it is done and there are performance review activities attached to such activity? Or is this a ‘nice to have’ or ‘if time permits’ kind of activity?
  • Are members of the organization provided time and resources to deliver on any expectations of reflective practice, both individually and collectively?
  • Are other agenda items, like administrative matters, corporate affairs, news, or ‘emergencies’, regularly intruding upon the agendas of gatherings such as strategic planning meetings or reflection sessions?
  • Is evaluation a part of the process of reflection? Do you make time to review evaluation findings and reflect on their meaning for the organization?
  • Do members of the organization — from top to bottom — know what reflection is or means in the context of their work?
  • Does the word mindfulness come into conversations in the organization at any time in an official capacity?

Designing mindfulness

If innovation means design, and effective action requires reflection, it can be surmised that designing mindfulness into the organization can yield considerable benefits. Certain times of year, whether New Year’s Day (like today’s posting date), Thanksgiving (in Canada and the US), birthdays, anniversaries, religious holidays or even the end of the fiscal year or quarter, can prompt some reflection.

Designing mindfulness into an organization requires taking the spirit that comes from these events and making it regular. This means protecting time to be mindful (just as we usually take time off for holidays), building regular practices into the workflow much as we do with other activities, and including data (evaluation evidence) to support and potentially guide that reflection. Sensemaking time to bring this together as a group is also key, as is the potential to use design as a tool for foresight and for envisioning new futures.

To this last point I conclude with another quote attributed to Professor Drucker:

The best way to predict your future is to create it

As you begin this new year, new quarter, new day consider how you can design your future and create the space in your organization — big or small — to reflect and act more mindfully and effectively.


** This could be a marketplace of products, services, ideas, attention or commitment.

Photo credit: Hovering on the Horizon by the NASA Earth Observatory used under Creative Commons Licence via Flickr

design thinking, evaluation, social innovation

The Ecology of Innovation: Part 1 – Ideas

Innovation Ecology


There is a tendency, when looking at innovation, to focus on the end result of a process of creation rather than on one node in a larger body of activity; yet when we expand our frame of reference to see these connections, innovation starts to look much more like an ecosystem than a simple outcome. This first post in a series examines innovation ecology from the place of ideas.

Ideas are the kindling that fuels innovation. Without good ideas, bold ideas, and ideas that have the potential to move our thinking, actions and products further we are left with the status quo: what is, rather than what might be.

What is often missed in the discussion of ideas is the backstory: the connections between thoughts that lead to the ideas that may eventually become an innovation*. Inattention to (or unawareness of) this backstory might be one reason why many people think they are uncreative or believe they have low innovation potential. Drawing greater attention to these connections, and framing them as part of an ecosystem, has the potential not only to free people from the tyranny of having to create the best ideas, but also to expose the wealth of knowledge generated in pursuit of those ideas.

Drawing Connections

Connections is the title of a book by science historian James Burke that draws on his successful British science documentary series, first aired in the 1970s and later recreated in the mid-1990s. The premise of the book and series is to show how ideas link to and build on one another to yield the scientific insights we see. By viewing ideas in a collective realm, we see how they can and do connect, weaving together a tapestry of knowledge that is far more than the sum of its parts.

Too often we see the celebration of innovation as focused on the parts – the products. This is the iPhone, the One World Futbol, the waterless toilet, the intermittent windshield wiper or a process like the Lean system for quality improvement or the use of checklists in medical care. These are the ideas that survive.

The challenge with this perspective on ideas is that it appears to be all-or-nothing: either the idea is good and works or it is not and doesn’t work.

This latter way of thinking imposes judgement on the end result, yet is strangely at odds with innovation itself. It is akin to judging flour, salt, sugar or butter to be bad because a baker’s cake didn’t turn out. Ideas are the building blocks — the DNA, if you will — of innovations. But, like DNA (and RNA), it is only in their ability to connect, form and multiply that we see innovation yield true benefit at a system level. Just like the baker’s ingredient list, ideas can serve different purposes to different effects in different contexts, and the key is knowing (or uncovering) what that looks like and learning what effect it has.

From ideas to ecologies

An alternative to the idea-as-product perspective is to view ideas as part of a wider system. This takes James Burke’s connections to a new level, viewing ideas as part of a symbiotic, interactive, dynamic set of relations. Just as with DNA, there is a lot of perceived ‘junk’ in the collection that may have no obvious benefit, yet whose existence enables the non-junk to reveal and produce its value.

This biological analogy can extend further to the realm of systems. The term ecosystem embodies this thinking:

ecosystem |ˈekōˌsistəm, ˈēkō-| {noun}

Ecology

a biological community of interacting organisms and their physical environment.

• (in general use) a complex network or interconnected system: Silicon Valley’s entrepreneurial ecosystem | the entire ecosystem of movie and video production will eventually go digital.

Within this perspective on biological systems, is the concept of ecology:

ecology |iˈkäləjē| {noun}

1 the branch of biology that deals with the relations of organisms to one another and to their physical surroundings.

2 (also Ecology) the political movement that seeks to protect the environment, especially from pollution.

What is interesting about the definitions above, drawn from the Oxford English Dictionary, is that they focus on biology, the discipline where it first was explored and studied. The definition of biology used in the Wikipedia entry on the topic states:

Biology is a natural science concerned with the study of life and living organisms, including their structure, function, growth, evolution, distribution, and taxonomy.

Biologists do not look at ecosystems, decide which animals, plants or environments are good or bad, and proceed to discount them; rather, they look at what each brings to the whole, its role and its relationships. Biology is not without evaluative elements: judgement is still applied to these ‘parts’ of the system, since certain species, environments and contexts are more or less beneficial for certain goals or actors/agents than others, but judgement is always contextualized.

Designing for better idea ecologies

Contextual learning is part of sustainable innovation. Unlike natural systems, which function according to hidden rules (“the laws of nature”) that govern ecosystems, human systems are created and intentional; designed. Many of these systems are designed poorly or with little thought to their implications, but because they are designed we can re-design them. Our political systems, social systems, living environments and workplaces are all examples of human systems. Even families are designed systems given the social roles, hierarchies, expectations and membership ‘rules’ that they each follow.

Because human systems are designed, we can design the innovation systems we form just as deliberately. Viewing ideas as part of an innovation ecosystem offers an opportunity to do more with what we create. Rather than lead a social-Darwinian push toward the ‘best’ ideas, an idea ecosystem creates the space for ideas to be repurposed, built upon and revised over time. Thus our brainstorming doesn’t have to end with whatever we come up with at the close of a session (and may hate anyway); it is ongoing.

This commitment to ongoing ideation, sensemaking and innovation (and the knowledge translation, exchange and integration) is what distinguishes a true innovation ecosystem from a good idea done well. In future posts, we’ll look at this concept of the ecosystem in more detail.

Brainstorming Folly


Tips and Tricks:

Consider recording your ideas and revisiting them over time. Scheduling a brief moment to revisit your notebooks and content periodically and regularly keeps ideas alive. Consider the effort in brainstorming and bringing people together as investments that can yield returns over time, not just a single moment. Shared Evernote notebooks, Google Docs, building (searchable) libraries of artifacts or regular revisiting of project memos can be a simple, low-cost and high-yield way to draw on your collective intellectual investment over time.
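The tip above, record ideas and then revisit them on a schedule, can be sketched in a few lines. This is a purely illustrative sketch: the file name, the record fields and the review interval are hypothetical choices, not anything prescribed in the post.

```python
import json
import time
from pathlib import Path

LOG = Path("idea_log.jsonl")  # hypothetical file name; one JSON record per line


def record_idea(text, tags=()):
    """Append an idea with a timestamp so it survives past the brainstorm."""
    entry = {"text": text, "tags": list(tags), "last_seen": time.time()}
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")


def ideas_due_for_review(days=30):
    """Return ideas not revisited in `days` days: the 'keep ideas alive' step."""
    if not LOG.exists():
        return []
    cutoff = time.time() - days * 86400
    with LOG.open() as f:
        entries = [json.loads(line) for line in f if line.strip()]
    return [e for e in entries if e["last_seen"] < cutoff]
```

A shared notebook or document tool serves the same purpose; the point is simply that revisiting is scheduled, not left to chance.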

* An innovation for this purpose is a new idea realized for benefit.

Image Credits: Top: Evolving ecology of the book (mindmap) by Artefatica used under Creative Commons License from Flickr.

Bottom: Brainstorming Ideas from Tom Fishburne (Marketoonist) used under commercial license.

complexityeducation & learningevaluationsystems thinking

Developmental Evaluation: Questions and Qualities

Same thing, different colour or different thing?


Developmental evaluation, a form of real-time evaluation focused on innovation and complexity, is gaining interest and attention with funders, program developers, and social innovators. Yet its popularity is revealing fundamental misunderstandings and misuse of the term that, if left unquestioned, may threaten the advancement of this important approach as a tool to support innovation and resilience.

If you are operating in the social service, health promotion or innovation space it is quite possible that you’ve been hearing about developmental evaluation, an emerging approach to evaluation that is suited for programs operating in highly complex, dynamic conditions.

Developmental evaluation (DE) is an exciting advancement in evaluative and program design thinking because it links those two activities together, creating an ongoing conversation about innovation in real time to facilitate strategic learning about what programs do and how they can evolve wisely. Because it is rooted in traditional program evaluation theory and methods as well as in complexity science, it takes a realist approach to evaluation, making it fit the thorny, complex, real-world situations that many programs inhabit.

I ought to be excited at seeing DE brought up so often, yet I am often not. Why?

Building a better brand for developmental evaluation?

Alas, with rare exception, when I hear someone speak about the developmental evaluation they are involved in, I fail to hear any of the indicator terms one would expect from such an evaluation. These include terms like:

  • Program adaptation
  • Complexity concepts like emergence, attractors, self-organization, boundaries,
  • Strategic learning
  • Surprise!
  • Co-development and design
  • Dialogue
  • System dynamics
  • Flexibility

DE is following the well-worn path laid by terms like systems thinking, a term that gets less useful every day as it comes to mean any mode of thought that considers the bigger context of a program (the system (?) — whatever that is, it’s never elaborated on), even when there is none of the structure, discipline, method or focus one would expect from true systems thinking. In other words, it’s thinking about a system without the effort of real systems thinking. Still, people see themselves as systems thinkers as a result.

I hear the term DE being used more and more frequently in this cavalier manner, which I suspect reflects aspiration rather than reality.

This aspiration is likely about wanting to be seen (by themselves and others) as innovative, adaptive and participative, and as being a true learning organization. DE has the potential to support all of this, but accomplishing these things requires an enormous amount of commitment. It is not for the faint of heart, the rigid and inflexible, the traditionalists, or those who have little tolerance for risk.

Doing DE requires that you set up a system for collecting, sharing, sensemaking, and designing-with data. It means being willing to — and competent enough to know how to — adapt your evaluation design and your programs themselves in measured, appropriate ways.
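As a loose illustration of what “collecting, sharing, sensemaking, and designing-with data” can mean in practice, one might keep each pass through that loop as a single record. Everything here (the class name, the fields, the sample content) is a hypothetical sketch, not a structure drawn from the DE literature.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DERecord:
    """One pass through a DE learning loop: observe, make sense, adapt."""
    observed: str      # what the data showed, including surprises
    sensemaking: str   # what the team, together, thinks it means
    adaptation: str    # what the program or the evaluation design will change
    logged: date = field(default_factory=date.today)


# Example entry, using the kindness-meter case from the earlier post
learning_log = [
    DERecord(
        observed="Meter donations cluster near transit stops, flat elsewhere",
        sensemaking="Placement, not signage, may be driving giving",
        adaptation="Prototype two relocated meters before the next review cycle",
    ),
]
```

The structure matters far less than the discipline: each adaptation stays traceable back to data and to a shared interpretation of it.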

DE is about discipline, not precision. Too often I see quests to fit a beautiful, elegant design to the ‘social messes’ that the programs under evaluation represent, only to do what Russell Ackoff called “the wrong things, righter”, because a standard, rigid method is applied to a slippery, complex problem.

Maybe we need to build a better brand for DE.

Much ado about something

Why does this fuss about the way people use the term DE matter? Is this not some academic rant based on a sense of ‘preciousness’ of a term? Who cares what we call it?

This matters because the programs that use and can benefit from DE matter. If it’s just gathering some loose data, slapping it together, calling it an evaluation, and knowing that nothing will ever be done with it, then maybe it’s OK (actually, that’s not OK either — but let’s pretend for the sake of the point). When real program decisions are made, jobs are kept or lost, communities are strengthened or weakened, and the energy and creative talents of those involved are put to the test because of an evaluation and its products, the details matter a great deal.

If DE promises a means to critically, mindfully and thoroughly support learning and innovation, then it needs to keep that promise. But that promise can only be kept if what we call DE is not something else.

That ‘something else’ is often a form of utilization-focused evaluation, or maybe participatory evaluation or it might simply be a traditional evaluation model dressed up with words like ‘complexity’ and ‘innovation’ that have no real meaning. (When was the last time you heard someone openly question what someone meant by those terms?)

We take such terms as given and for granted, and make enormous assumptions about what they mean that are not always supported. There is nothing wrong with any of these approaches when they are appropriate, but too often I see mismatches between the problem and the evaluative thinking and practice tools used to address it. DE is new, sexy and a sure sign of innovation to some, which is why it is often picked.

Yet it’s like saying “I need a 3-D printer” when you’re looking to fix a pipe on your sink and what you need is a wrench, because the printer is the latest tool innovation and wrenches are “last year’s” tool. It makes no sense. Yet it’s done all the time.

Qualities and qualifications

There is something alluring about the mysterious. Innovation, design and systems thinking all have elements of mystery to them, which allows for obfuscation, confusion and well-intentioned errors in judgement depending on who and what is being discussed in relation to those terms.

I’ve started seeing recent university graduates claim to be developmental evaluators with almost no concept of complexity or service design and just a single course in program evaluation. I’m seeing traditional organizations recruit and hire for developmental evaluation without making any adjustments to their expectations, modes of operating or timelines, while still expecting results that could only come from DE. It’s as I’ve written before, and as Winston Churchill once said:

I am always ready to learn, but I don’t always like being taught

Many programs are not even primed to learn, let alone being taught.

So what should someone look for in DE and in those who practice it? What are some questions that those seeking DE support should ask of themselves?

Of evaluators

  • What familiarity and experience do you have with complexity theory and science? What is your understanding of these domains?
  • What experience do you have with service design and design thinking?
  • What kind of evaluation methods and approaches have you used in the past? Are you comfortable with mixed-methods?
  • What is your understanding of the concepts of knowledge integration and sensemaking? And how have you supported others in using these concepts in your career?
  • What is your education, experience and professional qualifications in evaluation?
  • Do you have skills in group facilitation?
  • How open and willing are you to support learning, adapt, and change your own practice and evaluation designs to suit emerging patterns from the DE?

Of programs

  • Are you (we) prepared to alter our normal course of operations in support of the learning process that might emerge from a DE?
  • How comfortable are we with uncertainty? Unpredictability? Risk?
  • Are our timelines and boundaries we place on the DE flexible and negotiable?
  • What kind of experience do we have truly learning, and are we prepared to create a culture around the evaluation that is open to learning? (This means tolerance of ambiguity, failure, surprise, and new perspectives.)
  • Do we have practices in place that allow us to be mindful and aware of what is going on regularly (as opposed to every 6-months to a year)?
  • How willing are we to work with the developmental evaluator to learn, adapt and design our programs?
  • Are our funders/partners/sponsors/stakeholders willing to come with us on our journey?

Of both evaluators and program stakeholders

  • Are we willing to be open about our fears, concerns, ideas and aspirations with ourselves and each other?
  • Are we willing to work through data that is potentially ambiguous, contradictory, confusing, time-sensitive, context-sensitive and incomplete in capturing the entire system?
  • Are we willing/able to bring others into the journey as we go?

DE is not a magic bullet, but it can be a very powerful ally to programs that are operating in domains of high complexity and require innovation to adapt, thrive and build resilience. It is an important job and a formidable challenge, with great potential benefits for those willing to dive into it competently. It is for these reasons that it is worth doing, and doing well.

Getting there means taking DE seriously: the demands it puts on us, the requirements for all involved, and the need to be clear in our language lest we let the not-good-enough be the enemy of the great.

 

Photo credit: Highline Chairs by the author

behaviour change, evaluation, innovation

Beyond the Big and New: Innovating on Quality

The newest, biggest, shiny thing

Innovation is a term commonly associated with 'new' and sparkly products and things, but that quest for the bigger and shinier in what we do often obscures the true innovative potential within systems. Rethinking what we mean by innovation, and considering the role that quality plays, might help us determine whether bigger and glossier is just that, rather than necessarily better.

Einstein’s oft-paraphrased line about new thinking and problems goes something like this:

“Problems cannot be solved with the same mind set that created them.”

In complex conditions, this quest for novel thinking is not just ideal, it’s necessary. However genuine, this quest for the new idea and the new thing draws heavily upon widely shared human fears of the unknown, and it is also framed within a context of Western values. Not all cultures revere the new over what came before it, but in the Western world the ‘new’ has become celebrated, and none more so than through the word innovation.

Innovation: What’s in a word?

Innovation web

A look at some of the terms associated with innovation (above) finds an emphasis on discovery and design, which can imply a positive sense of wonder and control to those with Westernized sentiments. Indeed, a survey of the landscape of actors, services and products seeking to make positive change in the world finds innovation everywhere and an almost obsessive quest for ideas. What is less attended to is providing a space for these ideas to take flight and answer meaningful, not trivial, questions in an impactful way.

Going Digital Strategy by Tom Fishburne

I recently attended an event with Zaid Hassan speaking on Social Labs and his new book on the subject. While there was much interest in the way a social lab engages citizens in generating new ideas, I was pleased to hear Hassan emphasize that the energy of a successful lab must be directed at implementing ideas in practice, not just generating them.

Another key point of discussion was the overall challenge of going deep into something and the costs of doing that. This last point got me thinking about the way we frame innovation and what is privileged in that discussion.

Innovating beyond the new

Sometimes innovation takes place not only in building new products and services, but in thinking new thoughts, and seeing new possibilities.

Thinking new thoughts requires asking new or better questions of what is happening. As for seeing new possibilities, that might mean looking at things long forgotten and at past practices to inform new practice, not just coming up with something novel. Ideas are sexy and fun and generate excitement, yet it is the realization of these ideas that matters more than anything.

The ‘new’ idea might actually be an old one, rethought and re-purposed. Politicians and funders, however, often equate ‘new’ things with action and work. Re-purposing knowledge and products, re-thinking, or simply developing ideas in an evolutionary manner are harder to see and less sexy to sell to donors and voters.

When new means better, not necessarily bigger

Much of the social innovation sector is consumed, even obsessed, with scale. The Stanford Social Innovation Review, the key journal for the burgeoning field, is filled with articles, events and blog posts that emphasize the need for scaling social innovations. Scaling, in nearly all of these contexts, means taking an idea to more places to serve more people. The idea of taking a constructive idea that, when realized, benefits as many as possible is hard to argue against; however, such a goal rests on a number of assumptions about the intervention, population of focus, context, resource allocation, and the political and social acceptability of what is proposed, and these assumptions are often not aligned.

What is bothersome is that these discussions show nowhere near the same concern for quality. In public health we often speak of intervention fidelity, intensity, duration, reach, fit and outcome, particularly with those initiatives that have a social component. In this context, low-quality information poses a real threat in some circumstances: it can lead someone to make a poorly informed or misleading choice. We don’t seem to see that same care and attention in other areas of social innovation. Sometimes that is because there is no absolute level of quality to judge, or because the benefits of greater quality are imperceptibly small.

But I suspect that this is a case of not asking the question about quality in the first place. Apple under Steve Jobs was famous for creating “insanely great” products and using a specific language to back that up. We don’t talk like that in social innovation and I wonder what would happen if we did.

Would we pay more attention to showing impact than just talking about it?

Would we design more with people than for them?

Would we be bolder in our experiments?

Would we be less quick to use knee-jerk dictums around scale and speak of depth of experience and real change?

Would we put resources into evaluation, sensemaking and knowledge translation so we could adequately share our learning with others?

Would we be less hyperbolic and sexy?

Might we be more relevant to more people, more often and (ironically, perhaps) scale social innovation beyond measure?

Marketoonist Cartoon used under license.


complexity, emergence, evaluation, innovation

Do you value (social) innovation?

Do You Value the Box or What’s In It?

The term evaluation has at its root the term value, and to evaluate innovation means to assess the value it brings through its product or its process of development. It’s remarkable how much discourse there is on the topic of innovation that is devoid of discussion of evaluation, which raises the question: Do we value innovation in the first place?

The question posed above is not a cheeky one. Whether or not we value innovation gets at the heart of our insatiable quest for all things innovative.

Historical trends

A look at Google N-gram data for book citations provides a historical picture of how often a particular word shows up in books published since 1880. Running the terms innovation, social innovation and evaluation through the N-gram tool finds some curious trends. A look at the graphs below shows that the term innovation spiked after the Second World War. A closer look reveals a second major spike from the mid-1990s onward, which is likely due to the rise of the Internet.

In both cases, technology played a big role in shaping interest in innovation and its discussion. The Cold War of the 1950s and the Internet each presented new problems to find and a pressing need to address them.

[Google Books N-gram graphs for innovation, social innovation and evaluation, captured 2014-03-05]

Below that is social innovation, a newer concept (although not as new as many think), which showed a peak in citations in the 1960s and ’70s, corresponding with the U.S. civil rights movement, the expansion of social service fields like social work and community mental health, anti-nuclear organizing, and the environmental movement. This two-decade rise was followed by a sharp decline until the early 2000s, when things began to increase again.

Evaluation, however, saw the most sustained increase of the three terms over the 20th century, yet has been in decline since 1982. Most notable is the even sharper decline when both innovation and social innovation spiked.

Keeping in mind that this is not causal or even linked data, it is still worth asking: What’s going on? 
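For the curious, trend data like this can also be pulled programmatically. The sketch below builds a query URL for the JSON endpoint used by the Google Books Ngram Viewer web page; note that this endpoint is unofficial and undocumented, so the URL and parameter names are assumptions based on its observed behaviour and may change without notice.

```python
from urllib.parse import urlencode

# Unofficial, undocumented endpoint behind the Ngram Viewer web page.
NGRAM_ENDPOINT = "https://books.google.com/ngrams/json"

def ngram_query_url(terms, year_start=1880, year_end=2008, smoothing=3):
    """Build a query URL for a set of n-grams over a range of years."""
    params = {
        "content": ",".join(terms),  # comma-separated n-grams
        "year_start": year_start,
        "year_end": year_end,
        "smoothing": smoothing,      # moving-average window, in years
    }
    return NGRAM_ENDPOINT + "?" + urlencode(params)

url = ngram_query_url(["innovation", "social innovation", "evaluation"])
# Fetching `url` (e.g. with urllib.request) returns, at the time of
# writing, JSON containing a relative-frequency timeseries per term.
```

Plotting those relative frequencies side by side is what produces graphs like the ones above.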

The value of evaluation

Let’s look at what the heart of evaluation is all about: value. The Oxford English Dictionary defines value as:

value |ˈvalyo͞o|

noun

1 the regard that something is held to deserve; the importance, worth, or usefulness of something: your support is of great value.

• the material or monetary worth of something: prints seldom rise in value | equipment is included up to a total value of $500.

• the worth of something compared to the price paid or asked for it: at $12.50 the book is a good value.

2 (values) a person’s principles or standards of behavior; one’s judgment of what is important in life: they internalize their parents’ rules and values.

verb (values, valuing, valued) [ with obj. ]

1 estimate the monetary worth of (something): his estate was valued at $45,000.

2 consider (someone or something) to be important or beneficial; have a high opinion of: she had come to value her privacy and independence.

Innovation is a buzzword. It is hard to find many organizations who do not see themselves as innovative or use the term to describe themselves in some part of their mission, vision or strategic planning documents. A search on bookseller Amazon.com finds more than 63,000 titles organized under “innovation”.

So it seems we like to talk about innovation a great deal; we just don’t like to talk about what it actually does for us (at least not in the same measure). Perhaps if we did, we might have to confront what designer Charles Eames said:

Innovate as a last resort. More horrors are done in the name of innovation than any other.

At the same time I would like to draw inspiration from another of Eames’ quotes:

Most people aren’t trained to want to face the process of re-understanding a subject they already know. One must obtain not just literacy, but deep involvement and re-understanding.

Valuing innovation

Innovation is easier to say than to do and, as Eames suggested, is a last resort when the conventional doesn’t work. For those working in social innovation, the “conventional” might not even exist, as their work deals with the new, the unexpected, the emergent and the complex. It is perhaps not surprising that the book Getting to Maybe: How the World is Changed is co-authored by an evaluator: Michael Quinn Patton.

While Patton has been prolific in advancing the concept of developmental evaluation, the term hasn’t caught on in widespread practice. A look through the social innovation literature finds little mention of developmental evaluation, or even evaluation at all, lending support to the extrapolation made above. In my recent post on Zaid Hassan’s book on social laboratories, one of my critiques was that there was much discussion about how these social labs “work” with relatively little mention of the evidence to support and clarify that claim.

One hypothesis is that evaluation can be seen as a ‘buzzkill’ to the buzzword. It’s much easier, and certainly more fun, to claim you’re changing the world than to interrogate your own activities and find that the change isn’t as big or profound as you expected. Documenting change isn’t perceived to be as fun as making change, although I would argue that one is fuel for the other.

Another hypothesis is that there is much misunderstanding about what evaluation is, with (anecdotally) many social innovators thinking that it’s all about numbers and math and that it misses the essence of the human connections that social innovation is all about.

A third hypothesis is that our discourse on change, innovation, and social movements lacks evaluative thinking aligned with the nature of systems, leaving people stuck with models of evaluation that simply don’t fit the context of what they’re doing and that therefore add little of the value evaluation is meant to reveal.

If we value something, we need to articulate what that means if we want others to follow and value the same thing. That means going beyond lofty, motherhood statements that feel good — community building, relationships, social impact, “making a difference” — and articulating what they really mean. In doing so, we are better positioned to do more of what works well, change what doesn’t, and create the culture of inquiry and curiosity that links our aspirations to our outcomes.

It means valuing what we say we value.

(As a small plug: want to learn more about this? The Evaluation for Social Innovation workshop takes this idea further and gives you ways to value, evaluate and communicate value. March 20, 2014 in Toronto).


complexity, design thinking, emergence, evaluation, systems science

Developmental Evaluation and Design

Creation for Reproduction

 

Innovation is about channeling new ideas into useful products and services, which is really about design. Thus, if developmental evaluation is about innovation, then it is also fundamental that those engaged in such work — on both the evaluator and program ends — understand design. In this final post of this first series on Developmental Evaluation and…, we look at how design and design thinking fit with developmental evaluation and what the implications are for programs seeking to innovate.

Design is a field of practice that encompasses professional domains, design thinking, and critical design approaches. It is a big field, a creative one, but also a space with much richness in thinking, methods and tools that can aid program evaluators and program operators.

Defining design

In their excellent article on designing for emergence (PDF), OCAD University’s Greg Van Alstyne and Bob Logan introduce a definition they set out to make the shortest and most concise they could envision:

Design is creation for reproduction

It may also be the best (among many — see the Making CENSE blog for others) because it speaks to what design does, what it is intended to do, and where it came from, all at the same time. A quick historical look finds that the term design didn’t really exist until the industrial revolution. It was not until we could produce things and replicate them on a wide scale that design actually mattered; prior to that, what we had was simply referred to as craft. One did not supplant the other. However, as societies transformed through migration, through technological development and adoption, and through shifts in political and economic systems that increased collective action and participation, we saw things — products, services, and ideas — primed for replication and distribution and thus, designed.

The products, services and ideas that succeeded tended to be better designed for such replication in that they struck a chord with an audience who wanted to further share and distribute that object. (This is not to say that all things replicated are of high quality or ethical value, just that they found the right purchase with an audience and were better designed to provoke that sharing.)

In a complex system, emergence is the force that provokes the kind of replication that we see in Van Alstyne and Logan’s definition of design. With emergence, new patterns emerge from activity that coalesces around attractors and this is what produces novelty and new information for innovation.

A developmental evaluator is someone who creates mechanisms to capture data and channel it to program staff and clients, who can then make sense of it and choose to stabilize that new pattern of activity in whatever manner possible, amplify it, or — if it is not helpful — make adjustments to dampen it.
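As a toy illustration (mine, not Patton's), the stabilize/amplify/dampen choice can be sketched as a tiny decision rule, where the threshold and the "helpful" flag are placeholders for the human sensemaking that program staff bring to the data:

```python
def recommend_action(trend, helpful):
    """Suggest a response to an emerging pattern of activity.

    trend   -- relative change in an indicator across feedback cycles
    helpful -- program staff's judgement of whether the pattern serves
               the program's purpose (the human sensemaking step)
    """
    if not helpful:
        return "dampen"      # adjust activities to discourage the pattern
    if abs(trend) < 0.05:    # illustrative threshold: pattern is steady
        return "stabilize"   # keep the helpful pattern in place
    return "amplify"         # helpful and still moving: invest in it
```

The point of the sketch is only that the data alone never decides; the judgement of helpfulness sits with the people running the program.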

But how do we do this if we are not designing?

Developmental evaluation as design

A quote from Nobel Laureate Herbert Simon is apt when considering why the term design is appropriate for developmental evaluation:

“Everyone designs who devises courses of action aimed at changing existing situations into preferred ones”.

Developmental evaluation is about modification, adaptation and evolution in innovation (poetically speaking) using data as a provocation and guide for programs. One of the key features that makes developmental evaluation (DE) different from other forms of evaluation is the heavy emphasis on use of evaluation findings. No use, no DE.

But further, what separates DE from utilization-focused evaluation (PDF) is that the use of evaluation data is intended to foster development of the program, not just use. I’ve written about what development looks like in other posts. No development, no DE.

Returning to Herb Simon’s quote, we see that the goal of DE is to provoke discussion of development and thus change, so it could be argued that, at least at some level, DE is about design. That is a tepid assertion. A bolder one is that design is actually integral to development, and thus developmental design is what we ought to be striving for through our DE work. Developmental design is not only about evaluative thinking, but design thinking as well. It brings together the spirit of experimentation within complexity and the feedback systems of evaluation with a design sensibility around how to make sense of, pay attention to, and transform information into a new product evolution (innovation).

This sounds great, but if you don’t think about design then you’re not thinking about innovating, and that means you’re not really developing your program.

Ways of thinking about design and innovation

There are numerous examples of design processes and steps. A full coverage of all of this is beyond the scope of a single post and will be expounded on in future posts here and on the Making CENSE blog for tools. However, one approach to design (thinking) is highlighted below and is part of the constellation of approaches that we use at CENSE Research + Design:

The design and innovation cycle

Much of this process has been examined in previous posts in this series; however, it is worth looking at it again.

Herbert Simon wrote about design as a problem forming (finding), framing and solving activity (PDF). Other authors, like IDEO’s Tim Brown and the Kelley brothers, have written about design further (for more references check out CENSEMaking’s library section), but essentially the three domains proposed by Simon hold up as ways to think about design at a very basic level.

What design does is make the process of stabilizing, amplifying or dampening emergent new information intentional. Without a sense of purpose — and a mindful attention to process — and the sensemaking that DE puts in place, it is difficult to know what is advantageous and what is not. Within the realm of complexity we run the risk of amplifying and dampening the wrong things, or ignoring them altogether. This has immense consequences, as even staying still in a complex system is moving: change happens whether we want it or not.

The above diagram places evaluation near the end of the corkscrew process; however, that is a bit misleading. It implies that DE-related activities come at the end. What is being argued here is that if the stage isn’t set at the beginning by asking the big questions — the problem finding, forming and framing — then the efforts to ‘solve’ them are unlikely to succeed.

Without the means to understand how new information feeds into the design of the program, we end up serving data to programs that know little about what to do with it. One of the dangers in complexity is having too much information that we cannot make sense of; in complex scenarios we want to find simplicity where we can, not add more complexity.

To do this and to foster change is to be a designer. We need to consider the program/product/service user, the purpose, the vision, the resources and the processes in place within the systems where we work, creating and re-creating the very thing we are evaluating while we are evaluating it. In that entire chain we see why developmental evaluators might also want to put on their black turtlenecks and become designers.

No, designers don’t all look like this.

 

Photo Blueprint by Will Scullen used under Creative Commons License

Design and Innovation Process model by CENSE Research + Design

Lower image used under license from iStockphoto.