Category: evaluation

Categories: evaluation, social innovation

Flipping the Social Impact Finger


Look around and you'll notice a lot of talk about social enterprise and social impact. Look closer and you'll find a lot more of the former and far less of the latter.

There’s a Buddhist-inspired phrase that I find myself reflecting on often when traveling in the social innovation/entrepreneurship/enterprise/impact sphere:

Do not confuse the finger pointing to the moon for the moon itself

As terms like social enterprise and entrepreneurship, social innovation, social laboratories and social impact (which I'll lump together as [social] for expediency of writing) become better known and written about, it's easy to get caught up in the excitement and proclaim their success in changing the world. Indeed, we are seeing a real shift not only in what is being done, but also in what is perceived as possible among communities that never saw opportunities to advance before.

However exciting this is, there is what I see as a growing tendency to lose the forest for the trees by focusing on the growth of [social] and less on the impact part of that collection of terms. In other words, there's a sense that lots of talk and activity in [social] is translating to social impact. Maybe, but how do we know?

Investment and ROI in change

As I've written before using the same guiding phrase cited above, there is a great tendency in social impact to confuse conversation about something with the very thing being talked about. For all of the attention paid to the number of ventures and the amount of venture capital raised to support new initiatives across the social innovation spectrum in recent years, precious few evaluations of these projects have been made available.

As one government official working in this sector recently told me:

We tend to run out of steam after (innovations) get launched and lose focus, forgetting to evaluate what kind of impact and both intended and unintended consequences come with that investment

As we celebrate the investment in new ventures, track the launch of new start-ups, and document the number of people working in the [social] sector, we can mistake that for impact. To be sure, having people working in a sector is a sign of jobs, but whether those jobs are temporary, suitably paid, satisfying, or sustainable are the kinds of questions that evaluators might ask and that remain largely unanswered.

The principal ROI of [social] is social benefit. That benefit comes in the form of improved products and services, better economic conditions for more people, and, in different measures, a healthier planet and greater wellbeing for the people on it. These aren't theoretical benefits; they need to be real ones, and the only way we will know if we achieve anything approximating this is through evaluation.

Crashing, but not wrecking the party

Evaluation needs to crash the party, but it need not kill the mood. A latent fear among many in [social] is likely that, should we invest so much energy, enthusiasm, money and talent in [social] and find that it doesn't yield the benefits we expect or need, a fickle populace of investors, governments and the public will abandon the sector. While there will always be trend-hunters who pursue the latest 'flavour of the month', [social] is not that. It is here to stay.

The focus on evaluation, however, will determine the speed, scope and shape of its development. Without showing real impact and learning from those initiatives that produce positive benefit (or do not), we will substantially limit [social], and the celebratory parties that we now have at the launch of a new initiative, a featured post on a mainstream site, or a new book will become fewer and farther between.

Photo credit: Moonrise by James Niland used under Creative Commons licence via Flickr. Thanks for sharing your art, James.

Categories: education & learning, evaluation

Reflections said, not done


Reflective practice is the cornerstone of developmental evaluation and organizational learning and yet is one of the least discussed (and poorly supported) aspects of these processes. It’s time to reflect a little on reflection itself. 

The term reflective practice was popularized by the work of Donald Schön in his book The Reflective Practitioner, although the concept of reflecting while doing things was discussed by Aristotle and serves as the foundation for what we now call praxis. Nonetheless, what made reflective practice as a formal term different from others was that it spoke to a deliberative process of reflection that was designed to meet specific developmental goals and capacities. While many professionals had been doing this, Schön created a framework for understanding how it could be done — and why it was important — in professional settings as a matter of enhancing learning and improving innovation potential.

From individual learners to learning organizations

As the book title suggests, the focus of Schön's work was on the practitioner her/himself. By cultivating a focus, a mindset and a skill set in looking at practice-in-context, Schön (and those who have built on his work) suggest that professionals can enhance their capacity to perform and learn as they go, through a series of habits and regular practices of critically inquiring about their work as they work.

This approach has many similarities to mindfulness in action or organizational mindfulness, contemplative inquiry, and engaged scholarship, among others. But, aside from organizational mindfulness, these approaches are designed principally to support individuals learning about and reflecting on their work.

There's little question that paying attention to and reflecting on what is being done has value for someone seeking to improve the quality of their work and its potential impact, but it's not enough, at least in practice (even if it is in theory). The evidence can be found in the astonishing absence of examples of sustained change initiatives supported by reflective practice and, more particularly, developmental evaluation, which is an approach for bringing reflection to bear on the way we evolve programs over time. This is not a criticism of reflective practice or developmental evaluation per se, but of the problems that many have in implementing them in a sustained manner. From professional experience, this comes down largely to what is required to actually do reflective practice of any kind in practice.

For developmental evaluation it means connecting what it can do to what people actually will do.

Same theories, different practices

The flaw in all of this is that the implementation of developmental evaluation is often predicated on implicit assumptions about learning, how it's done, who's responsible for it, and what it's intended to achieve. A review of the founding works of developmental evaluation (DE) by Patton and others points to practices and questions that can support DE work.

While enormously useful, they make the (reasonable) assumption that organizations are in a position to adopt them. What is worth considering for any organization looking to build DE into their work is: are we really ready to reflect in action? Do we do it now? And if we don’t, what makes us think we’ll do it in the future? 

In my practice, I continually meet organizations that want to use DE, be innovative, become adaptive, and learn more deeply from what they do, and yet when we speak about what they currently do to support this in everyday practice, few examples are presented. The reason is largely time: the priorities and organization of our practice in relation to it. Time — and the felt sense of scarcity many of us have around it — is one of the substantive limits, and reflective practice requires time.

The other is space. Are there accessible places for reflection on issues that matter? These twin constraints have been touched on in other posts, but they speak to the limits of DE in affecting change without the ability to build reflection into practice. Thus, the theory of DE is sound, but the practice of it is tied to the ability to use time and space to support the necessary reflection and sensemaking to make it work.

The architecture of reflection

If we are to derive the benefits from DE and innovate more fully, reflective practice is critical, for without one we can't have the other. This means designing reflective space and time into our organizations ahead of undertaking a developmental evaluation. This invites questions about where and how we work in space (physical and virtual) and how we spend our time.

To architect reflection into our practice, consider some questions or areas of focus:

  • Are there spaces for quiet contemplation free of stimulation available to you? This might mean a screen-free environment, a quiet space and one that is away from traffic.
  • Is there organizational support for ‘unplugging’ in daily practice? This would mean turning off email, phones and other electronic devices’ notifications to support focused attention on something. And, within that space, are there encouragements to use that quiet time to focus on looking at and thinking about evaluation data and reflecting on it?
  • Are there spaces and times for these practices to be shared and done collectively or in small groups?
  • If we are not granting ourselves time to do this, what are we spending the time doing and does it add more value than what we can gain from learning?
  • Sometimes off-site trips and scheduled days away from an office are helpful by giving people other spaces to reflect and work.
  • Can you (will you?) build committed times to reflect-in-action — structurally — into scheduled work times and flows, and ensure that this is done at regular intervals rather than sporadically?
  • If our current spaces are insufficient to support reflection, are we prepared to redesign them or even move?

These are starting questions and hard ones to ask, but they can mean the difference between reflection in theory and reflection in practice, which is the difference between innovating, adapting and thriving in practice, not just in theory or aspiration.


Categories: evaluation, social innovation

Benchmarking change

The quest for excellence within social programs relies on knowing what excellence means and how programs compare against others. Benchmarks can enable us to compare one program to another if we have quality comparators and an evaluation culture to generate them – something we currently lack. 


A benchmark was originally a mark cut by surveyors to hold a levelling rod in place, providing consistency in measuring the elevation of a particular place so it could be compared over time. In other words, a benchmark is a fixed point of measurement that allows comparisons over time.

The term benchmark is often used in evaluation as a means of providing comparison between programs or practices, often taking one well-understood and high-performing program as the 'benchmark' against which others are compared. Benchmarks in evaluation can be the standard to which other measures are compared.

In a 2010 article for the World Bank (PDF), evaluators Azevedo, Newman and Pungilupp articulate the value of benchmarking and provide examples of how it contributes to the understanding of both absolute and relative performance of development programs. Writing about the need for benchmarking, the authors conclude:

In most benchmarking exercises, it is useful to consider not only the nature of the changes in the indicator of interest but also the level. Focusing only on the relative performance in the change can cause the researcher to be overly optimistic. A district, state or country may be advancing comparatively rapidly, but it may have very far to go. Focusing only on the relative performance on the level can cause the researcher to be overly pessimistic, as it may not be sufficiently sensitive to pick up recent changes in efforts to improve.

Compared to what?

One of the challenges with benchmarking exercises is finding a comparator. This is easier for programs operating with relatively simple program systems and structures and less so for more complex ones. For example, in the service sector wait times are a common benchmark. In the province of Ontario in Canada, the government provides regularly updated wait times for Emergency Room visits via a website. In the case of healthcare, benchmarks are used in multiple ways. There is a target that is used as the benchmark, although, depending on the condition, this target might be based on a combination of aspiration and evidence, as well as what the health system believes is reasonable, what the public demands (or expects) and what the hospital desires.

Part of the problem with benchmarks set in this manner is that they are easy to manipulate and thus raise the question of whether they are true benchmarks in the first place or just goals.

If I want to set a personal benchmark for good dietary behaviour of eating three meals a day, I might find myself performing exceptionally well, as I've managed to do this nearly every day within the last three months. If the benchmark is consuming 2790 calories, as is recommended for someone of my age, sex, activity levels, fitness goals and such, that's different. Add on that, within that range of calories, the aim is to have about 50% of those calories come from carbohydrates, 30% from fat and 20% from protein, and we have a very different set of issues to consider when contemplating how performance relates to a standard.
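
To make the arithmetic behind that benchmark concrete, here is a minimal sketch in Python (illustrative only): it takes the 2790-calorie figure and the 50/30/20 split mentioned above and converts them into daily gram targets, using the standard conversion factors of roughly 4 kcal per gram for carbohydrate and protein and 9 kcal per gram for fat. The names and structure are simply for illustration.

# Minimal sketch: turning a calorie benchmark and a macronutrient split
# into daily gram targets. The 2790 kcal figure and 50/30/20 split come
# from the example above; 4/9/4 kcal per gram are standard conversion factors.

DAILY_CALORIES = 2790
SPLIT = {"carbohydrate": 0.50, "fat": 0.30, "protein": 0.20}
KCAL_PER_GRAM = {"carbohydrate": 4, "fat": 9, "protein": 4}

def gram_targets(calories: float) -> dict:
    """Convert a calorie benchmark into per-macronutrient gram targets."""
    return {
        nutrient: round(calories * share / KCAL_PER_GRAM[nutrient])
        for nutrient, share in SPLIT.items()
    }

if __name__ == "__main__":
    for nutrient, grams in gram_targets(DAILY_CALORIES).items():
        print(f"{nutrient}: ~{grams} g/day")
    # carbohydrate: ~349 g/day, fat: ~93 g/day, protein: ~140 g/day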

One reason we can benchmark diet targets is that the data set we have to set those benchmarks is enormous. Tools like MyFitnessPal and others use these benchmarks to provide personal data to their users for fitness tracking, benchmarks gleaned from tens of thousands of users and hundreds of scientific articles and reports on diet and exercise from the past 50 years. From this it's possible to generate reasonably appropriate recommendations for a specific age group and sex.

These benchmarks are also possible because we have internationally standardized the term calorie. We have further internationally recognized, but slightly less precise, measures for what it means to be a certain age and sex. Activity level gets a little more fuzzy, but we still have benchmarks for it. As the activities that define fitness and diet goals get clustered together, we start to realize that the result is a jumble of highly precise and somewhat loosely defined benchmarks.

The bigger challenge comes when we don’t have a scientifically validated standard or even a clear sense of what is being compared and that is what we have with social innovation.

Creating an evaluation culture within social innovation

Social innovation has a variety of definitions; however, the common thread among them is that it's about a social program aimed at addressing social problems using ideas, tools, policies and practices that differ from the status quo. Given the complexity of the environments that many social programs are operating in, it's safe to assume that social innovation** is happening all over the world because the contexts are so varied. The irony is that many in this sector are not learning from one another as much as they could, further complicating any initiative to build benchmarks for social programs.

Some groups like the Social Innovation Exchange (SIX) are trying to change that. However, they and others like them face an uphill battle. Part of the reason is that social innovation has not established a culture of evaluation. There remains little in the way of common language, frameworks, or spaces to share and distribute knowledge about programs — both in description and evaluation — in a manner that is transparent and accessible to others.

Competition for funding, the desire to paint programs in a positive light, lack of expertise, insufficient resources for dissemination and translation, the absence of a dedicated space for sharing results, and distrust of or isolation from academia in certain sectors are some of the reasons that might contribute to this. For example, the Stanford Social Innovation Review is among the few venues dedicated to scholarship in social innovation aimed at a wide audience. It's also a venue focused largely on international development and what I might call 'big' social innovation: the kind of work that attracts large philanthropic resources. There are lots of other types of social innovation, and they don't all fit into the model that SSIR promotes.

From my experience, many small organizations or initiatives struggle to fund evaluation efforts sufficiently, let alone the dissemination of the work once it's finished. Without good quality evaluations and the means to share their results — whether or not they cast a program in a positive light — it's difficult to build a culture where those in the sector can learn from one another. Without a culture of evaluation, we also don't get the volume of data and access to comparators — appropriate comparators, not just the only things we can find — needed to develop true, useful benchmarks.

Culture’s feast on strategy

Building on the adage attributed to Peter Drucker that culture eats strategy for breakfast (or lunch), it might be time that we use that feasting to generate some energy for change. If the strategy is to be more evidence-based, to learn more about what is happening in the social sector, and to compare across programs to aid that learning, there needs to be a culture shift.

This requires some acknowledgement that evaluation, a disciplined means of providing structured feedback and monitoring of programs, is not something adjunct to social innovation, but a key part of it. This is not just in the sense that evaluation provides some of the raw materials (data) to make informed choices that can shape strategy, but that it is as much a part of the raw material for social change as enthusiasm, creativity, focus, and dissatisfaction with the status quo on any particular condition.

We are seeing a culture of shared ownership and collective impact forming; now it's time to take that further and shape a culture of evaluation that builds on it so we can truly start sharing, building capacity and developing the real benchmarks to show how well social innovation is performing. In doing so, we make social innovation more respectable, more transparent, more comparable and more impactful.

Only by knowing what we are doing and have done can we really sense just how far we can go.

** For this article, I’m using the term social innovation broadly, which might encompass many types of social service programs, government or policy initiatives, and social entrepreneurship ventures that might not always be considered social innovation.

Photo credit: Redwood Benchmark by Hitchster used under Creative Commons License from Flickr.

About the author: Cameron Norman is the Principal of Cense Research + Design and works at assisting organizations and networks in supporting learning and innovation in human services through design, program evaluation, behavioural science and system thinking. He is based in Toronto, Canada.

Categories: evaluation, social innovation

E-Valuing Design and Innovation


Design and innovation are often regarded as good things (when done well), even if a pause might find little to explain what those things might be. Without a sense of what design produces, what innovation looks like in practice, and an understanding of the journey to the destination, are we delivering false praise and hope, and failing to deliver real, sustainable change?

What is the value of design?

If we are claiming to produce new and valued things (innovation) then we need to be able to show what is new, how (and whether) it’s valued (and by whom), and potentially what prompted that valuation in the first place. If we acknowledge that design is the process of consciously, intentionally creating those valued things — the discipline of innovation — then understanding its value is paramount.

Given the prominence of design and innovation in the business and social sector landscape these days, one might guess that we have a pretty good sense of what the value of design is for so many to be interested in the topic. If you did guess that, you'd have guessed incorrectly.

‘Valuating’ design, evaluating innovation

On the topic of program design, current president of the American Evaluation Association, John Gargani, writes:

Program design is both a verb and a noun.

It is the process that organizations use to develop a program.  Ideally, the process is collaborative, iterative, and tentative—stakeholders work together to repeat, review, and refine a program until they believe it will consistently achieve its purpose.

A program design is also the plan of action that results from that process.  Ideally, the plan is developed to the point that others can implement the program in the same way and consistently achieve its purpose.

One of the challenges with many social programs is that it isn't clear what the purpose of the program is in the first place. Or rather, the purpose and the activities might not be well aligned. One example is the rise of 'kindness meters', the repurposing of old coin parking meters to collect money for certain causes. I love the idea of offering a pro-social means of getting small change out of my pocket and having it go to a good cause, yet some have taken the concept further and suggested it could be a way to redirect money to the homeless and thus reduce the number of panhandlers on the street. A recent article in Maclean's Magazine profiled this strategy, including its critics.

The biggest criticism of all is that there is a very weak theory of change to suggest that meters and their funds will get people out of homelessness. Further, there is much we don't know about this strategy, such as: 1) how was this developed? 2) was it prototyped, and where? 3) what iterations were performed — and is this just the first? 4) whose needs was this designed to address? and 5) what needs to happen next with this design? This is an innovative idea to be sure, but the question is whether it's a beneficial one or not.

We don't know, and what evaluation can do is provide the answers and help ensure that an innovative idea like this is supported in its development, so we can determine whether it ought to stay, go, or be transformed, and what we can learn from the entire process. Design without evaluation produces products; design with evaluation produces change.


A bigger perspective on value creation

The process of placing or determining value* of a program is about looking at three things:

1. The plan (the program design);

2. The implementation of that plan (the realization of the design on paper, in prototype form and in the world);

3. The products resulting from the implementation of the plan (the lessons learned throughout the process; the products generated from the implementation of the plan; and the impact of the plan on matters of concern, both intended and otherwise).

Prominent areas of design such as industrial, interior, fashion, or software design are principally focused on an end product. Most people aren’t concerned about the various lamps their interior designer didn’t choose in planning their new living space if they are satisfied with the one they did.

A look at the process of design — the problem finding, framing and solving aspects that comprise the heart of design practice — finds that the end product is actually the last of a long line of sub-products and that, if the designers are paying attention and reflecting on their work, they are learning a great deal along the way. That learning and those sub-products matter greatly for social programs innovating and operating in human systems. This may be the real impact of the programs themselves, not the products.

One reason this is important is that many of our program designs don’t actually work as expected, at least not at first. Indeed, a look at innovation in general finds that about 70% of the attempts at institutional-level innovation fail to produce the desired outcome. So we ought to expect that things won’t work the first time. Yet, many funders and leaders place extraordinary burdens on project teams to get it right the first time. Without an evaluative framework to operate from, and the means to make sense of the data an evaluation produces, not only will these programs fail to achieve desired outcomes, but they will fail to learn and lose the very essence of what it means to (socially) innovate. It is in these lessons and the integration of them into programs that much of the value of a program is seen.

Designing opportunities to learn more

Design has a glorious track record of accountability for its products in terms of satisfying its clients’ desires, but not its process. Some might think that’s a good thing, but in the area of innovation that can be problematic, particularly where there is a need to draw on failure — unsuccessful designs — as part of the process.

In truly sustainable innovation, design and evaluation are intertwined. Creative development of a product or service requires evaluation to determine whether that product or service does what it says it does. This is of particular importance in contexts where the product or service may not have a clear objective or may have multiple possible objectives. Many social programs are true experiments to see what might happen, undertaken as an alternative to doing nothing. The 'kindness meters' might be such a program.

Further, there is an ethical obligation to look at the outcomes of a program lest it create more problems than it solves or simply exacerbate existing ones.

Evaluation without design can result in feedback that isn't appropriate, isn't integrated into future developments or iterations, or is decontextualized. Evaluation also ensures that the work that goes into a design is captured and understood in context — irrespective of whether the resulting product was a true 'innovation'. Another reason is that, particularly in social realms, the resulting product or service is not an 'either/or' proposition. There may be many elements of a 'failed design' that can be useful and incorporated into the final successful product, yet if they are viewed as a dichotomous 'success' or 'failure', we risk losing much useful knowledge.

Further, great discovery is predicated on incremental shifts in thinking, developed in a non-linear fashion. This means that it’s fundamentally problematic to ascribe a value of ‘success’ or ‘failure’ on something from the outset. In social settings where ideas are integrated, interpreted and reworked the moment they are introduced, the true impact of an innovation may take a longer view to determine and, even then, only partly.

Much of this depends on what the purpose of innovation is. Is it the journey or is it the destination? In social innovation, it is fundamentally both. Indeed, it is also predicated on a level of praxis — knowing and doing — that is what shapes the ‘success’ in a social innovation.

When design and evaluation are excluded from each other, both are lesser for it. This year's American Evaluation Association conference is focused boldly on the matter of design. While much of the conference will be focused on program design, the emphasis is still on the relationship between what we create and the way we assess the value of that creation. The conference will provide perhaps the largest forum yet for discussing the value of evaluation for design, and that, in itself, provides much value on its own.

*Evaluation is about determining the value, merit and worth of a program. I’ve only focused on the value aspects of this triad, although each aspect deserves consideration when assessing design.

Image credit: author

Categories: business, design thinking, evaluation, innovation

Designing for the horizon


'New' is rarely sitting directly in front of us; it is on the horizon, something we need to go toward or that is coming towards us, waiting to be received. In both cases this innovation (doing something new for benefit) requires different action rather than repeated action with new expectations.

I've spent time in different employment settings where my full-time job was to work on innovation, whether through research, teaching or some kind of service contribution, yet I found the opportunities to truly innovate relatively rare. The reason is that these institutions were not designed for innovation, at least in the current sense of it. They were well established and drew upon the practices of the past to shape the future, rather than shaping the future by design. On many occasions, even when there was a chance to build something new from the ground up — a unit, centre, department, division, school — the choice was to replicate the old and hope for something new.

This isn’t how innovation happens. One of those who understood this better than most is Peter Drucker. A simple search through some of the myriad quotes attributed to him will find wisdom pertaining to commitment, work ethic, and management that is unparalleled. Included in that wisdom is the simple phrase:

If you want something new, you have to stop doing something old

Or, as the quote often attributed to Henry Ford suggests:

Do what you’ve always done and you will get what you’ve always got

Design: An intrapreneurial imperative

In each case throughout my career I've chosen to leave and pursue opportunities that are more nimble and allow me to really innovate with people on a scale appropriate to the challenge or task. Yet, this choice to be nimble often comes at the cost of scale, which is why I work as a consultant to support larger organizations in changing, bringing the agility of innovation to those institutions not well set up for it (and helping them to set up for it).

There are many situations where outside support through someone like a consultant is not only wise, but maybe the only option for an organization needing fresh insight and expertise. Yet, there are many situations where this is not the case. In his highly readable book The Designful Company, Marty Neumeier addresses the need for creating a culture of non-stop innovation and ways to go about it.

At the core of this approach is design. As he states:

If you want to innovate, you gotta design


Design is not about making things look pretty; it's about making things stand out, differentiating them from others in the marketplace**. (Good) Design is what makes these differentiated products worthy of consideration or adoption because they meet a need, satisfy a desire or fill a gap somewhere. As Neumeier adds:

Design contains the skills to identify possible futures, invent exciting products, build bridges to customers, crack wicked problems, and more.

When posed this way, one is left asking: Why is everyone not trained as a designer? Or put another way: why aren’t most organizations talking about design? 

Being mindful of the new

This brings us back to Peter Drucker and another pearl of wisdom gained from observing the modern organization and the habits that take place within it:

Follow effective action with quiet reflection. From the quiet reflection will come even more effective action.

This was certainly not part of the institutional culture of the organizations I was a part of, and it's not part of many of the organizations I've worked with. The rush to do, to take action, is rarely complemented with reflection because reflection is seen as inaction. While I might have created habits of reflective practice in my work as an individual, that is not sufficient to create change in an organization without some form of collective, or at least shared, reflection.

To test this out, ask yourself the following questions of the workplace you are a part of:

  • Do you hold regular, timely gatherings for workers to share ideas, discuss challenges and explore possibilities without an explicit outcome attached to the agenda?
  • Is reflective practice part of the job requirements of individuals and teams where there is an expectation that it is done and there are performance review activities attached to such activity? Or is this a ‘nice to have’ or ‘if time permits’ kind of activity?
  • Are members of an organization provided time and resources to deliver on any expectations of reflective practice both individually or collectively?
  • Are other agenda items — administrative matters, corporate affairs, news, or 'emergencies' — regularly influencing and intruding upon the agendas of gatherings such as strategic planning meetings or reflection sessions?
  • Is evaluation a part of the process of reflection? Do you make time to review evaluation findings and reflect on their meaning for the organization?
  • Do members of the organization — from top to bottom — know what reflection is or means in the context of their work?
  • Does the word mindfulness come into conversations in the organization at any time in an official capacity?

Designing mindfulness

If innovation means design and effective action requires reflection, it can be surmised that designing mindfulness into the organization can yield considerable benefits. Certain times of year, whether New Year's Day (like today's posting date), Thanksgiving (in Canada & the US), birthdays, anniversaries, religious holidays or even the end of the fiscal year or quarter, can prompt some reflection.

Designing mindfulness into an organization requires taking the spirit that comes from these events and making it regular. This means protecting time to be mindful (just as we usually take time off for holidays), including regular practices in the workflow much like we do with other activities, and including data (evaluation evidence) to support and potentially guide some of that reflection. Sensemaking time to bring that together in a group is also key, as is the potential to use design as a tool for foresight and envisioning new futures.

To this last point I conclude with another quote attributed to Professor Drucker:

The best way to predict your future is to create it

As you begin this new year, new quarter, new day consider how you can design your future and create the space in your organization — big or small — to reflect and act more mindfully and effectively.


** This could be a marketplace of products, services, ideas, attention or commitment.

Photo credit: Hovering on the Horizon by the NASA Earth Observatory used under Creative Commons Licence via Flickr

Categories: design thinking, evaluation, social innovation

The Ecology of Innovation: Part 1 – Ideas

Innovation Ecology

There is a tendency when looking at innovation to focus on the end result of a process of creation rather than seeing it as one node in a larger body of activity, yet when we expand our frame of reference to see these connections, innovation starts to look much more like an ecosystem than a simple outcome. This first post in a series examines innovation ecology from the place of ideas.

Ideas are the kindling that fuels innovation. Without good ideas, bold ideas, and ideas that have the potential to move our thinking, actions and products further we are left with the status quo: what is, rather than what might be.

What is often missed in the discussion of ideas is the backstory and the connections between thoughts that lead to the ideas that may eventually become an innovation*. This inattention to (or unawareness of) the backstory might contribute to why many people think they are uncreative or believe they have low innovation potential. Drawing greater attention to these connections and framing them as part of an ecosystem has the potential not only to free people from the tyranny of having to create the best ideas, but also to expose the wealth of knowledge generated in pursuit of those ideas.

Drawing Connections

Connections is the title of a book by science historian James Burke that draws on his successful British science documentary series, which first aired in the 1970s and was later recreated in the mid-1990s. The premise of the book and series is to show how ideas link to and build on one another to yield the scientific insights that we see. By viewing ideas in a collective realm, we see how they can and do connect, weaving together a tapestry of knowledge that is far more than the sum of the parts within it.

Too often we see the celebration of innovation as focused on the parts – the products. This is the iPhone, the One World Futbol, the waterless toilet, the intermittent windshield wiper or a process like the Lean system for quality improvement or the use of checklists in medical care. These are the ideas that survive.

The challenge with this perspective on ideas is that it appears to be all-or-nothing: either the idea is good and works or it is not and doesn’t work.

This latter means of thinking imposes judgement on the end result, yet is strangely at odds with innovation itself. It is akin to judging flour, salt, sugar or butter to be bad because a baker’s cake didn’t turn out. Ideas are the building blocks – the DNA if you will — of innovations. But, like DNA (and RNA), it is only in their ability to connect, form and multiply that we really see innovation yield true benefit at a system level. Just like the bakers’ ingredient list, ideas can serve different purposes to different effects in different contexts and the key is knowing (or uncovering) what that looks like and learning what effect it has.

From ideas to ecologies

An alternative to the idea-as-product perspective is to view ideas as part of a wider system. This takes James Burke's connections to a new level and views ideas as part of a symbiotic, interactive, dynamic set of relations. Just like the above example of DNA, there is a lot of perceived 'junk' in the collection that may have no obvious benefit, yet by its existence it enables the non-junk to reveal and produce its value.

This biological analogy can extend further to the realm of systems. The term ecosystem embodies this thinking:

ecosystem |ˈekōˌsistəm, ˈēkō-| {noun}

Ecology

a biological community of interacting organisms and their physical environment.

• (in general use) a complex network or interconnected system: Silicon Valley’s entrepreneurial ecosystem | the entire ecosystem of movie and video production will eventually go digital.

Within this perspective on biological systems, is the concept of ecology:

ecology |iˈkäləjē| {noun}

1 the branch of biology that deals with the relations of organisms to one another and to their physical surroundings.

2 (also Ecology) the political movement that seeks to protect the environment, especially from pollution.

What is interesting about the definitions above, drawn from the Oxford English Dictionary, is that they focus on biology, the discipline where it first was explored and studied. The definition of biology used in the Wikipedia entry on the topic states:

Biology is a natural science concerned with the study of life and living organisms, including their structure, function, growth, evolution, distribution, and taxonomy.

Biologists do not look at ecosystems, decide which animals, plants and environments are good or bad, and proceed to discount them; rather, they look at what each brings to the whole, their role and their relationships. Biology is not without evaluative elements — judgement is still applied to these 'parts' of the system, as certain species, environments and contexts are more or less beneficial than others for certain goals or actors/agents in the system — but that judgement is always contextualized.

Designing for better idea ecologies

Contextual learning is part of sustainable innovation. Unlike natural systems, which function according to hidden rules (“the laws of nature”) that govern ecosystems, human systems are created and intentional; designed. Many of these systems are designed poorly or with little thought to their implications, but because they are designed we can re-design them. Our political systems, social systems, living environments and workplaces are all examples of human systems. Even families are designed systems given the social roles, hierarchies, expectations and membership ‘rules’ that they each follow.

If humans create designed systems, we can do the same for the innovation systems we form. By viewing ideas as part of an innovation ecosystem, we create an opportunity to do more with what we create. Rather than lead a social Darwinian push towards the 'best' ideas, an idea ecosystem creates the space for ideas to be repurposed, built upon and revised over time. Thus, our brainstorming doesn't have to end with whatever we come up with in the session (and may hate anyway); rather, it is ongoing.

This commitment to ongoing ideation, sensemaking and innovation (and the knowledge translation, exchange and integration that go with them) is what distinguishes a true innovation ecosystem from a good idea done well. In future posts, we'll look at this concept of the ecosystem in more detail.

Brainstorming Folly

Tips and Tricks:

Consider recording your ideas and revisiting them over time. Scheduling a brief, regular moment to revisit your notebooks and content keeps ideas alive. Consider the effort that goes into brainstorming and bringing people together as an investment that can yield returns over time, not just in a single moment. Shared Evernote notebooks, Google Docs, building (searchable) libraries of artifacts, or regular revisiting of project memos can be a simple, low-cost and high-yield way to draw on your collective intellectual investment over time, as the small sketch below illustrates.
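
By way of illustration only, here is a minimal sketch in Python of the 'searchable library of artifacts' idea: it scans a folder of Markdown idea notes and returns the lines matching a keyword. The folder name ("idea-notes") and the search term are hypothetical, and any note-taking tool with decent search would serve the same purpose.

from pathlib import Path

# Minimal sketch of a searchable idea library: a folder of Markdown notes
# that can be revisited by keyword over time. The folder name "idea-notes"
# and the keyword "benchmark" are hypothetical examples.

def find_ideas(notes_dir: str, keyword: str) -> list[tuple[str, str]]:
    """Return (filename, matching line) pairs for notes containing the keyword."""
    matches = []
    for note in sorted(Path(notes_dir).glob("**/*.md")):
        for line in note.read_text(encoding="utf-8").splitlines():
            if keyword.lower() in line.lower():
                matches.append((note.name, line.strip()))
    return matches

if __name__ == "__main__":
    for filename, line in find_ideas("idea-notes", "benchmark"):
        print(f"{filename}: {line}")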

* An innovation for this purpose is a new idea realized for benefit.

Image Credits: Top: Evolving ecology of the book (mindmap) by Artefatica used under Creative Commons License from Flickr.

Bottom: Brainstorming Ideas from Tom Fishburne (Marketoonist) used under commercial license.

Categories: complexity, education & learning, evaluation, systems thinking

Developmental Evaluation: Questions and Qualities

Same thing, different colour or different thing?

Developmental evaluation, a form of real-time evaluation focused on innovation and complexity, is gaining interest and attention with funders, program developers, and social innovators. Yet its popularity is revealing fundamental misunderstandings and misuse of the term that, if left unquestioned, may threaten the advancement of this important approach as a tool to support innovation and resilience.

If you are operating in the social service, health promotion or innovation space it is quite possible that you’ve been hearing about developmental evaluation, an emerging approach to evaluation that is suited for programs operating in highly complex, dynamic conditions.

Developmental evaluation (DE) is an exciting advancement in evaluative and program design thinking because it links those two activities together and creates an ongoing conversation about innovation in real time to facilitate strategic learning about what programs do and how they can evolve wisely. Because it is rooted in traditional program evaluation theory and methods as well as complexity science, it takes a realist approach to evaluation, making it fit the thorny, complex, real-world situations that many programs find themselves inhabiting.

I ought to be excited at seeing DE brought up so often, yet I am often not. Why?

Building a better brand for developmental evaluation?

Alas, with rare exception, when I hear someone speak about the developmental evaluation they are involved in, I fail to hear any of the indicator terms one would expect from such an evaluation. These include terms like:

  • Program adaptation
  • Complexity concepts like emergence, attractors, self-organization, and boundaries
  • Strategic learning
  • Surprise!
  • Co-development and design
  • Dialogue
  • System dynamics
  • Flexibility

DE is following the well-worn path laid by terms like systems thinking, which is getting less useful every day as it starts being referred to as any mode of thought that focuses on the bigger context of a program (the system (?) — whatever that is, it's never elaborated on), even if there is none of the structure, discipline, method or focus one would expect from true systems thinking. In other words, it's thinking about a system without the effort of real systems thinking. Still, people see themselves as systems thinkers as a result.

I hear the term DE being used more frequently in this cavalier manner, which I suspect reflects aspiration rather than reality.

This aspiration is likely about wanting to be seen (by themselves and others) as innovative, adaptive, and participative, and as being a true learning organization. DE has the potential to support all of this, but accomplishing these things requires an enormous amount of commitment. It is not for the faint of heart, the rigid and inflexible, the traditionalists, or those who have little tolerance for risk.

Doing DE requires that you set up a system for collecting, sharing, sensemaking, and designing-with data. It means being willing to — and competent enough to know how to — adapt your evaluation design and your programs themselves in measured, appropriate ways.

DE is about discipline, not precision. Too often, I see quests to get a beautiful, elegant design to fit the 'social messes' that represent the programs under evaluation, only to do what Russell Ackoff calls "the wrong things, righter" because a standard, rigid method is applied to a slippery, complex problem.

Maybe we need to build a better brand for DE.

Much ado about something

Why does this fuss about the way people use the term DE matter? Is this not some academic rant based on a sense of ‘preciousness’ of a term? Who cares what we call it?

This matters because the programs that use and can benefit from DE matter. If it's just gathering some loose data, slapping it together, calling it an evaluation and knowing that nothing will ever be done with it, then maybe it's OK (actually, that's not OK either — but let's pretend here for the sake of the point). When real program decisions are made, jobs are kept or lost, communities are strengthened or weakened, and the energy and creative talents of those involved are put to the test because of evaluation and its products, the details matter a great deal.

If DE promises a means to critically, mindfully and thoroughly support learning and innovation, then it needs to keep that promise. But that promise can only be kept if what we call DE is not something else.

That ‘something else’ is often a form of utilization-focused evaluation, or maybe participatory evaluation or it might simply be a traditional evaluation model dressed up with words like ‘complexity’ and ‘innovation’ that have no real meaning. (When was the last time you heard someone openly question what someone meant by those terms?)

We take such terms as given and for granted, and make enormous assumptions about what they mean that are not always supported. There is nothing wrong with any of these methods if they are appropriate, but too often I see mismatches between the problem and the evaluative thinking and practice tools used to address it. DE is new, sexy and a sure sign of innovation to some, which is why it is often picked.

Yet, it's like saying "I need a 3-D printer" instead of a wrench when you're looking to fix a pipe on your sink, because the printer is the latest tool innovation and wrenches are "last year's" tool. It makes no sense. Yet, it's done all the time.

Qualities and qualifications

There is something alluring about the mysterious. Innovation, design and systems thinking all have elements of mystery to them, which allows for obfuscation, confusion and well-intentioned errors in judgement depending on who and what is being discussed in relation to those terms.

I've started seeing recent university graduates claiming to be developmental evaluators who have almost no concept of complexity or service design and have completed just a single course in program evaluation. I'm seeing traditional organizations recruit and hire for developmental evaluation without making any adjustments to their expectations, modes of operating, or timelines from the status quo, and still expecting results that could only come from DE. It's as I've written before, and as Winston Churchill once said:

I am always ready to learn, but I don’t always like being taught

Many programs are not even primed to learn, let alone being taught.

So what should someone look for in DE and those who practice it? What are some questions those seeking DE support should ask of themselves?

Of evaluators

  • What familiarity and experience do you have with complexity theory and science? What is your understanding of these domains?
  • What experience do you have with service design and design thinking?
  • What kind of evaluation methods and approaches have you used in the past? Are you comfortable with mixed-methods?
  • What is your understanding of the concepts of knowledge integration and sensemaking? And how have you supported others in using these concepts in your career?
  • What is your education, experience and professional qualifications in evaluation?
  • Do you have skills in group facilitation?
  • How open and willing are you to support learning, adapt, and change your own practice and evaluation designs to suit emerging patterns from the DE?

Of programs

  • Are you (we) prepared to alter our normal course of operations in support of the learning process that might emerge from a DE?
  • How comfortable are we with uncertainty? Unpredictability? Risk?
  • Are the timelines and boundaries we place on the DE flexible and negotiable?
  • What kind of experience do we have with truly learning, and are we prepared to create a culture around the evaluation that is open to learning? (This means tolerance of ambiguity, failure, surprise, and new perspectives.)
  • Do we have practices in place that allow us to be mindful and aware of what is going on regularly (as opposed to every 6-months to a year)?
  • How willing are we to work with the developmental evaluator to learn, adapt and design our programs?
  • Are our funders/partners/sponsors/stakeholders willing to come with us on our journey?

Of both evaluators and program stakeholders

  • Are we willing to be open about our fears, concerns, ideas and aspirations with ourselves and each other?
  • Are we willing to work through data that is potentially ambiguous, contradictory, confusing, time-sensitive, context-sensitive and incomplete in capturing the entire system?
  • Are we willing/able to bring others into the journey as we go?

DE is not a magic bullet, but it can be a very powerful ally to programs that are operating in domains of high complexity and require innovation to adapt, thrive and build resilience. It is an important job and a very formidable challenge with great potential benefits to those willing to dive into it competently. It is for these reasons that it is worth doing and doing well.

Getting there means taking DE and the demands it puts on us seriously, recognizing the requirements for all involved, and being clear in our language lest we let the not-good-enough become the enemy of the great.


Photo credit: Highline Chairs by the author