Tag: innovation

Tags: behaviour change, complexity, design thinking, evaluation

Innovation, Change and The Leopard

Innovation is a misunderstood and often misrepresented concept that can provoke fear, indifference, resentment, confusion, or irrational exuberance. To understand why we can’t ignore innovation, we need look no further than the sage advice of The Leopard, a story of change.

It’s been said that the only true constant is change. Funny that something so constant and pervasive — change — invokes such strong reactions from people. It’s partly why innovation can be such a contentious term. Whether we like it or not, the dynamics of change in our world are forcing us to recognize that innovation is not a luxury or merely a means to competitive advantage; it is increasingly about survival.

New Canadian research (PDF) looking at citizens’ attitudes toward innovation and their perceptions of it suggests there is much to be done to understand what survival and innovation mean for our collective wellbeing. Before getting to that, let’s first define innovation.

There are many definitions of innovation (see here, here, and here for some), but let’s keep it simple:

Innovation is doing something new to generate value

Innovation is effectively a means to create change. Design is the discipline and practice of how we create change intentionally. While change is often thought of as something that takes us from one state to another, it is also something that can help us preserve what we have when everything else is changing around us. To help understand this, let’s look at a lesson from The Leopard.

Change Lessons from The Leopard

One of my favourite quotes comes from the Italian novel The Leopard by Giuseppe Tomasi di Lampedusa, where one character (a young nephew speaking to his aristocratic uncle, who seeks to preserve the family’s status) says to another:

If we want things to stay as they are, things will have to change

The Leopard (translated from Italian)

I’ve written about this quote before, looking at the psychology of organizations and the folly of fads in innovation and design thinking. It’s about change, but really it’s about innovation and survival.

While individual humans are pretty resilient in the face of changing conditions, organizations are not so easily adaptive. Our family, friends, neighbours, and maybe governments will look after us if things get really bad (for a while, at least), but there are few looking after organizations.

For organizations — non-profit, profit-seeking, and governmental alike — innovation is the means to improve, to adapt, or to maintain the status quo. It’s no longer a luxury; it’s about survival. Just as a real leopard has spots as part of its form to survive, so too must we consider what kind of survival mechanisms we can build into our own forms. That means: design.

Survival, by Design

A recent survey of 2,000 Canadians by the Rideau Hall Foundation looked at Canadians’ attitudes toward innovation and whether Canada is creating a culture of innovation.


A culture of innovation is one where the general public has shared values and beliefs that innovation is essential for collective well-being

David Johnston, Governor General of Canada

There are many flaws with this study. There’s a reason I defined what I meant by innovation at the beginning of this post; it’s because there’s so much confusion surrounding the term, its meaning, and use. I suspect that same confusion entered this study. Nevertheless, there are some insights that are worth exploring that may transcend the context of the study (Canada).

One of these is the tendency among young people (ages 18–25) to view innovation as something more likely to be generated by individuals. There are also differences in perception between men and women about who is best suited for innovation. What is shared is the perception of innovation as being ‘out there’, which is part of our problem.

If we view innovation as something needed to survive — to change or to keep things as they are — then we need to shift our thinking away from innovation as something novel, technology-dependent, and fetishized, a perspective that dominates corporate discourse. That requires an intentional, skillful approach to designing for change (and survival). It means:

From surviving to thriving

The interconnection of social, technological, and environmental systems has created an unprecedented level of complexity for human beings. This complexity means our ability to learn from the past is muddled, just as our ability to see and predict what’s coming is limited. The feedback cycles that we need to make decisions — including the choice to remain still — are getting shorter as a result and require new, different, and better data than we could rely on before.

Survival is necessary, but not very inspiring. Innovation can invigorate an organization and provide renewed purpose for those working within it. By connecting what you do to what you want and to what is happening (now and in the near term) on a regular basis, and by viewing your programs and services more like gardens than mechanical devices, we have the chance to design with and for complexity rather than compete against it.

For a brilliant example of this metaphor of the garden in creative work and complexity, see Brian Eno’s talk with The Edge Foundation. Eno speaks about the need to get past the idea of seeing the entire whole and to enjoy the creative space within the boundaries you can see. It’s not simple reactivity; it’s a proactive, yet humble approach to designing things (in his case, music) that allows him to work with complexity, not against it. It allows him to thrive.

More on this later.

Working with complexity means designing your work for complexity. It means being like The Leopard (the book, but maybe the cat, too) and being willing to embrace change as a way of living: not just to survive, but to thrive.

Photos by Geran de Klerk on Unsplash, Adaivorukamuthan on Unsplash, and Alexandre St-Louis on Unsplash

Tags: evaluation, social innovation

Innovation Evaluation and the Burden of Proof

Innovating is about doing something new to produce value, and this introduces substantial challenges for strategy and evaluation. Without knowing what to expect, it’s hard to know what you have…or where you’re going.

Innovation is one of those terms that is used a lot without much attention to what it means in specific terms. This lack of specificity might be fine for casual conversation, but the risks (and benefits) of innovation are great and demand a level of detail and seriousness.

When you’re evaluating a new product or service, the risk of developing the wrong thing or leaving something out is substantial. False confidence in success could have detrimental effects on an expected return on investment. Lack of understanding of the effects of a product or service could lead to harm.

Evaluation and strategy in the context of innovation takes on new meaning because the implications are so great.

Scoping investments

Investment in innovation requires considerable tolerance for risk, which is why many countries provide incentives to support it. In the case of many community-serving organizations, innovation is often seen as an imperative given the complexity of many of our social issues.

Although Research & Development (R&D) is commonly associated with innovation, the research in this work often focuses on developing the product or service, not evaluating it. Evaluation focuses our attention on the product or service as it is used, even if that use is in an early, restricted setting. Evaluation provides insight into actual use characteristics and can be useful in supporting program design, but only if there is investment in it.

While it is difficult enough to balance all of the demands and costs associated with innovating, investing in evaluation early on can yield substantial benefits later, as has been highlighted elsewhere. However, it is precisely because many of the benefits come later in the development cycle that it is so easy to put off investing in evaluation early.

Present thinking, future investment

Innovation is principally about the future. However, as illustrated in other places, it is also about present value as a byproduct of the process of creating new goods, services, and realizing ideas. By thinking this way, program developers have the opportunity to create additional value now by integrating evaluation more fully into the present offering.

Yet it’s in the future where the biggest payoff comes. While you might see the glamour and glory that come with being an innovator, that kind of success is often based on a simple set of outcomes (e.g., sales, profit). Those kinds of metrics are easy to celebrate, but often disguise what the real value of something is.

Whether it’s in the form of social benefits, additional discoveries, or the role of your product or service as a catalyst for other change, a focus on the end product alone loses the story told along the way. By bringing evaluation into the process early, you gain the opportunity to tell that story differently and better, making for a more interesting and beneficial read. That is an innovation story worth writing.

Photo by Austin Distel on Unsplash

Tags: evaluation, innovation

The Mindset for Design-Driven Evaluation

What distinguishes design-driven evaluation from other types of utilization-focused evaluation or innovation development is that it views the evaluative act as part of a service offering. It’s not a shift in method, but in mindset.

Evaluation is the innovator’s secret advantage. Any sustained attempt to innovate is driven by good data and systems to make sense of that data. Some systems are better than others, and sometimes the data collected is not particularly great, but look at any organization that consistently develops new products and services that are useful and attractive and you’ll see some commitment to evaluation.

Innovation involves producing something that adds new value, and evaluation is the means of assessing what that value is. What design-driven evaluation does is take that process one step further, viewing data collection, sensemaking, decision-making, and action as part of the value chain of a service or product line.

It’s not a new way of evaluating things; it’s a new mindset for understanding the utility of evaluation and its role in supporting sustained innovation and its culture within an organization. It does this by viewing evaluation as a product in its own right and as a service to the organization.

In both cases, the way we approach this kind of evaluation is the way we would approach designing a product and a service. It’s both.

Evaluation as a product

What does an evaluation of something produce? What is the product?

The simple, traditional answer is that an evaluation generates material for presentations or reports based on the merit, worth, and significance of what is being evaluated. A utilization-focused or developmental evaluation might suggest that the product is data that can be used to make decisions and learn.

Design-driven evaluation can do both, but extends our understanding of what the product is. The evaluation itself — the process of paying attention to what is happening in the development and use of a product or service, selecting what is most useful and meaningful, and collecting data on those activities and outcomes — has distinctive value on its own.

Viewed as a product, an evaluation can serve as part of the innovation itself. Consider the tools that generate many of our innovations, from Sharpie markers and Post-it Notes to whiteboards and wheely chairs, to Figma or Adobe Illustrator, to the MacBook Pro or HP Envy PC we type on. The best tools are designed to serve the creative process. There are many markers, computers, software packages, and platforms, but the ones we choose are the ones that serve a purpose well for what we need and what we enjoy (and that includes factoring in constraints) — they are well-designed. Why should an evaluation — a tool in the service of innovation — be any different?

Just like the reams of sticky notes that we generate with ideas serve as a product of the process of designing something new (innovating), so can an evaluation serve this same function.

These products are not just functional; they are stable and often carry a positive emotional appeal (e.g., they look good, feel good, help you feel good). Exceptional products do this while being sustainable, accessible, (usually) affordable, and culturally and environmentally sensitive to the environments in which they are deployed. The best products combine it all.

Evaluations can do this. A design-driven evaluation generates something that is not only useful and used, but attractive. It invites conversation and use, and showcases what is done in the service of creating an innovation by design.

The principles of good product design — designing for use, attraction, interaction, and satisfaction — are applied to an evaluation using this approach. This means selecting methods and tools that fit both function and aesthetic (and don’t divorce the two). It means treating the evaluation design and what it generates (e.g., data) as a product.

Evaluation as a service

The other role of a design-driven evaluation is to treat it as a service and thus, design it as such.

Service design is a distinct area of practice within the field of design that focuses on creating optimal experiences through service.

Designers Marc Stickdorn and Jakob Schneider suggest that service design should be guided by five basic principles:

  1. User-centered, through understanding the user by doing qualitative research;
  2. Co-creative, by involving all relevant stakeholders in the design process;
  3. Sequencing, by partitioning a complex service into separate processes;
  4. Evidencing, by visualizing service experiences and making them tangible;
  5. Holistic, by considering touchpoints in a network of interactions and users.

If we consider these principles in the scope of an evaluation, what we’ll see is something very different from just a report or presentation. Designing evaluation as a service means making a more concerted effort to identify present and potential future uses of an evaluation, understanding the user at the outset, and designing for their needs, abilities, and preferences.

It also involves considering how evaluation can integrate into or complement existing service offerings. For innovators, it positions evaluation as a means of making innovation happen as part of the process and making that process better and more useful.

This goes beyond A/B testing or other forms of ‘testing’ innovations to positioning evaluation as a service to those who are innovating. In developmental evaluations, this means designing evaluation activities — from data collection through to the synthesis, sensemaking, application, and re-design efforts of a program — as a service to the innovation itself.

Designing a mindset

Design-driven evaluation requires the mindset of an innovator and designer with the discipline of an evaluator. It is a way of approaching evaluation differently, one that goes beyond simple use to true service. This approach is not needed for every evaluation. But if you want to generate better use of evaluation results, contribute to better innovations and decision-making, and generate real learning (not learning artifacts), then a mindset that gives evaluation the same care and attention that goes into all the other products and services we engage with matters a great deal.

If we want better, more useful evaluations (and their designs) we need to think and act like designers.

Photo credits: Heading: Cameron Norman
Second Photo by Mark Rabe on Unsplash
Third Photo by Carli Jeen on Unsplash

Tags: evaluation, innovation

Developmental Evaluation is not for you

Developmental evaluation is a powerful tool to support innovation, engage communities, and foster deep learning. While it might be growing in popularity, increasingly in demand, and a key difference-maker for social and technological innovators, it might also not be for you.

Developmental evaluation (DE) is an approach to evaluation that is designed to support innovation and gather data to make sense of things in a complex environment. It is a powerful tool full of promise and many traps and has become increasingly popular in the social, finance, and health sectors. Maybe it’s for you. Maybe it’s not.

Chances are, it’s not.

If you are looking to force an outcome, DE is not for you.

DE might be for you if you are confused, nervous, a little excited, and curious about what it is that you’re doing, how you can make it more sustainable and useful, and interested in working with complexity, not fighting against it.

If you are not interested in learning — really, truly learning — skip the DE and try something else. DE is only good for those individuals and organizations that are serious about learning. This might mean struggling with uncertainty, honestly reflecting on past actions (including all the false starts, non-starts, rough starts, and bad finishes), and envisioning the future while challenging what you believe (and sometimes affirming beliefs, too). A DE prompts you to do all of this, and if that’s not your thing, don’t get into DE.

If you know the end of the story with your innovation before you begin, DE is not for you either.

If the status quo is your thing, DE is not.

Therapists see this all the time. They encounter people who say: “I want to change” and then witness them fight, struggle, deny, and abandon efforts to do the work to make the change happen, because it’s far easier to ask for change than it is to do it. This is OK — this struggle is part of being human. But if you are unwilling to do the work, struggle with it, and truly learn from your efforts, DE is not for you.

If you have the best idea in the world and a plan to change the world with it, DE is probably not for you. DE might get you to re-think parts of your plan, or the whole thing. It’s going to make your expected outcomes less expected and gum up the nice, simple, but wrong picture.

If practice makes perfect, DE is not for you. If practice is more of a vocation, like medicine or meditation — a way of doing the work — then that’s a different story. For a DE practitioner, it’s not about becoming great at something, an improved version of yourself or your organization, or the best in the world. It’s about learning, growing, and evolving (see above).

If you think DE is going to make you better as a person: nope. Just as a 30-year-old is not six times better than a 5-year-old, someone who does DE is no better than they were beforehand. But they may have learned a lot and evolved as an innovator.

If you want something fast, efficient, outcomes-driven, and evidence-based from top-to-bottom don’t even think about DE.

Want to be trendy? Do DE. It’s what the cool kids in evaluation are doing, and if being cool is important to you, definitely get into DE. (Unless you don’t like putting in a lot of work to become proficient in areas of complexity, social and organizational behaviour, many different aspects of evaluation, and even design.)

Lazy? Uncommitted? Allergic to creativity? Undisciplined? Low energy? Have a low tolerance for ambiguity? Then DE is not for you.

If you’re looking for a direct plan, a clear pathway to improvement and betterment, and quantifiable outcomes, DE is not for you.

If innovation has a specific look, feel, ROI, and outcome then you need tools and strategies that will assess all of that – which means you should not engage in DE. DE will only disappoint you. You will be exposed to many things, including possibilities you’d never considered, but they very likely won’t fit your model because, if what you are doing is truly innovative, it’s never really been tried before.

If you are changing the game while playing it, the rules that started won’t apply to what happens when you finish. You can’t start playing chess and wind up playing volleyball and still seek to measure the movements of the Rook, Bishop, or Queen. If you’re not really into game-changing – the kind that’s not about hyperbole and catchphrases — DE is not for you.

If you are short on time, commitment, and resources to bring people together, take time to pause and truly reflect, sit with uncertainty, delight in surprise, exceed your expectations, and sometimes end up disappointed, DE isn’t for you.

If strategy is a plan that you stick to no matter what, then DE is not for you.

If you’ve embraced failure as a mantra, or are afraid of “failure” (which to you means not doing everything you set out to do in the manner you set out to do it), then DE is certainly not for you. The only way to fail at DE is to fail to devote attention to learning.

If you view relationships as transactions, rather than as opportunities to grow and transform, DE is most certainly not for you.

Innovation is about discovery. If you wish to work in ways that are aligned with natural development — the kind we see in our children, pets, gardens, communities, and ourselves – you might find yourself discovering a lot and DE can be a big help. If ‘discovery’ is a code-word for re-packaging what you already have or doing what you’ve always done (be honest with yourself), then DE is a big waste of time.

Can’t handle surprises? Run away from DE and use something else.

If you’re looking to just check off a box because you committed to doing that in your corporate plan, then make your life easy and give DE a pass. If you see organizations as living beings and wish to create value for others in a manner that is consistent with this perspective, then DE could be a powerful ally in that process.

DE is becoming popular, but it is most certainly not for everyone. Maybe not for you, either. Now you have lots of reasons to show why you should try something else.

If you still think DE is for you after all this, let’s connect — because DE seems to suit us at Cense just fine and we can help it to suit you, too.

Photo by Loren Gu on Unsplash

Tags: evaluation

Meaning and metrics for innovation


Metrics are at the heart of evaluating impact and value in products and services, although they are rarely straightforward. Knowing what makes a good metric requires thinking first about what a metric means.

I recently read a story on what makes a good metric from Chris Moran, Editor of Strategic Projects at The Guardian. Chris’s work is about building, engaging, and retaining audiences online so he spends a lot of time thinking about metrics and what they mean.

Chris, with support from many others, outlines the five characteristics of a good metric as being:

  1. Relevant
  2. Measurable
  3. Actionable
  4. Reliable
  5. Readable (less likely to be misunderstood)

(What I liked was that he also pointed to additional criteria that didn’t quite make the cut but, as he suggests, could).

This list was developed in the context of communications initiatives, which is exactly the point we need to consider: context matters when it comes to metrics. Context is also holistic, so we need to consider these five (plus the others?) criteria as a whole if we’re to develop, deploy, and interpret data from these metrics.

As John Hagel puts it: we are moving from the industrial age where standardized metrics and scale dominated to the contextual age.
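To make the “criteria as a whole” point concrete, here is a minimal sketch in Python of how the five characteristics might be applied as a joint screen rather than a menu. The class name, the yes/no scoring, and the pageviews example are all my illustrative assumptions, not part of Moran’s framework:

```python
from dataclasses import dataclass, fields

@dataclass
class MetricCriteria:
    """Moran's five characteristics of a good metric, scored yes/no."""
    relevant: bool
    measurable: bool
    actionable: bool
    reliable: bool
    readable: bool  # less likely to be misunderstood

    def passes(self) -> bool:
        # Treat the criteria holistically: a metric is only as good
        # as its weakest characteristic, so all five must hold.
        return all(getattr(self, f.name) for f in fields(self))

# Hypothetical example: raw pageviews are easy to measure and read,
# but on their own are often not actionable.
pageviews = MetricCriteria(relevant=True, measurable=True,
                           actionable=False, reliable=True, readable=True)
print(pageviews.passes())  # False
```

The design choice worth noting is the `all(...)`: scoring each criterion separately and then requiring the set to hold together mirrors the point that the criteria only make sense as a whole, in context.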

Sensemaking and metrics

Innovation is entirely context-dependent. A new iPhone might not mean much to someone who has had one, but could be transformative to someone who’s never had that computing power in their hand. Home visits by a doctor or healer were once the only way people were treated for sickness (and still are in some parts of the world), and now home visits are novel and represent an innovation in many areas of Western healthcare.

Demographic characteristics are one area where sensemaking is critical when it comes to metrics and measures. Sensemaking is a process of literally making sense of something within a specific context. It’s used when there are no standard or obvious means to understand the meaning of something at the outset; rather, meaning is made through investigation, reflection, and other data. It is a process that involves asking questions about value — and value is at the core of innovation.

For example, identity questions on race, sexual orientation, gender, and place of origin all require intense sensemaking before, during, and after use. Asking these questions gets us to consider: what value is it to know any of this?

How is a metric useful without an understanding of the value in which it is meant to reflect?

What we’ve seen from population research is that failure to ask these questions has left many at the margins without a voice; their experience isn’t captured in the data used to make policy decisions. We’ve seen the opposite when we ask these questions unwisely: strange claims about associations, over-generalizations, and stereotypes formed from data that somehow ‘links’ certain characteristics to behaviours without critical thought. We create policies that exclude because we have data.

The lesson we learn from behavioural science is that, if you have enough data, you can pretty much connect anything to anything. Therefore, we need to be very careful about what we collect data on and what metrics we use.

The role of theory of change and theory of stage

One reason for these strange associations (or their absence) is the lack of a theory of change to explain why any of these variables ought to play a role in explaining what happens. A good theory of change provides a rationale for why something should lead to something else and what might come from it all. It is anchored in data, evidence, theory, and design (which ties it all together).

Metrics are the means by which we assess the fit of a theory of change. What often gets missed is that fit is also bound to time: some metrics fit better at different points in an innovation’s development.

For example, a particular metric might be more useful in later-stage research where there is an established base of knowledge (e.g., when an innovation is mature) than when we are looking at the early formation of an idea. The proof-of-concept stage (i.e., ‘can this idea work?’) is very different from the ‘can this scale?’ stage. To that end, metrics need to be fit to something akin to a theory of stage, which would help explain how an innovation develops at the early stages versus later ones.
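One way to picture a “theory of stage” is as an explicit mapping from stage to the metrics that fit it. The sketch below is a hypothetical illustration in Python; the stage names and every metric listed are my assumptions, chosen only to show the shape of the idea:

```python
# Hypothetical mapping of innovation stage to better-fitting metrics.
# Both the stages and the metrics are illustrative, not prescriptive.
STAGE_METRICS = {
    "proof-of-concept": ["task completion in a pilot", "qualitative user feedback"],
    "early adoption":   ["repeat use", "support requests per user"],
    "scaling":          ["cost per additional user", "retention across sites"],
    "mature":           ["market share", "return on investment"],
}

def candidate_metrics(stage: str) -> list:
    """Return metrics whose fit matches the innovation's stage,
    rather than applying mature-stage metrics everywhere."""
    if stage not in STAGE_METRICS:
        raise ValueError(f"Unknown stage: {stage!r}; "
                         f"expected one of {list(STAGE_METRICS)}")
    return STAGE_METRICS[stage]

print(candidate_metrics("proof-of-concept"))
```

The point of the explicit lookup (and the error on an unknown stage) is the argument above: a metric is not good or bad in itself, but good or bad *for a stage*, so the stage has to be named before the metric is chosen.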

Metrics are useful. Blindly using metrics — or using the wrong ones — can be harmful in ways that might be unmeasurable without the proper thinking about what they do, what they represent, and which ones to use.

Choose wisely.

Photo by Miguel A. Amutio on Unsplash

Tags: evaluation, innovation

Understanding Value in Evaluation & Innovation


Value is literally at the root of the word evaluation, yet it is scarcely mentioned in conversations about innovation and evaluation. It’s time to consider what value really means for innovation and how evaluation provides answers.

Design can be thought of as the discipline — the theory, science, and practice — of innovation. Thus, understanding the value of design is partly about understanding the valuation of innovation. At the root of evaluation is the concept of value. One of the most widely used definitions of evaluation (PDF) is that it is about merit, worth, and significance — with worth being a stand-in for value.

The connection between worth and value in design was discussed in a recent article by Jon Kolko of Modernist Studio. He starts from the premise that many designers conceive of value as the price people will pay for something, and points to the dominant orthodoxy in SaaS applications “where customers can choose between a Good, Better, and Best pricing model. The archetypical columns with checkboxes show that as you increase spending, you ‘get more stuff.’”

Kolko goes on to take a systems perspective on the issue, noting that much of the value created through design is not piecemeal, but aggregated into the experience of whole products and services and not easily divisible into component parts. Value as a function of cost or price breaks down when it treats our communities, customers, and clients as mere commodities that can be bought and sold.

Kolko ends his article with this comment on design value:

Design value is a new idea, and we’re still learning what it means. It’s all of these things described here: it’s cost, features, functions, problem solving, and self-expression. Without a framework for creating value in the context of these parameters, we’re shooting in the dark. It’s time for a multi-faceted strategy of strategy: a way to understand value from a multitude of perspectives, and to offer products and services that support emotions, not just utility, across the value chain.

Talking value

It’s strange that the matter of value is so under-discussed in design, given that creating value is one of its central tenets. Equally perplexing is how little value is discussed as part of the process of creating things or in their final designed form. And since design is really the discipline of innovation, which is the intentional creation of value using something new, evaluation is an important concept in understanding design value.

One of the big questions professional designers wrestle with at the start of any engagement with a client is: “What are you hiring [your product, service, or experience] to do?”

What evaluators ask is: “Did your [product, service, or experience (PSE)] do what you hired it to do?”

“To what extent did your PSE do what you hired it to do?”

“Did your PSE operate as it was expected to?”

“What else did your PSE do that was unexpected?”

“What lessons can we learn from your PSE development that can inform other initiatives and build your capacity for innovation as an organization?”

In short, evaluation is about asking: “What value does your PSE provide and for whom and under what context?”

Value creation, redefined

Without asking the questions above, how do we know value was created at all? Without evaluation, there is no means of claiming that value was generated by a PSE, that expectations were met, or that what was designed was implemented at all.

By asking these questions about value and how we come to know it, innovators are better positioned to design PSEs that generate value for their users, customers, clients, and communities, as well as their organizations, shareholders, funders, and leaders. This redefinition of value as an active concept gives us the opportunity to see value in new places and not waste it.

Image Credit: Value Unused = Waste by Kevin Krejci adapted under Creative Commons 2.0 License via Flickr

Note: If you’re looking to hire evaluation to better your innovation capacity, contact us at Cense. That’s what we do.

Tags: complexity, evaluation, social innovation

Developmental Evaluation’s Traps


Developmental evaluation holds promise for product and service designers looking to understand the process, outcomes, and strategies of innovation and link them to effects. The great promise of DE is also the reason to be most wary of it, and to beware the traps set for the unaware.

Developmental evaluation (DE), when used to support innovation, is about weaving design with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making process that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

This evidence might include the kind of information that helps an organization know when to scale up effort, change direction (“pivot”), or abandon a strategy altogether.

Powerful stuff.

Except, it can also be a trap.

It’s a Trap!

Star Wars fans will recognize the phrase “It’s a Trap!” as one of special — and much parodied — significance. Much like the Rebel fleet’s jeopardized quest to destroy the Death Star in Return of the Jedi, embarking on a DE is no easy or simple task.

DE was developed by Michael Quinn Patton and others working in the social innovation sector in response to the needs of programs operating in conditions of high volatility, uncertainty, complexity, and ambiguity, helping them function better within this environment through evaluation. That meant providing useful data that recognized the context and supported strategic decision-making with rigorous evaluation, rather than using tools ill-suited for complexity and simply doing the ‘wrong thing righter‘.

The following are some of the ‘traps’ I’ve seen organizations fall into when approaching DE. A parallel set of posts exploring the practicalities of these traps is going up on the Cense site, along with tips and tools to avoid and navigate them.

A trap is something that is usually camouflaged and employs some type of lure to draw people into it. It is, by its nature, deceptive and intended to ensnare those that come into it. By knowing what the traps are and what to look for, you might just avoid falling into them.

A different approach, same resourcing

A major trap in going into a DE is thinking that it is just another type of evaluation and thus requires the same resources as a standard evaluation. Wrong.

DE most often requires more resources to design and manage than a standard program evaluation, for many reasons. One of the most important is that DE is about evaluation + strategy + design (the emphasis is on the ‘+’s). A DE budget needs to account for the fact that three activities normally treated separately are now coming together. The total costs may not necessarily be higher (though they often are), but the work required will span multiple budget lines.

This also means that operationally one cannot simply have an evaluator, a strategist, and a program designer work separately. There must be some collaboration and time spent interacting for DE to be useful. That requires coordination costs.

Another big issue is that DE data can be ‘fuzzy’ or ambiguous — even if collected with a strong design and method — because the innovation activity usually has to be contextualized. Further complicating things is that the DE datastream is bidirectional. DE data comes from the program products and process as well as the strategic decision-making and design choices. This mutually influencing process generates more data, but also requires sensemaking to sort through and understand what the data means in the context of its use.

The biggest resource that gets missed? Time. This means not giving enough time to have the conversations about the data to make sense of its meaning. Setting aside regular time at intervals appropriate to the problem context is a must and too often organizations don’t budget this in.

The second? Focus. While a DE approach can capture an enormous wealth of data about the process, outcomes, strategic choices, and design innovations there is a need to temper the amount collected. More is not always better. More can be a sign of a lack of focus and lead organizations to collect data for data’s sake, not for a strategic purpose. If you don’t have a strategic intent, more data isn’t going to help.

The pivot problem

The term pivot comes from the Lean Startup approach and is found in Agile and other product development systems that rely on short-burst, iterative cycles with accompanying feedback. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. Sounds good, right?

It is, except when the results aren’t well grounded in data. DE has given organizations cover for making arbitrary decisions in the name of pivoting when they really haven’t executed well or given things enough time to determine whether a change of direction is warranted. I once heard an educator explain how good his team was at pivoting their strategy for training their clients and students. They were taking a developmental approach to the course (because it was on complexity and social innovation). Yet I knew that the team, a group of highly skilled educators, hadn’t spent nearly enough time coordinating and planning the course.

There is a difference between a presenter who adds something at the last minute to capitalize on an emergent opportunity and improve the quality of the presentation, and one who has not put the time and thought into the work and is rushing at the last minute. One is a pivot in service of excellence; the other is poor execution. The trap is confusing the two.

Fearing success

“If you can’t get over your fear of the stuff that’s working, then I think you need to give up and do something else” – Seth Godin

A truly successful innovation changes things — mindsets, workflows, systems, and outcomes. Innovation affects the things it touches in ways that might not be foreseen. It also means recognizing that things will have to change in order to accommodate the success of whatever innovation you develop. But change can be hard to adjust to even when it is what you wanted.

It’s a strange truth that many non-profits are designed to put themselves out of business. If there were no more political injustices or human rights violations around the world there would be no Amnesty International. The World Wildlife Fund or Greenpeace wouldn’t exist if the natural world were deemed safe and protected. Conversely, there are no prominent NGOs devoted to eradicating polio anymore because we pretty much have… or have we?

Self-sabotage exists for many reasons, including a discomfort with change (staying the same is easier than changing), preservation of status, and a variety of interpersonal, relational reasons, as psychologist Ellen Hendriksen explains.

Seth Godin suggests you need to find something else if you’re afraid of success and that might work. I’d prefer that organizations do the kind of innovation therapy with themselves, engage in organizational mindfulness, and do the emotional, strategic, and reflective work to ensure they are prepared for success — as well as failure, which is a big part of the innovation journey.

DE is a strong tool for capturing success (in whatever form that takes) within the complexity of a situation; the trap is focusing on too many parts, or on parts that aren’t providing useful information. It’s not always possible to know this at the start, but there are things that can be done to hone the focus over time. As the saying goes: when everything is in focus, nothing is in focus.

Keeping the parking brake on

And you may win this war that’s coming
But would you tolerate the peace? – “This War” by Sting

You can’t drive far or well with your parking brake on. If innovation is meant to change systems, you can’t keep the same thinking and structures in place and still expect to move forward. Developmental evaluation is not just for understanding your product or service; it’s also meant to inform the ways in which that entire process influences your organization. They are symbiotic: one affects the other.

Just as we might fear success, we may also fail to prepare for (or tolerate) it when it comes. Success with one goal means having to set new goals. It moves the goal posts. It also means reframing what success means going forward. Sports teams face this problem in reframing their mission after winning a championship. The same is true for organizations.

This is why building a culture of innovation is so important with DE embedded within that culture. Innovation can’t be considered a ‘one-off’, rather it needs to be part of the fabric of the organization. If you set yourself up for change, real change, as a developmental organization, you’re more likely to be ready for the peace after the war is over as the lyric above asks.

Sealing the trap door

Learning — which is at the heart of DE — fails in bad systems. Preventing the traps discussed above requires building a developmental mindset within an organization, not just doing a DE. Without the mindset, it’s unlikely anyone will avoid falling into the traps described above. Change your mind, and you can change the world.

It’s a reminder of the need to put in the work to make change real, and that DE is not just plug-and-play. To quote Martin Luther King Jr.:

“Change does not roll in on the wheels of inevitability, but comes through continuous struggle. And so we must straighten our backs and work for our freedom. A man can’t ride you unless your back is bent.”


For more on how Developmental Evaluation can help you to innovate, contact Cense Ltd and let them show you what’s possible.  

Image credit: Author