Tag: developmental evaluation

evaluation, innovation

The Mindset for Design-Driven Evaluation

What distinguishes design-driven evaluation from other types of utilization-focused evaluation or innovation development is that it views the evaluative act as part of a service offering. It’s not a shift in method, but in mindset.

Evaluation is the innovator’s secret advantage. Any sustained attempt to innovate is driven by good data and systems to make sense of that data. Some systems are better than others, and sometimes the data collected is not particularly great, but look at any organization that consistently develops new products and services that are useful and attractive and you’ll see some commitment to evaluation.

Innovation involves producing something that adds new value, and evaluation is the means of assessing what that value is. Design-driven evaluation takes that process one step further by viewing the process of data collection, sensemaking, decision-making, and action as part of the value chain of a service or product line.

It’s not a new way of evaluating things; it’s a new mindset for how we understand the utility of evaluation and its role in supporting sustained innovation and its culture within an organization. It does this by viewing evaluation as a product in its own right and as a service to the organization.

In both cases, the way we approach this kind of evaluation is the way we would approach designing a product and a service. It’s both.

Evaluation as a product

What does an evaluation of something produce? What is the product?

The simple, traditional answer is that an evaluation generates material for presentations or reports based on the merit, worth, and significance of what is being evaluated. A utilization-focused or developmental evaluation might suggest that the product is data that can be used to make decisions and learn.

Design-driven evaluation can do both, but extends our understanding of what the product is. The evaluation itself — the process of paying attention to what is happening in the development and use of a product or service, selecting what is most useful and meaningful, and collecting data on those activities and outcomes — has distinctive value on its own.

Viewed as a product, an evaluation can serve as a part of the innovation itself. Consider the tools we use to generate many of our innovations, from Sharpie markers and Post-it Notes to whiteboards and wheely chairs to Figma or Adobe Illustrator to the MacBook Pro or HP Envy PC we type on. The best tools are designed to serve the creative process. There are many markers, computers, software packages, and platforms, but the ones we choose are the ones that serve a purpose well for what we need and what we enjoy (and that includes factoring in constraints) — they are well-designed. Why should an evaluation — a tool in the service of innovation — be any different?

Just as the reams of sticky notes we fill with ideas serve as a product of the process of designing something new (innovating), so too can an evaluation serve this function.

These products are not just functional; they are stable and often have a positive emotional appeal (e.g., they look good, feel good, help you to feel good). Exceptional products do this while being sustainable, accessible, (usually) affordable, and culturally and environmentally sensitive to the environments in which they are deployed. The best products combine it all.

Evaluations can do this. A design-driven evaluation generates something that is not only useful and used, but attractive. It invites conversation and use, and it showcases what is done in the service of creating an innovation by design.

The principles of good product design — designing for use, attraction, interaction, and satisfaction — are applied to an evaluation using this approach. This means selecting methods and tools that fit this function and aesthetic (without divorcing the two). It means treating the evaluation design and what it generates (e.g., data) as a product.

Evaluation as a service

The other role of a design-driven evaluation is to treat it as a service and, thus, to design it as one.

Service design is a distinct area of practice within the field of design that focuses on creating optimal experiences through service.

Designers Marc Stickdorn and Jakob Schneider suggest that service design should be guided by five basic principles:

  1. User-centered, through understanding the user by doing qualitative research;
  2. Co-creative, by involving all relevant stakeholders in the design process;
  3. Sequencing, by partitioning a complex service into separate processes;
  4. Evidencing, by visualizing service experiences and making them tangible;
  5. Holistic, by considering touchpoints in a network of interactions and users.

If we consider these principles in the scope of an evaluation, what we’ll see is something very different from just a report or presentation. Designing evaluation as a service means making a more concerted effort to identify present and potential future uses of an evaluation, understanding the user at the outset, and designing for their needs, abilities, and preferences.

It also involves considering how evaluation can integrate into or complement existing service offerings. For innovators, it positions evaluation as a means of making innovation happen as part of the process and making that process better and more useful.

This goes beyond A/B testing or other forms of ‘testing’ innovations to position evaluation as a service to those who are innovating. In developmental evaluations, this means designing evaluation activities — from data collection through to the synthesis, sensemaking, application, and re-design efforts of a program — as a service to the innovation itself.

Designing a mindset

Design-driven evaluation requires the mindset of an innovator and designer with the discipline of an evaluator. It is a way of approaching evaluation differently, one that goes beyond simple use to true service. This is not an approach that is needed for every evaluation, either. But if you want to generate better use of evaluation results, contribute to better innovations and decision-making, and generate real learning (not learning artifacts), then a mindset that gives evaluation the same care and attention that goes into all the other products and services we create matters a great deal.

If we want better, more useful evaluations (and their designs) we need to think and act like designers.

Photo credits: Heading: Cameron Norman
Second Photo by Mark Rabe on Unsplash
Third Photo by Carli Jeen on Unsplash

evaluation, innovation

Developmental Evaluation is not for you

Developmental evaluation is a powerful tool to support innovation, engage communities, and foster deep learning. While it might be growing in popularity, increasingly in demand, and a key difference-maker for social and technological innovators, it might also not be for you.

Developmental evaluation (DE) is an approach to evaluation that is designed to support innovation and gather data to make sense of things in a complex environment. It is a powerful tool full of promise and many traps and has become increasingly popular in the social, finance, and health sectors. Maybe it’s for you. Maybe it’s not.

Chances are, it’s not.

If you are looking to force an outcome, DE is not for you.

DE might be for you if you are confused, nervous, a little excited, and curious about what it is that you’re doing, how you can make it more sustainable and useful, and interested in working with complexity, not fighting against it.

If you are not interested in learning — really, truly learning — skip the DE and try something else. DE is only good for those individuals and organizations that are serious about learning. This might mean struggling with uncertainty, honestly reflecting on past actions (including all the false starts, non-starts, rough starts, and bad finishes), envisioning the future, and challenging what you believe (and sometimes affirming beliefs, too). A DE prompts you to do all of this, and if that’s not your thing, don’t get into DE.

If you know the end of the story with your innovation before you begin, DE is not for you either.

If the status quo is your thing, DE is not.

Therapists see this all the time. They encounter people who say: “I want to change” and then witness them fight, struggle, deny, and abandon efforts to do the work to make the change happen, because it’s far easier to ask for change than it is to do it. This is OK — this struggle is part of being human. But if you are unwilling to do the work, struggle with it, and truly learn from your efforts, DE is not for you.

If you have the best idea in the world and a plan to change the world with it, DE is probably not for you. DE might get you to re-think parts of your plan or the whole thing. It’s going to make your expected outcomes less expected and gum up the nice, simple, but wrong picture.

If practice makes perfect, DE is not for you. If practice is more of a vocation, like medicine or meditation — a way of doing the work — then that’s a different story. For a DE practitioner, it’s not about becoming great at something, an improved version of yourself or your organization, or the best in the world. It’s about learning, growing, and evolving (see above).

If you think DE is going to make you better as a person: nope. Just as a 30-year-old is not six times better than a 5-year-old, someone who does DE is no better than they were beforehand. But they may have learned a lot and evolved as an innovator.

If you want something fast, efficient, outcomes-driven, and evidence-based from top-to-bottom don’t even think about DE.

Want to be trendy? Do DE. It’s what the cool kids are doing in evaluation, and if being cool is important to you – definitely get into DE. (Unless you don’t like putting in a lot of work to become proficient in areas of complexity, social and organizational behaviour, many different aspects of evaluation, and even design.)

Lazy? Uncommitted? Allergic to creativity? Undisciplined? Low energy? Have a low tolerance for ambiguity? Then DE is not for you.

If you’re looking for a direct plan, a clear pathway to improvement and betterment, and quantifiable outcomes, DE is not for you.

If innovation has a specific look, feel, ROI, and outcome then you need tools and strategies that will assess all of that – which means you should not engage in DE. DE will only disappoint you. You will be exposed to many things, including possibilities you’d never considered, but they very likely won’t fit your model because, if what you are doing is truly innovative, it’s never really been tried before.

If you are changing the game while playing it, the rules you started with won’t apply to what happens when you finish. You can’t start playing chess, wind up playing volleyball, and still seek to measure the movements of the Rook, Bishop, or Queen. If you’re not really into game-changing – the kind that’s not about hyperbole and catchphrases — DE is not for you.

If you are short on time, commitment, and resources to bring people together, take time to pause and truly reflect, sit with uncertainty, delight in surprise, exceed your expectations, and sometimes end up disappointed, DE isn’t for you.

If strategy is a plan that you stick to no matter what, then DE is not for you.

If you’ve embraced failure as a mantra or are afraid of “failure” (which to you means not doing everything you set out to do in the manner you set out to do it), then DE is certainly not for you. The only way you will fail at DE is by failing to devote attention to learning.

If you view relationships as transactions, rather than as opportunities to grow and transform, DE is most certainly not for you.

Innovation is about discovery. If you wish to work in ways that are aligned with natural development — the kind we see in our children, pets, gardens, communities, and ourselves – you might find yourself discovering a lot and DE can be a big help. If ‘discovery’ is a code-word for re-packaging what you already have or doing what you’ve always done (be honest with yourself), then DE is a big waste of time.

Can’t handle surprises? Run away from DE and use something else.

If you’re looking to just check off a box because you committed to doing that in your corporate plan, then make your life easy and give DE a pass. If you see organizations as living beings and wish to create value for others in a manner that is consistent with this perspective, then DE could be a powerful ally in that process.

DE is becoming popular, but it is most certainly not for everyone. Maybe not for you, either. Now you have lots of reasons to show why you should try something else.

If you still think DE is for you after all this, let’s connect — because DE seems to suit us at Cense just fine and we can help it to suit you, too.

Photo by Loren Gu on Unsplash

psychology

The Developmental Psychology of Organizations

Organizations start change somewhere

Every living thing has a journey that starts somewhere and ends eventually. Our ability to see this, understand it, and apply what we know about how humans grow and develop (as individuals and organizations) is what helps us determine how this journey unfolds and where it ends up.

The psychology of individuals is a complicated affair that involves understanding a variety of matters, from personal and family history to genetics, cultural context, education, and social situation. While all of these contribute to who we are as people, the degree and mix of their influence differs from person to person. We are each the product of a collection of forces that combine in various ways, and this holistic complexity is what makes understanding how we change a challenge.

For example, some of us might have behaviours and preferences associated with a certain personality type (extroverted or introverted) and find that quality to be relatively stable across the lifespan. While there are times we might exhibit qualities of another type, those are more situational than stable. For those who are more of an ambivert, identification with a particular preference might be more challenging. Whatever investment you place in this kind of personality assessment, what is important is that the stability and consistency of certain characteristics are what largely shape our identity to others (and ourselves). It’s what makes us ‘us’.

From Individuals to Organizations

It has been argued that organizations exhibit much the same kind of characteristic habits of their own, aggregating the characteristics of those within them and leading them to various degrees. Personality theory has been applied to organizational behaviour as a means of understanding how certain actions, activities, habits, and patterns form within organizations, and what their implications are. This involves taking ideas developed for individuals and applying them to groups, and the implications of doing so are considerable.

If we are to take seriously the idea that organizations are similar to humans, there are significant implications for the way in which we engage in organizational change efforts. Much of the research on organizational change is tied to the development and implementation of a strategy. Strategy, in most conventional applications, is an expression of intent manifest through specific choices of focus and action. This approach rests largely on a cognitive rational model of change (pdf), where information (e.g., data, ‘facts’, perceptions, beliefs, and opinion) guides an assessment of the situation that forms the basis for a plan of action. The idea is that we see and learn things, then plan and act according to that knowledge.

Most individual behaviour change models are founded on this approach, with thinking preceding action in a relatively rational, logical manner based on an objective assessment of the facts and evidence (with some emotional contributions here and there to make life interesting). So if we tie organizational change to the same kinds of mechanisms and models that we use to understand individuals, should we not apply similar modes of change facilitation? We do — but it’s how we do it that might be the problem.

Change Theory to Change Reality

One of the most vexing (and little discussed) issues for behavioural scientists is that the application of the cognitive rational model to personal, organizational, and social change has a rather unimpressive track record. A look at how people change finds that relatively little change comes from rationally reviewing a threat or opportunity and planning out a strategy (never mind executing the planned strategy as envisioned). Even when the effects are modest, factors such as the match between the person, the technique or intervention approach, and the problem being addressed continue to mediate the outcomes.

What happens when our theories and our practices don’t really work? Or at least don’t work as well as we think they do?

The answer — using the very argument that we are looking to disprove — is that we will address the matter as many individuals might: with disagreement, resistance, and denial.

The field of organizational decision-making and innovation is littered with case studies that show how, in the face of overwhelming evidence to the contrary, organizations (like many individuals) resist change. Whether it was the slowness with which those on the Titanic accepted the fact that their ship would sink after hitting the iceberg (never mind the perception that the ship was invulnerable to begin with) or companies that persist with a strategy that doesn’t match changing times (e.g., Kodak and its photographic film business, Sears and its retail model), the inability to see, and the unwillingness to perceive or accept, changing situations has led to major problems.

These problems are a matter of failing to change or adapt. To quote from The Leopard:

If we want things to stay as they are, things will have to change

Change is something we need to do even if that is simply to maintain the status quo.

Person-Centred Organizational Change

Erik Erikson, the German-American psychoanalyst whose work focused on identity formation and development, was among the few to challenge the belief that people’s essential character was immutable and resistant to change. (The dominant view was that thinking and behaviour could change, but not ‘how one was’ as a person.) He did, however, acknowledge that changing who we are is not easy and takes a lifetime. This flies in the face of the dominant thinking in Western societies that we can make dramatic changes in an instant.

While talk shows and popular self-help books are filled with stories of dramatic transformation and inspiration about how you can change everything in an instant, the truth is that these cases are outliers (and often exaggerations) or misrepresentations. Much like the artist who ‘breaks out’ and becomes an ‘overnight sensation’, the journey to stardom is usually a long one that follows a Pareto distribution: a long, slow climb over time followed by a very quick rise at the end. What is misread in these success stories is that the rapid change is the product of a long, protracted build-up.

While there are some things that do follow this pattern, much change is also linear and progressive. We see this in the work of another Ericsson: Anders Ericsson. His work is widely cited (and mis-cited) as being behind the ‘10,000 Hour Rule’, which suggests that expertise — a change from an unskilled novice to a skilled expert — is developed over that much time of practice. While the time itself is important, what is often missed in citations of this work is that the key is deliberate practice (pdf), which makes all the difference.

If we extrapolate from the work of both Erikson and Ericsson, we might develop a model of behaviour change that looks quite different from what we have at present. Instead of five-year plans, strategic goals, and inspirational visions of the future, we might be better off delving into an organization’s past, its formation, and its core beliefs and personality, and spending more time looking at what it is already doing than at what it seeks to do.

Developmental Organizations

We might then find what the organization practices deliberately day in and day out, and emphasize ways to amplify the feedback that helps people learn deliberately and consistently. We might take these lessons — much like the small, tiny adjustments that expert violinists, athletes, and surgeons make to hone their craft — and make them visible and build on them. We would look upon organizations as developing organizations, using approaches that fit with them developmentally (e.g., developmental evaluation). We would treat organizations like we would people.

Which is kind of funny because organizations are made of people. That’s some change.

Photo by Stanislav Kondratiev on Unsplash

education & learning, evaluation

Learning: The Innovators’ Guaranteed Outcome

Innovation involves bringing something new into the world and that often means a lot of uncertainty with respect to outcomes. Learning is the one outcome that any innovation initiative can promise if the right conditions are put into place. 

Innovation — the act of doing something new to produce value — in human systems is fraught with complications from the standpoint of evaluation, given that the outcomes are not always certain, the processes aren’t standardized (or even set), and the relationship between the two is often in an ongoing state of flux. And yet, evaluation is of enormous importance to innovators looking to maximize benefit, minimize harm, and seek solutions that can potentially scale beyond their local implementation.

Non-profits and social innovators are particularly vexed by evaluation because there is an often unfair expectation that their products, services, and programs make a substantial change to social issues such as poverty, hunger, employment, chronic disease, and the environment (to name a few). These issues are large and complex; no actor has complete ownership or control over them, yet they require some form of action, individually and collectively.

What is an organization to do or expect? What can they promise to funders, partners, and their stakeholders? Apart from what might be behavioural or organizational outcomes, the one outcome that an innovator can guarantee — if they manage themselves right — is learning.

Learning as an Outcome

For learning to take place, a few things need to be included in any innovation plan. The first is some form of data capture of the activities undertaken in the design of the innovation. This is often the first hurdle that many organizations face, because designers are notoriously bad at showing their work. Innovators (designers) need to capture what they do and what they produce along the way. This might include false starts, stops, ‘failures’, and half-successes, which are all part of the innovation process. Documenting what happens between idea and creation is critical.

Secondly, there needs to be some mechanism to attribute activities and actions to indicators of progress. Change can only be detected in relation to something else, so in the process of innovation we need to be able to compare events, processes, activities, and products at different stages. The selection of some of these indicators might be arbitrary at first, but as time moves along it becomes easier to know whether things like a stop or start are really just ‘pauses’ or whether they are pivots or changes in direction.
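To make these two requirements concrete, here is a minimal sketch in Python of what capturing activities and tying them to indicators might look like. The class, field names, and example entries are all invented for illustration; this is not a prescribed DE toolset.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical, minimal log of innovation activities. All names here are
# illustrative assumptions, not part of any formal evaluation framework.

@dataclass
class Activity:
    when: date
    what: str        # what was done, including stops and false starts
    produced: str    # what it generated: a sketch, a prototype, a decision
    indicators: list = field(default_factory=list)  # progress indicators it speaks to

log = []  # the running record of what happens between idea and creation

def record(when, what, produced, indicators):
    """Capture an activity, however small or 'failed'."""
    log.append(Activity(when, what, produced, list(indicators)))

def history(indicator):
    """Pull every activity touching an indicator, so change can be
    detected in relation to earlier entries rather than in isolation."""
    return [a for a in log if indicator in a.indicators]

# Even a false start gets recorded, because it is part of the process.
record(date(2019, 3, 1), "onboarding script v1 test", "abandoned draft", ["user comprehension"])
record(date(2019, 4, 2), "onboarding script v2 test", "working draft", ["user comprehension"])
print(len(history("user comprehension")))  # 2 entries to compare across stages
```

The value here is less the code than the discipline: every stop, start, and half-success gets an entry, and each entry points to the indicators it speaks to, so a later ‘pause’ can be judged against what came before.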

Learning as organization

Andrew Taylor and Ben Liadsky from Taylor Newberry Consulting recently wrote a great piece on the American Evaluation Association’s AEA 365 blog outlining a simple approach to asking questions about learning outcomes. Writing about their experience working with non-profits and grantmakers, they comment on how evaluation and learning require creating a culture that supports the two in tandem:

Given that organizational culture is the soil into which evaluators hope to plant seeds, it may be important for us to develop a deeper understanding of how learning culture works and what can be done to cultivate it.

What Andrew and Ben speak of is the need to create the environment in which learning can occur from the start. Some of that is stirred by asking the kinds of critical questions they point out in their article. These include identifying whether there are goals for learning in the organization and what kind of time and resources are invested in regularly gathering people together to talk about the work that is done. This is the third big part of evaluating for learning: create the culture for it to thrive.

Creating Consciousness

It’s often said that learning is as natural as breathing, but if that were true, much more would be gained from innovation than there is. Just like breathing, learning can take place passively and it can be manipulated or controlled. In both cases, there is a need to create a consciousness around what ‘lessons’ abound.

Evaluation serves to make the unconscious conscious. By paying attention to — being mindful of — what is taking place and linking that to innovation at the level of the organization (not just the individual), evaluation can be a powerful tool to aid the process of taking new ideas forward. While we cannot always guarantee that a new idea will transform a problem into a solution, we can ensure that we learn in our effort to make change happen.

The benefit of learning is that it can scale. Many innovations can’t, but learning is something that can readily be added to and built on, and it transforms the learner. In many ways, learning is the ultimate outcome. So next time you look to undertake an innovation, make sure to evaluate it and build in the kinds of questions that help ensure that, no matter what the risks are, you can assure yourself a positive outcome.

Image Credit: Rachel on Unsplash

evaluation

Meaning and metrics for innovation


Metrics are at the heart of evaluating impact and value in products and services, although they are rarely straightforward. Deciding what makes a good metric requires first thinking about what a metric means.

I recently read a story on what makes a good metric from Chris Moran, Editor of Strategic Projects at The Guardian. Chris’s work is about building, engaging, and retaining audiences online so he spends a lot of time thinking about metrics and what they mean.

Chris – with support from many others — outlines the five characteristics of a good metric as being:

  1. Relevant
  2. Measurable
  3. Actionable
  4. Reliable
  5. Readable (less likely to be misunderstood)

(What I liked was that he also pointed to additional criteria that didn’t quite make the cut but, as he suggests, could).

This list was developed in the context of communications initiatives, which is exactly the point we need to consider: context matters when it comes to metrics. Context also is holistic, thus we need to consider these five (plus the others?) criteria as a whole if we’re to develop, deploy, and interpret data from these metrics.
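As a thought experiment, those five criteria can be treated as a simple screening checklist for any candidate metric. The sketch below is hypothetical; the function and the pageviews example are invented here, not something Chris proposes.

```python
# The five criteria come from the list above; the pass/fail scheme is an
# illustrative assumption, since real judgments are contextual, not boolean.

CRITERIA = ["relevant", "measurable", "actionable", "reliable", "readable"]

def screen(metric_name, ratings):
    """A candidate metric passes only if it holds up on every criterion,
    reflecting the point that the criteria work as a whole."""
    missing = [c for c in CRITERIA if not ratings.get(c, False)]
    if missing:
        print(f"{metric_name}: fails on {', '.join(missing)}")
        return False
    print(f"{metric_name}: worth considering in this context")
    return True

# Pageviews might be measurable and reliable for an editor, yet not actionable.
screen("pageviews", {"relevant": True, "measurable": True,
                     "actionable": False, "reliable": True, "readable": True})
```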

As John Hagel puts it: we are moving from the industrial age where standardized metrics and scale dominated to the contextual age.

Sensemaking and metrics

Innovation is entirely context-dependent. A new iPhone might not mean much to someone who has had one, but could be transformative to someone who’s never had that computing power in their hand. Home visits by a doctor or healer were once the only way people were treated for sickness (and still are in some parts of the world); now home visits are novel and represent an innovation in many areas of Western healthcare.

Demographic characteristics are one area where sensemaking is critical when it comes to metrics and measures. Sensemaking is a process of literally making sense of something within a specific context. It’s used when there are no standard or obvious means of understanding the meaning of something at the outset; rather, meaning is made through investigation, reflection, and other data. It is a process that involves asking questions about value — and value is at the core of innovation.

For example, identity questions on race, sexual orientation, gender, and place of origin all require intense sensemaking before, during, and after use. Asking these questions gets us to consider: what value is it to know any of this?

How is a metric useful without an understanding of the value it is meant to reflect?

What we’ve seen from population research is that failure to ask these questions has left many at the margins without a voice — their experience isn’t captured in the data used to make policy decisions. We’ve seen the opposite when we ask these questions unwisely: strange claims about associations, over-generalizations, and stereotypes formed from data that somehow ‘links’ certain characteristics to behaviours without critical thought. We create policies that exclude because we have data.

The lesson we learn from behavioural science is that, if you have enough data, you can pretty much connect anything to anything. Therefore, we need to be very careful about what we collect data on and what metrics we use.

The role of theory of change and theory of stage

One reason for these strange associations (or absence) is the lack of a theory of change to explain why any of these variables ought to play a role in explaining what happens. A good, proper theory of change provides a rationale for why something should lead to something else and what might come from it all. It is anchored in data, evidence, theory, and design (which ties it together).

Metrics are the means by which we can assess the fit of a theory of change. What often gets missed is that fit is also bound to context in time: some metrics fit better at different points in an innovation’s development.

For example, a particular metric might be more useful in later-stage research where there is an established base of knowledge (e.g., when an innovation is mature) than in the early formation of an idea. The proof-of-concept stage (i.e., ‘can this idea work?’) is very different from the ‘can this scale?’ stage. To that end, metrics need to be fit to something akin to a theory of stage, which would help explain how an innovation might develop at the early stage versus later ones.
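One way to picture such a theory of stage is as an explicit mapping from an innovation’s stage to the metrics that fit it. The sketch below is speculative; the stage names and example metrics are made up for illustration.

```python
# Invented stage names and metrics, purely to illustrate fitting
# metrics to an innovation's stage of development.

STAGE_METRICS = {
    "proof of concept": ["does it work at all?", "qualitative user reactions"],
    "early adoption":   ["repeat use", "task completion rate"],
    "scaling":          ["cost per user", "retention across new contexts"],
}

def metrics_for(stage):
    """Return the metrics that fit the current stage; judging an
    early-stage idea by scaling metrics is a mismatch, not rigour."""
    return STAGE_METRICS.get(stage, [])

print(metrics_for("proof of concept"))
```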

Metrics are useful. Blindly using metrics — or using the wrong ones — can be harmful in ways that might be unmeasurable without the proper thinking about what they do, what they represent, and which ones to use.

Choose wisely.

Photo by Miguel A. Amutio on Unsplash

behaviour change, business, design thinking

How do we sit with time?


Organizational transformation efforts from culture change to developmental evaluation all depend on one ingredient that is rarely discussed: time. How do we sit with this and avoid the trap of aspiring for greatness while failing to give it the time necessary to make change a reality? 

Toolkits are a big hit with those looking to create change. In my years of work with organizations large and small supporting behaviour change, innovation, and community development, few terms light up people’s faces more than “toolkit”. Usually, that term is mentioned by someone other than me, but it doesn’t stop the palpable excitement at the prospect of having a set of tools that will solve a complex problem.

Toolkits work with simple problems. A hammer works well with nails. Drills are good at making holes. With enough tools and some expertise, you can build a house. Organizational development or social change is a complex challenge where tools don’t have the same linear effect. A tool — a facilitation technique, an assessment instrument, a visualization method — can support change-making, but the application and potential outcome of these tools will always be contextual.

Tools and time

My experience has been that people will go to great lengths to acquire tools yet put comparatively little effort into using them. A body of psychological research has shown there are differences between goals, the implementation intentions behind them, and the actual achievement of those goals. In other words: desiring change, planning and intending to make a change, and actually doing something are all different.

Tools are proxies for this issue in many ways: having tools doesn’t mean they either get used or that they actually produce change. Anyone in the fitness industry knows that the numbers between those who try a workout, those who buy a membership to a club, and those who regularly show up to workout are quite different.

Or consider the Japanese term Tsundoku, which loosely translates into the act of acquiring reading materials and letting them pile up in one’s home without reading them.

But tools are stand-ins for something far more important and powerful: time.

The pursuit of tools and their use is often hampered because organizations do not invest the time to learn, appropriately apply, and refine these tools, or to make sense of the products that come through them.

A (false) artifact of progress


Consider the book buying or borrowing example above: we calculate the cost of the book when really we ought to price out the time required to read it. Or, in the case of practical non-fiction, the cost to read it and apply its lessons.

Yet a shelf filled with books provides the appearance of having the knowledge contained within, despite no evidence that its contents have been read. This is the same issue with tools: once they are acquired, it’s easy to assume the work is largely done. I’ve seen this firsthand with people doing what the Buddhist phrase decries:

“Do not confuse the finger pointing to the moon for the moon itself”

It’s the same confusion we see between having data or models and the reality they represent.

These things all represent artifacts of progress, and a false equation: more books or data or better models do not equal more knowledge. But showing that you have more of something tangible is a seductive proxy. Time has no proxy; that’s the biggest problem.

Time just disappears, is spent, is used, or whatever metaphor you choose to express it. Time is about Kairos or Chronos, the opportune moments or the sequence of them, but in either case they bear no clear markers.

Creating time markers

There are some simple tricks to create the same accumulation effect in time-focused work — tools often used to support developmental evaluation and design. Innovation is as much about the process as it is the outcome when it comes to marking effort. The temptation is to focus on the products — the innovations themselves — and lose what was generated to get there. Here are some ways to change that.

  1. Timelines. Creating live (regular) recordings of the key activities being engaged in and connecting them together in a timeline is one way to show the journey from idea to innovation. It also provides a sober reminder of the effort and time required to go through the various design cycles toward generating a viable prototype.
  2. Evolutionary Staging. Document the prototypes created through photographs, video, or even showcasing versions (in the case of a service or policy where the visual element isn’t as prominent). This is akin to the March of Progress image used to show human evolution. By capturing these things and noting the time and timing of what is generated, you create an artifact that shows the time that was invested and what was produced from that investment. It’s a way to honour the effort put toward innovation.
  3. Quotas & Time Targets. I’m usually reluctant to prescribe a specific amount of time one should spend on reflection and innovation-related sensemaking, but it’s evident from the literature that goals, targets, and quotas work as effective motivators for some people. If you generate a realistic set of targets for thoughtful work, this can be something to aspire to and use to drive activity. By tracking the time invested in sensemaking, reflection, and design you can better account for what was done and also create a marker that makes time seem more tangible (a sketch of this idea appears after the next paragraph).

These are three ways to make time visible although it’s important to remember that the purpose isn’t to just accumulate time but to actually sit with it.
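For the third marker, even a plain running tally against a target can make time tangible. A minimal sketch follows, assuming an invented two-hours-per-week reflection target.

```python
from datetime import date

# The weekly target is an invented example, not a recommended quota.
WEEKLY_TARGET_HOURS = 2.0
sessions = []  # (date, hours, note): time actually spent, not just scheduled

def log_session(when, hours, note):
    """Record a reflection or sensemaking session as it happens."""
    sessions.append((when, hours, note))

def progress():
    """Summarize logged time against the target, creating a visible marker."""
    total = sum(hours for _, hours, _ in sessions)
    return f"{total:.1f}h logged against a {WEEKLY_TARGET_HOURS:.1f}h/week target"

log_session(date(2019, 5, 6), 1.0, "debrief on prototype test")
log_session(date(2019, 5, 9), 0.5, "timeline review")
print(progress())  # 1.5h logged against a 2.0h/week target
```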

All the tricks and tools won’t bring the benefit of what time can offer to an organization willing to invest in it, mindfully. Except, perhaps, a clock.

Try these out with some simple tasks. Another approach is to treat time like any other resource: budget it. Set aside the time in a calendar by booking key reflective activities in just as you would anything else. To do this, and to keep to it, requires leadership and the organizational supports necessary to ensure that learning can take place. Consider what is keeping you from taking or making the time to learn, share those thoughts with your peers, and then consider how you might re-design what you do and how you do it to support that learning.

Take time for that, and you’re on your way to something better.


If you’re interested in learning more about how to do this practically, using data, and designing the conditions to support innovation, contact me. This is the kind of stuff that I do. 


complexity, evaluation, social innovation

Developmental Evaluation’s Traps


Developmental evaluation holds promise for product and service designers looking to understand the process, outcomes, and strategies of innovation and link them to effects. The great promise of DE is also the reason to be most wary of it and to beware the traps set for those unaware.

Developmental evaluation (DE), when used to support innovation, is about weaving design with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making process that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

This evidence might include the kind of information that helps an organization know when to scale up effort, change direction (“pivot”), or abandon a strategy altogether.

Powerful stuff.

Except, it can also be a trap.

It’s a Trap!

Star Wars fans will recognize the phrase “It’s a Trap!” as one of special — and much parodied — significance. Much like the Rebel fleet’s jeopardized quest to destroy the Death Star in Return of the Jedi, embarking on a DE is no easy or simple task.

DE was developed by Michael Quinn Patton and others working in the social innovation sector in response to the needs of programs operating in areas of high volatility, uncertainty, complexity, and ambiguity, to help them function better within this environment through evaluation. This meant providing the kind of useful data that recognized the context and allowed for strategic decision-making with rigorous evaluation, rather than using tools that are ill-suited for complexity to simply do the ‘wrong thing righter’.

The following are some of the ‘traps’ that I’ve seen organizations fall into when approaching DE. A parallel set of posts exploring the practicalities of these traps is going up on the Cense site, along with tips and tools for avoiding and navigating them.

A trap is something that is usually camouflaged and employs some type of lure to draw people into it. It is, by its nature, deceptive and intended to ensnare those that come into it. By knowing what the traps are and what to look for, you might just avoid falling into them.

A different approach, same resourcing

A major trap in going into a DE is thinking that it is just another type of evaluation that requires the same resources as a standard evaluation. Wrong.

DE most often requires more resources to design and manage than a standard program evaluation, for many reasons. One of the most important is that DE is about evaluation + strategy + design (the emphasis is on the ‘+’s). In a DE budget, one needs to account for the fact that three activities that were normally treated separately are now coming together. It may not mean that the costs are necessarily more (they often are), but the work required will span multiple budget lines.

This also means that operationally one cannot simply have an evaluator, a strategist, and a program designer work separately. There must be some collaboration and time spent interacting for DE to be useful. That requires coordination costs.

Another big issue is that DE data can be ‘fuzzy’ or ambiguous — even if collected with a strong design and method — because the innovation activity usually has to be contextualized. Further complicating things, the DE datastream is bidirectional: DE data comes from the program products and process as well as from the strategic decision-making and design choices. This mutually influencing process generates more data, but also requires sensemaking to sort through and understand what the data means in the context of its use.

The biggest resource that gets missed? Time. This means not giving enough time to the conversations about the data that make sense of its meaning. Setting aside regular time at intervals appropriate to the problem context is a must, and too often organizations don’t budget it in.

The second? Focus. While a DE approach can capture an enormous wealth of data about the process, outcomes, strategic choices, and design innovations there is a need to temper the amount collected. More is not always better. More can be a sign of a lack of focus and lead organizations to collect data for data’s sake, not for a strategic purpose. If you don’t have a strategic intent, more data isn’t going to help.

The pivot problem

The term pivot comes from the Lean Startup approach and is found in Agile and other product development systems that rely on short-burst, iterative cycles with accompanying feedback. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. Sounds good, right?

It is, except when the results aren’t well-grounded in data. DE has given cover to organizations for making arbitrary decisions based on the idea of pivoting when they really haven’t executed well or given things enough time to determine whether a change of direction is warranted. I once heard an educator explain how good his team was at pivoting its strategy for training clients and students; they were taking a developmental approach to the course (because it was on complexity and social innovation). Yet I knew that the team — a group of highly skilled educators — hadn’t spent nearly enough time coordinating and planning the course.

There are times when a presenter puts something into a presentation at the last minute to capitalize on what has emerged from the situation and add to its quality, and there are times when someone has not put the time and thought into what they are doing and is rushing at the last minute. One is a pivot in the service of excellence; the other is a failure to execute properly. The trap is confusing the two.

Fearing success

“If you can’t get over your fear of the stuff that’s working, then I think you need to give up and do something else” – Seth Godin

A truly successful innovation changes things — mindsets, workflows, systems, and outcomes. Innovation affects the things it touches in ways that might not be foreseen. It also means recognizing that things will have to change in order to accommodate the success of whatever innovation you develop. But change can be hard to adjust to even when it is what you wanted.

It’s a strange truth that many non-profits are designed to put themselves out of business. If there were no more political injustices or human rights violations around the world, there would be no Amnesty International. The World Wildlife Fund or Greenpeace wouldn’t exist if the natural world were deemed safe and protected. Conversely, there are no prominent NGOs devoted to eradicating polio anymore because we pretty much have… or did we?

Self-sabotage exists for many reasons including a discomfort with change (staying the same is easier than changing), preservation of status, and a variety of inter-personal, relational reasons as psychologist Ellen Hendrikson explains.

Seth Godin suggests you need to find something else if you’re afraid of success and that might work. I’d prefer that organizations do the kind of innovation therapy with themselves, engage in organizational mindfulness, and do the emotional, strategic, and reflective work to ensure they are prepared for success — as well as failure, which is a big part of the innovation journey.

DE is a strong tool for capturing success (in whatever form that takes) within the complexity of a situation; the trap is when the focus is on too many parts, or on parts that aren’t providing useful information. It’s not always possible to know this at the start, but there are things that can be done to hone the focus over time. As the saying goes: when everything is in focus, nothing is in focus.

Keeping the parking brake on

And you may win this war that’s coming
But would you tolerate the peace? – “This War” by Sting

You can’t drive far or well with your parking brake on. Likewise, if innovation is meant to change systems, you can’t keep the same thinking and structures in place and still expect to move forward. Developmental evaluation is not just for understanding your product or service; it’s also meant to inform the ways in which that entire process influences your organization. They are symbiotic: one affects the other.

Just as we might fear success, we may also fail to prepare for (or tolerate) it when it comes. Success with one goal means having to set new goals; it moves the goalposts. It also means that one needs to reframe what success means going ahead. Sports teams face this problem in reframing their mission after winning a championship. The same is true for organizations.

This is why building a culture of innovation is so important with DE embedded within that culture. Innovation can’t be considered a ‘one-off’, rather it needs to be part of the fabric of the organization. If you set yourself up for change, real change, as a developmental organization, you’re more likely to be ready for the peace after the war is over as the lyric above asks.

Sealing the trap door

Learning — which is at the heart of DE — fails in bad systems. Preventing the traps discussed above requires building a developmental mindset within an organization along with doing a DE. Without that mindset, it’s unlikely anyone will avoid falling into the traps described above. Change your mind, and you can change the world.

It’s a reminder of the need to put in the work to make change real, and that DE is not just plug-and-play. To quote Martin Luther King Jr.:

“Change does not roll in on the wheels of inevitability, but comes through continuous struggle. And so we must straighten our backs and work for our freedom. A man can’t ride you unless your back is bent.”


For more on how Developmental Evaluation can help you to innovate, contact Cense Ltd and let them show you what’s possible.  

Image credit: Author